High-Speed Digitizers


Fetch and peak detection on all channels of PXI-5105 with 4M record... HELP!

Solved!

Dear colleagues!

 

Please help me improve the performance of my application (see the attachment), and sorry for my English.

 

So, my task is to fetch and run peak detection on all (eight) channels of a PXI-5105, with a 4M record and a 4 MS/s sample rate, in a 1 s loop...

 

The inputs of all my channels are wired to NaI detectors with pulse widths of 0.5...1 µs (really) and pulse rates from 0 kHz up to no more than 40 kHz.

 

Why did I select a 4M record and 4 MS/s sample rate specifically? Because I previously tested the PXI-5105 with a generator producing 40 kHz, 0.5 µs wide pulses. It works fine, and peak detection reports 40,000 pulses/sec to me. If I set the record length and sample rate lower than 4M, it does not work. In my honest opinion, a 4M record and a 4 MS/s sample rate are the very minimum settings.
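[Editor's note: a back-of-the-envelope check, sketched in Python since LabVIEW diagrams can't be shown as text, of why lower sample rates miss these pulses — at 4 MS/s a 0.5 µs pulse spans only about 2 samples, so there is almost no margin to reduce the rate:]

```python
# Rough arithmetic: how many samples land on one detector pulse.
SAMPLE_RATE = 4_000_000   # S/s, as chosen in the post
PULSE_WIDTH = 0.5e-6      # s, narrowest NaI pulse mentioned

samples_per_pulse = SAMPLE_RATE * PULSE_WIDTH
print(samples_per_pulse)  # ≈ 2 samples per pulse — barely enough for peak detection
```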

 

At present, peak detection works on only 6 channels... When I connect more than 6 "peak detector.vi" instances to the diagram, I see the error "...out of memory...".

 

Please advise me on what should be done to make it all work fine.

Message 1 of 16
I forgot to mention that my PXI-5105 has 512 MB of onboard memory, the controller is a PXI-8106RT with 1.5 GB of memory, and it runs WinXP SP2 English.
Message 2 of 16

What you are running into is an out of memory error in LabVIEW.  You have enough onboard memory to capture 4M samples per channel on the digitizer.  The issue is with fetching and manipulating that data in your LabVIEW application.  You will want to step back and take a look at how you are handling your data to understand why that is happening.

 

1) 4M samples/ch × 2 bytes/sample = 8 MB per channel

2) Expanding to 8 channels creates 64 MB of data in the raw binary format

3) You are scaling your data by fetching in a 1D WDT format. This stores each sample as a 64-bit double, expanding the memory to 256 MB (in addition to timing information)

4) By splitting up the array of waveforms and branching the data, you can easily create copies of it, and if your consumer loop has not finished with the last data set, you may be acquiring a whole new one, creating yet another copy
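[Editor's note: the memory arithmetic above, checked in a short Python sketch (not part of the thread; decimal megabytes, matching the thread's rounding):]

```python
# Memory footprint of one fetch cycle on the 8-channel PXI-5105.
RECORD_LENGTH = 4_000_000   # samples per channel
CHANNELS = 8

raw = RECORD_LENGTH * 2 * CHANNELS      # 2 bytes/sample (I16 binary)
scaled = RECORD_LENGTH * 8 * CHANNELS   # 8 bytes/sample (64-bit double, WDT)

print(raw // 10**6, "MB raw,", scaled // 10**6, "MB scaled")
# → 64 MB raw, 256 MB scaled — before any branching makes extra copies
```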

 

So you can see that while you have 1.5 GB of controller memory, large arrays of data can easily eat it up. There are several things you can try to make your application more efficient: work with an unscaled binary data format; wire the array of waveforms directly to the peak-detect VI (instead of creating 8 copies, you will have a single copy with arrays of outputs); or revisit the record size you have chosen (experimenting with your threshold and width settings might help you get the results you want with smaller record lengths).

 

-Jennifer O.

Message 3 of 16

Thank you, Jennifer, for your reply. I have now replaced WDT with the scaled double format, and it's better!

 

But how can I connect the whole 2D double array from "Fetch" to a single peak detector.vi? Through a For Loop?

 

So far I have not been able to do it.
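[Editor's note: yes, a For Loop with auto-indexing is the usual pattern — each iteration gets one row (one channel) of the 2D array. Since the diagram can't be shown here, a hypothetical Python sketch of the same idea, with a toy threshold-based stand-in for peak detector.vi:]

```python
# Toy stand-in for peak detector.vi: indices of local maxima above threshold.
def detect_peaks(samples, threshold):
    peaks = []
    for i in range(1, len(samples) - 1):
        if samples[i] > threshold and samples[i - 1] < samples[i] >= samples[i + 1]:
            peaks.append(i)
    return peaks

# Analog of wiring the 2D fetch output into an auto-indexed For Loop:
# one peak-detect call per channel row, one result array out.
def detect_peaks_all_channels(data_2d, threshold):
    return [detect_peaks(channel, threshold) for channel in data_2d]

# Example: two channels, one pulse each.
data = [
    [0, 0, 5, 0, 0, 0],
    [0, 0, 0, 7, 0, 0],
]
print(detect_peaks_all_channels(data, threshold=1))  # [[2], [3]]
```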

Message 4 of 16

I've added a producer-consumer loop to my task and changed to scaled doubles, so now 6 channels are working.

 

But the CPU load is 94%... :(

 

Would it be good, and help my task, if I added the maximum memory to my 8106, i.e. went from 1.5 GB to 4 GB, for example?

 

 

Message Edited by Current 93 on 10-30-2009 07:48 PM
Message 5 of 16

While additional memory is generally helpful, I don't think it will solve your challenge. You are not only pushing your memory to the limit by operating on large data sets, but also taxing your CPU with the calculation.

 

I recommend doing some benchmarks to see what sort of processing you can handle with your current system.  You can temporarily remove the digitizer from the equation by using simulated data.  This isn't required, but might make it easier.

 

Try different waveform lengths and measure the time the peak detect takes to complete, to determine whether real-time analysis of 8 simultaneous waveforms will be possible. If you can't complete the analysis in less time than it takes to acquire another waveform, you will quickly fill up your memory as you continue to fetch and queue up new data. The LabVIEW forum would be a good place for advice on optimizing your performance. You can use the "Tick Count (ms)" VI to determine the execution time of the peak detection. The detailed help for this function links to an example "Timing Template" that shows how to time a piece of code, if you have not done this before.
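[Editor's note: the same benchmarking idea, sketched in Python with `time.perf_counter` rather than the Tick Count (ms) VI; `analyze` is a made-up stand-in for the peak-detection step. The point is to compare analysis time against the 1 s acquisition budget:]

```python
import time

RECORD_LENGTH = 4_000_000          # samples per channel
SAMPLE_RATE = 4_000_000            # S/s
acquisition_time = RECORD_LENGTH / SAMPLE_RATE   # 1.0 s per record

def analyze(data):
    # Stand-in for the real peak-detection workload.
    return max(data)

data = list(range(100_000))        # smaller test waveform for the benchmark

start = time.perf_counter()
analyze(data)
elapsed = time.perf_counter() - start

# Real-time processing is only sustainable if analyzing all channels
# finishes before the next record has been acquired.
print(f"analysis: {elapsed:.4f} s, budget: {acquisition_time:.1f} s")
```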

 

Jennifer O.

Message 6 of 16

Thank you very much, Jennifer. You are a real friend. :)

I'm working on your suggestions now.

 

Message 7 of 16

Do you need the full resolution? Might 8 bits fit your peak-detection needs?

 

Another idea:

I've never gone that deep into the driver, but maybe you can let the trigger do the work. Set up a task for each channel with its own analog trigger, a 2 µs record length, and a few pretrigger points, so each trigger writes its own record. I just don't know if the hardware and software would be powerful enough 😉

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 8 of 16

You should read the tutorial, Managing Large Data Sets in LabVIEW.  It will give you a number of tips and tricks for using large data sets.  In particular, you should probably fetch your data in pieces from the digitizer.  It is actually faster this way.  Use a chunk size of about 300ksamples to about 1Msample (benchmark it!).  You can probably process in chunks, as well, making memory not an issue.
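[Editor's note: a hypothetical Python sketch of the chunked-fetch pattern; `fetch_chunk` is a made-up stand-in for a relative fetch from the digitizer's onboard memory, not the actual NI-SCOPE API. Only one chunk is alive in memory at a time:]

```python
RECORD_LENGTH = 4_000_000
CHUNK_SIZE = 500_000        # benchmark values from ~300k to ~1M

def fetch_chunk(offset, size):
    # Stand-in for fetching `size` samples starting at `offset`
    # from the scope's onboard memory.
    return [0.0] * size

def process(chunk):
    # Stand-in for peak detection on one chunk; returns samples handled.
    return len(chunk)

total = 0
for offset in range(0, RECORD_LENGTH, CHUNK_SIZE):
    size = min(CHUNK_SIZE, RECORD_LENGTH - offset)
    total += process(fetch_chunk(offset, size))

print(total)  # 4000000 — the full record processed, one chunk at a time
```

In practice you would overlap adjacent chunks by at least one pulse width, so a pulse straddling a chunk boundary is not missed by the peak detector.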

 

The tutorial is a bit out of date.  You may want to look at the In Place Element structure to help keep your memory usage down, as well.

 

Note that the NI-SCOPE Soft Front Panel uses the chunking technique for larger data sets, allowing it to handle enormous data sizes by using the scope as a buffer.

Message 9 of 16

Henrik

That seems like a good idea to me!

I'm checking it now.

 

I think my record length can be reduced by adding a pretrigger and a window trigger at the same time, which could solve several problems at once, for example the two thresholds for my peak detection.

Those two levels would be set up in the window trigger, and that's it.

Message Edited by Current 93 on 11-03-2009 07:55 PM
Message 10 of 16