Counter/Timer


Photon Counting, Time Stamping, and basic help with how DAQmx works

The first and most helpful thing, as gus suggested, is to post your code so folks here can have a look or test it out themselves.
 
Are you really sure that your buffer size is way bigger than the # of available samples?  I gathered from your posts that:
- you always have [at least] 100's of samples available
- you claim that the buffer is 5 orders of magnitude (100000 times) bigger than that --> a 10+ Mega-sample buffer?  Are you
sure it's that big?
 
Are you building an array in your main loop with the samples returned by DAQmx Read?  Array building in a loop is a very
common cause of apps that bog down after running a little while, due to all the memory management that must happen as
the array gets bigger and bigger (and bigger...) 
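To illustrate the point in a text language (Python standing in for the LabVIEW diagram; everything here is made up for illustration): growing an array each iteration re-copies all previous data every time, while writing into a buffer that was sized once up front does not.

```python
import numpy as np

def grow_in_loop(chunks):
    """Append each new chunk to an ever-growing array (the slow pattern).

    Every concatenate allocates a new, larger buffer and copies all
    previous data, so total work grows quadratically with run time.
    """
    data = np.empty(0, dtype=np.uint32)
    for chunk in chunks:
        data = np.concatenate((data, chunk))  # reallocates every time
    return data

def preallocate(chunks, total):
    """Write chunks into a buffer sized once up front (the fast pattern)."""
    data = np.empty(total, dtype=np.uint32)
    i = 0
    for chunk in chunks:
        data[i:i + len(chunk)] = chunk
        i += len(chunk)
    return data
```

Both return the same data; only the memory-management cost differs, and that cost is what makes a loop bog down after running a while.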
 
Did you remove the queue & parallel loop from the original example I posted?  Those were intended to help guard the Data Acq
loop from getting bogged down by whatever processing is done on the data.  They aren't strictly necessary in every app, but
they often prove to be helpful.
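In text-language terms, the producer/consumer idea behind that queue and parallel loop can be sketched roughly like this (Python, purely illustrative; all names and data are made up):

```python
import queue
import threading

def acquire(q, n_reads):
    """Producer loop: stands in for the DAQmx Read loop."""
    for i in range(n_reads):
        q.put([i, i + 1])      # pretend this is a block of samples
    q.put(None)                # sentinel: acquisition finished

def process(q, out):
    """Consumer loop: stands in for the processing/file-writing loop."""
    while True:
        block = q.get()
        if block is None:
            break
        out.extend(block)

q = queue.Queue()
results = []
t1 = threading.Thread(target=acquire, args=(q, 3))
t2 = threading.Thread(target=process, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
# results now holds every block, in order; the acquisition side never
# had to wait on the processing side.
```

The point of the structure is that the acquisition loop only ever enqueues data and goes right back to reading, so slow processing can't starve the read.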
 
-Kevin P.
ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
0 Kudos
Message 21 of 51
(4,540 Views)

Yeah it is. I'm guessing there's a coding error somewhere, but I can't find it

I believe the error number is 200141 (it was that for a different version of the software) but I'll check again this afternoon

 

Thanks for all the help. I've attached the file

 

Nice Icon, by the way. There's no such thing as mysterious

0 Kudos
Message 22 of 51
(4,566 Views)
At first glance at your code, it may be that the tight while loops (no delays) are eating the processor and keeping you from reading often enough.  You may be able to get your app working with something as simple as a "Wait Until Next ms Multiple" VI added to each of your two loops, so both loops give up the processor to each other relatively often. 
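Roughly, "Wait Until Next ms Multiple" behaves like this sketch (Python, purely illustrative, not NI's implementation): sleep until the clock reaches the next multiple of the period, which both throttles the loop and tends to align iterations to a regular cadence.

```python
import time

def wait_until_next_ms_multiple(period_ms):
    """Sleep until the monotonic clock reaches the next multiple of
    period_ms, mimicking the behavior of LabVIEW's "Wait Until Next
    ms Multiple" primitive (illustrative only)."""
    now_ms = time.monotonic() * 1000.0
    remainder = now_ms % period_ms
    time.sleep((period_ms - remainder) / 1000.0)
```

Dropping a call like this into each loop body is what gives the other loop a chance to run.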
 
Another possibility is that your signal may be noisy.  In that case you might want to look into using digital filters.
 
Also, since your application is so simple, you may be able to simplify your VI down to a single while loop where you read the data and write it to file at once rather than using the queue-ing scheme.
 
Let me know if you continue to have problems.
gus....
0 Kudos
Message 23 of 51
(4,554 Views)
I will definitely look at the "Wait Until Next ms Multiple" suggestion. I did try eliminating the queue, though, and putting it all in one loop seemed to slow it down a lot. I'd guess that's because of the file writing?


I'll let you know how it goes... thanks again!

Tim
0 Kudos
Message 24 of 51
(4,554 Views)

The slowdown is likely due to the file writing.  I don't have much experience with writing data to file, so you may be better off keeping the separate loops.  Let me know how the "Wait Until Next ms Multiple" goes.

Good luck!

gus....

0 Kudos
Message 25 of 51
(4,546 Views)
I took a quick look over your code too and briefly compared it to the example I had posted earlier in the thread.  The main difference that stood out to me was just like gus said, your loops no longer have any delays.  The original example used event structures with timeout cases to supply those delays.  When you removed the event structures, you also lost the delays.  The constants are still sitting on the diagram though, so you could just wire them into either of the "Wait" functions from the Time & Dialog palette.
 
It may seem counter-intuitive that adding delays will make a program run more reliably at high speed, but it's often true.  More info can be found with some searching through the forums -- for example, see reply #7 by altenbach here.
 
Good luck!
 
-Kevin P.
0 Kudos
Message 26 of 51
(4,518 Views)
Thanks a lot. I'll start looking through those threads

I tried gus's suggestion about the "Wait Until Next ms Multiple" VI. The program seemed to run OK when we had the 6602 hooked up to a function generator, even up to a 1 MHz square wave signal (although eventually we ran out of memory from that!)

The trouble came in when we hooked the counter up to an avalanche photodiode. The only big difference we could see was that the signals from it were random, with the possibility of a large number of pulses in a short time. The difference between the two situations is striking when we measure the counts per 50 ms (using a different VI). With the function generator, it can handle 50,000-60,000 counts/50 ms, at least for a while. When it finally does crash, the computer first looks like it's out of resources, and finally the hourglass comes up and we get "Out of Memory". The Available Samples property node starts returning a number that gets bigger and bigger, until at last it's bigger than the buffer (66,000,000).

When we're using the APD, a signal around 30 counts/50 ms crashes things, and the error message is quite different. The computer just stops refreshing the timestamps, while the program appears to keep running. When you hit stop, you get error -200141, "Data was overwritten before it could be read". It then gives advice about data transfer mechanisms.

I'm assuming that with the function generator, the problem is that the queue gets too big and eventually consumes computer resources that aren't there. But with the APD, it really seems like a buffer problem. Given that the property node is telling us there are a few tens to a few hundred samples available, and we have the buffer set so high, this is confusing us.

I appreciate all the help you guys have given... I'll post if we figure anything else out!

Tim
0 Kudos
Message 27 of 51
(4,516 Views)
Hello everyone
 
We've learned from a recent paper (Magatti and Ferri, Review of Scientific Instruments 74 (2003) 1135) that the 6602 has a really small FIFO buffer on the card that can fill up quite quickly if the DMA transfer doesn't happen fast enough. That sounds like a very likely explanation for what is happening here, considering that a random signal with sporadic bursts of photons causes much more trouble than a constant, high-frequency signal.
 
Does anyone have an opinion about whether that's the case? If so, is a new computer our only option? We're currently using an 867 MHz PIII, and we seem to be able to work with a signal up to 2 MHz. The most our APD can put out is 5 MHz, so we're just looking at roughly doubling our ability to process data before it is overwritten.
 
 
Thanks
 
Tim
0 Kudos
Message 28 of 51
(4,497 Views)
Tim,

This is likely the cause of the problem, though upgrading your system will provide only limited benefit. The real bottleneck here is your PCI bus, which has not gotten any faster. The speeds that you are experiencing are at the upper end of what most people see, even with faster computers. Also, since this is a DMA operation, it really does not rely on your processor speed. Do you have any other cards on the PCI bus? With AMD's 64 bit processors, I've seen increases in I/O bandwidth due to significant architecture changes, but these were for single-point type operations. I can't say whether or not you would experience a speedup with such an upgrade.

Can you tell us a little more about your application from a high level? Perhaps we could recommend another type of operation that would achieve the same results. Maybe some sort of binning would produce the desired effect? Let us know!

Regards,
Ryan Verret
Product Marketing Engineer
Signal Generators
National Instruments
0 Kudos
Message 29 of 51
(4,477 Views)

Ryan made a suggestion about binning.  There have been quite a few photon-detector threads here over the last year or two, and binning seems to be the most typical approach.

The code you posted is different from that.  It's designed to perform precise time-stamping of every single incoming photodetector pulse: the count value is incremented at 80 MHz, while each photon pulse causes the instantaneous count value to be buffered.  For high pulse rates or long durations, your system may have trouble keeping up due to the board's small FIFO and the shared PCI bus.  I haven't done a lot of testing recently, but I'd anticipate trouble as pulse rates get up toward >100 kHz for >1 sec, or >500 kHz for any amount of time.  Note that the code you posted uses the photon pulses as a sampling clock, producing an unpredictably variable sampling rate.
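As a hypothetical sketch in Python (the latched count values below are made up): converting the buffered counts back into arrival times just means dividing by the 80 MHz timebase frequency.

```python
# Each buffered value is the 80 MHz timebase count latched by a photon
# pulse; dividing by the timebase frequency recovers arrival times.
TIMEBASE_HZ = 80e6

counts = [800, 2400, 81200]                       # hypothetical latched counts
timestamps = [c / TIMEBASE_HZ for c in counts]    # arrival times, seconds
# inter-arrival times between successive photons
deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
```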

Most of the people who've wanted to measure photon rates have taken a different approach -- binning.  In this approach, the count value is incremented by each photon pulse, but the buffering is throttled down to a constant sampling rate the system can sustain -- say 10 kHz, for example.  The simplest measurement is just to let the count value accumulate throughout the measurement time.  You can then subtract adjacent elements to determine the # of pulses per 0.1 msec bin, giving you an average pulse rate during each 0.1 msec interval.
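A minimal sketch of that subtract-adjacent-elements arithmetic (Python, with made-up cumulative readings and a hypothetical 10 kHz sample rate):

```python
import numpy as np

# Cumulative photon counts sampled at a fixed 10 kHz rate (0.1 ms bins).
SAMPLE_RATE = 10_000                          # Hz, hypothetical
cumulative = np.array([0, 3, 7, 7, 12])       # hypothetical readings

counts_per_bin = np.diff(cumulative)          # pulses in each 0.1 ms bin
rate_per_bin = counts_per_bin * SAMPLE_RATE   # average pulse rate (Hz) per bin
```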

Odds are that even when you timestamp individual photon pulses, the rate will be quite variable and would benefit from averaging anyway.  May as well let the hw do it.

If you go with binning, the other two things to watch for (some info can be found on the forums here) are "Duplicate Count Prevention" and counter rollover (aka terminal count or TC).
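On the rollover point, here's a hedged sketch of one way to unwrap terminal-count wraparound in software (Python; the helper is hypothetical, and it assumes a 32-bit counter with at most one wrap between successive samples):

```python
import numpy as np

def unwrap_rollover(counts, counter_bits=32):
    """Undo terminal-count rollover in a cumulative count record.

    Whenever a reading is smaller than its predecessor, assume the
    counter wrapped exactly once, and add 2**counter_bits to that
    reading and everything after it.
    """
    counts = np.asarray(counts, dtype=np.int64)
    # one accumulated wrap for every downward step in the record
    wraps = np.cumsum(np.diff(counts, prepend=counts[0]) < 0)
    return counts + wraps * (1 << counter_bits)
```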

-Kevin P.

0 Kudos
Message 30 of 51
(4,462 Views)