Counter/Timer

photon counting, buffering, DMA, and Error -200141

I think I need some help with data transfer. If that's not the problem, then I need some help with problem analysis, because I'm not sure what's going on.
I've got two counters of a PCI-6602 board hooked up to two photon counters and running a buffered counting application, so with each incoming pulse it buffers the count of the system clock, effectively timestamping the signal. Most of the VI we built (attached) was pieced together from the following thread:
The photon counters spit out a pulse of at least 2.5 V, 30 nanoseconds long, each time a photon strikes them (at random times). The plan is to have coincident photon pairs coming into the detectors and use the timestamp data to build a histogram of the time delay between photon arrivals. The photon arrivals at each detector are randomly spaced, but we are seeing about 5,000-40,000 counts per second on each detector.
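For readers following along, the time-delay histogram the poster describes can be sketched outside LabVIEW. This is a hypothetical Python illustration (not the attached VI): it assumes each detector yields a list of timestamps in ticks of the board's 80 MHz timebase (12.5 ns per tick) and bins the delay from each detector-A photon to its nearest detector-B photon.

```python
# Illustrative sketch only -- the poster's actual code is a LabVIEW VI.
import bisect

TICK_NS = 12.5  # 80 MHz timebase -> 12.5 ns per tick

def delay_histogram(ts_a, ts_b, bin_ns=5.0, max_ns=100.0):
    """Histogram of |t_a - nearest t_b| in nanoseconds.

    ts_a, ts_b: timestamps in timebase ticks (unsorted is fine).
    Returns a list of bin counts covering [0, max_ns) in bin_ns steps.
    """
    ts_b = sorted(ts_b)
    n_bins = int(max_ns / bin_ns)
    hist = [0] * n_bins
    for t in ts_a:
        i = bisect.bisect_left(ts_b, t)
        # the nearest neighbour is at index i or i - 1
        candidates = [ts_b[j] for j in (i - 1, i) if 0 <= j < len(ts_b)]
        if not candidates:
            continue
        delay_ns = min(abs(t - c) for c in candidates) * TICK_NS
        b = int(delay_ns / bin_ns)
        if b < n_bins:
            hist[b] += 1
    return hist
```

The nearest-neighbour pairing is one of several possible definitions of "delay between arrivals"; a real coincidence analysis might instead histogram all pairs within a window.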
When we hook it up, however, it runs fine for a few seconds and then chokes, giving us the following message:

Error -200141 occurred at DAQmx Read (Counter 1D U32 1Chan NSamp).vi:1
Possible reason(s):
Data was overwritten before it could be read by the system.
If Data Transfer Mechanism is Interrupts, try using DMA. Otherwise, divide the input signal before taking the measurement.
Task Name: _unnamedTask<151>

The two counters do not stop at the same time; in fact, there seems to be no way to predict how long the task will run before giving the error. We have it set to use DMA, so could it just be a buffer size issue? If it is a data transfer issue, is there a way to fix it without discarding any samples? This seemed similar to the issue in this forum thread, but no responses were posted:
When we hook it up to a 100kHz signal generator for testing, it works just fine and will count for a long time without any problem.
Any advice would be greatly appreciated.
 
LJ

Hello LJ.

Thank you for posting to the NI Discussion Forums.

The error you are receiving indicates that the software circular buffer for the counter task was full, so previous samples were overwritten, causing a loss of data. The buffer size is directly dependent on the mode of operation as well as the acquisition rate. The actual buffer sizes can be found on page 2-8 of the NI 660x User Manual, which is available here:

https://www.ni.com/docs/en-US/bundle/ni-660x-feature/resource/372119c.pdf

So, part of the troubleshooting process is to find out what rate you are sampling at and whether you are doing finite or continuous sampling on the counter task. This will help us determine what size buffer you currently have and thus whether it is likely that we are overflowing it.

To prevent the overflow from happening so soon, we can increase the size of the buffer.  This can be done using the DAQmx Buffer Property node which is available on the DAQmx functions palette>>DAQmx Advanced Task Options.  See if increasing this value improves the performance of your application. 
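As a rough sanity check on whether buffer size alone can help, here is a back-of-envelope model (pure arithmetic, with assumed example numbers, not values from the poster's setup): the circular buffer overflows when counts arrive faster than the read loop drains them.

```python
def seconds_until_overflow(buffer_samples, count_rate_hz,
                           reads_per_sec, samples_per_read):
    """Rough time until a circular buffer overflows.

    Net fill rate = producer rate minus consumer (read-loop) rate.
    Returns infinity when the reads keep up, meaning a software
    overflow would not occur from average rates alone.
    """
    net_fill = count_rate_hz - reads_per_sec * samples_per_read
    if net_fill <= 0:
        return float("inf")
    return buffer_samples / net_fill

# e.g. a 10,000-sample buffer at 40,000 counts/s, with the loop reading
# 1,000 samples 20 times a second, overflows in half a second:
# seconds_until_overflow(10_000, 40_000, 20, 1_000) -> 0.5
```

Note this only models the software buffer; as discussed later in the thread, the 6602's one-sample hardware FIFO is a separate bottleneck that a bigger software buffer cannot fix.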

Let us know your findings and we will be happy to help you further. 

Have a great day!

Brian F
Applications Engineer
National Instruments

 


Thanks for the help Brian,

We are using Continuous Sampling, and I've been controlling the buffer size using the Samples per Channel input on DAQmx Timing (Sample Clock).vi. Will this do the same thing as setting it with the buffer property node?

Because the photons arrive randomly, it is difficult to state an actual arrival rate. We are seeing about 5k-40k counts per second, but they are randomly spaced. Could a possible cause of this error simply be that two photons happen to arrive too close together?

I noticed in the document you linked that finite operation seems to allow much higher sampling rates. We have been gating our data acquisition by triggering the start of collection on the rising edge of a pulse, running in continuous sampling for a little longer than our gate period, and then truncating any samples whose timestamps fall outside the gate period. Would finding a similar way to accomplish this with finite operation possibly eliminate our error, even though we don't know exactly how many samples will arrive in our gate window?

Attached is a copy of our code.

Thanks again for the help

LJ


One other thing about our signal that I failed to mention at first: the TTL pulses the photon detectors produce when a photon arrives are 30 ns long (2.5-5 V), and there is 50 ns of dead time after each pulse. This means the detectors can't send pulses spaced less than about 80 ns apart, so the fastest signal the photon detectors could possibly generate, assuming photons were arriving at that rate, would be about 12.5 MHz. Once again, the photon arrivals are random, but it is possible that we could occasionally get a pair this close together, even though our average count rate is much less than 12.5 MS/s.
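The worst-case rate quoted above follows directly from the pulse width and dead time; a quick check of the arithmetic (values taken from the post):

```python
PULSE_NS = 30.0   # detector pulse width
DEAD_NS = 50.0    # dead time after each pulse

MIN_SPACING_NS = PULSE_NS + DEAD_NS   # 80 ns minimum edge-to-edge spacing
MAX_RATE_HZ = 1e9 / MIN_SPACING_NS    # fastest possible pulse train
# MAX_RATE_HZ == 12.5e6, i.e. 12.5 MHz, matching the figure in the post
```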

LJ


Hello. 

This error results from the fact that the counter FIFO is only one sample deep, so error -200141 is thrown whenever two consecutive samples violate the spec. For example, a 10,000-sample buffer should handle 1.6 MS/s according to the specification, which means two consecutive signal edges must be at least 625 ns apart. So if you receive two photons within 625 ns, the one-sample counter FIFO overflows and data is lost. The best solution for this application is a card with a larger counter FIFO. The 6210 is probably the best card for this, as it features a 1,023-sample FIFO instead of a 1-sample FIFO.
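The 625 ns figure checks out against the quoted spec rate (numbers from the post above):

```python
SPEC_RATE_HZ = 1.6e6                  # quoted sustainable rate for a 10,000-sample buffer
MIN_INTERVAL_NS = 1e9 / SPEC_RATE_HZ  # minimum spacing between consecutive edges
# MIN_INTERVAL_NS == 625.0: two photons arriving closer together than this
# overflow the one-sample counter FIFO before software can drain it
```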

Let me know if I am explaining this clearly.  Have a great day!

Brian F
Applications Engineer
National Instruments


Isn't the 1023-sample FIFO for analog tasks?  I thought that all the M-series counter boards were limited to a 2-sample FIFO, just like the 6602...

Previous threads on similar apps seem to confirm that finite acquisition can be a better choice due to less DMA-related overhead.  I can't seem to find the exact thread right now, but search on terms like counter, finite, fifo, dma, 200141, etc.  One of the guys from NI had a real good explanation about the nature of why finite acq can sustain higher sampling rates than continuous acq.  I don't remember who it was just now, but two likely candidates that come to mind are reddog and gus.  I also think that the thread was sometime in the last several months.

- Kevin P.

 

Hello.

You are correct in asserting that normal M Series boards have a fairly small FIFO. However, the 6210 is a member of our Bus-Powered M Series devices, which have a much larger counter FIFO.

Brian F
Applications Engineer
National Instruments

 


Thanks guys,

The information you've given has been very helpful. Is there a way to tell the 6602 board to disregard the error message? I can live with losing one of two closely spaced signals, but the bigger problem has been that whenever the error gets tripped, the array I'm building (the code above basically reads the buffered data in a while loop and appends it onto a growing array each iteration) gets a whole bunch of zeros appended onto the end. For example, the counter might be running just fine until, at about 4,000 counts, the error gets tripped. Suddenly my array is no longer 4,000 elements but 16,000 elements, and the last 12,000 are all zeros. Is this just a problem with my code, or is it inherent in the error itself? If I could tell the counter to keep running as normal and simply drop one of the two signals, that would solve my problem. At least it would be preferable to buying hardware to filter out closely spaced signals.

If not, would the M Series 6210 work well for this application? I chose the 6602 because it has 8 counters and an 80 MHz internal time base with which to timestamp. If the 6210 only has 2 counters, I suppose it would still work, but would I still be able to timestamp in the same manner?

Thanks again for the help, everyone

LJ

Hello. 
 
To handle this issue most efficiently, I would recommend placing the Build Array VIs inside a case structure that only executes if there has been no error on the DAQmx counter task.  That way, you can simply discard the "corrupted" data. 
 
The other option is to build the array anyway and then post-process the data, removing long strings of zeros using the Search 1D Array and Delete From Array VIs. 
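Outside LabVIEW, that post-processing step amounts to trimming the run of zeros from the end of the array. A minimal Python sketch of the same idea (assuming the padding is a contiguous trailing run, and noting that a legitimate zero earlier in the acquisition should be kept):

```python
def trim_trailing_zeros(samples):
    """Drop the zeros appended to the array after an overflow error.

    Only the trailing run is removed; zero-valued samples earlier in
    the acquisition are preserved.
    """
    end = len(samples)
    while end > 0 and samples[end - 1] == 0:
        end -= 1
    return samples[:end]
```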
 
Then, you can clear the error and resume your acquisition with another case structure that executes when the error code matches the one you commonly see when two photons arrive too close together.  Inside that case structure, use Clear Errors.vi to clear the error from the error wire and continue your acquisition. 
 
All of this will help you work around the core issue, but if you desire a full solution, I would still recommend the USB-6210 bus powered M series card with the 1023 sample counter FIFO. 
 
Let me know if this answers your questions. 
 
Brian F
Applications Engineer
National Instruments

Message Edited by Brian F. on 05-15-2007 03:53 PM

Thanks for the help,
 
I still seem to have a problem when trying to eliminate the corrupt data with case structures. First, the large array of zeros seems to be read into the array before the error is registered (the Read.vi doesn't return an error until the subsequent iteration). I think this eliminates any hope of keeping that data out of the array with a case structure. While this could still be overcome easily enough with post-processing, is there another way to prevent it from being written into the array in the first place? I'm hoping for some way to tell the task, before starting it, to ignore the problem.
 
The second problem is that after clearing the error, the DAQmx Read.vi spits out the same error again on the next iteration of the while loop, with no data, and continues to do so on all subsequent iterations. Is there a way to get the task to continue as normal, short of clearing the task and starting it over (which would cost a lot of time)? Is the error essentially in the hardware, such that it can't be cleared without reinitializing the board (i.e., clearing and restarting the task)?
 
Thanks again for all the help.
 
LJ