Counter/Timer

Onboard device memory overflow

I have a USB-6343 X Series DAQ device with BNC termination that I am using to collect counts from an avalanche photodiode connected to the ctr1 channel in a counter task. I am clocking the task with a pulse streamer connected to the PFI0 channel. The task is triggered automatically and reads samples into a buffer of pre-allocated size at every rising edge detected from the external clock (which is connected to PFI0).

 

The pulse streamer produces 10 ns pulses in the following sequence:

100 ns off - 10 ns pulse - 390 ns off - 10 ns pulse - 4 us off - 10 ns pulse - 390 ns off - 10 ns pulse - 1 us off

 

This means that the shortest time between two ticks of the external clock is 400 ns, corresponding to 2.5 MHz, which is the value I set for the DAQ sampling rate. The sequence runs n times (typically ~10,000), so the total number of ticks is 4*n, which is the size of my counter buffer.
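As a sanity check, the rising-edge spacing and the implied maximum external-clock rate can be computed directly from the segment list above (the helper name is mine, not part of my actual acquisition code):

```python
# Segment durations in ns as (off, pulse) pairs, taken from the sequence above.
# A rising edge occurs at the start of each 10 ns pulse.
segments = [(100, 10), (390, 10), (4000, 10), (390, 10)]
trailing_off = 1000  # final 1 us off before the sequence repeats

def rising_edge_times(segments, trailing_off, n_cycles=2):
    """Absolute times (ns) of every rising edge over n_cycles repeats."""
    period = sum(off + on for off, on in segments) + trailing_off
    edges = []
    for cycle in range(n_cycles):
        t = cycle * period
        for off, on in segments:
            t += off          # wait out the off segment
            edges.append(t)   # rising edge at the start of the pulse
            t += on           # wait out the pulse itself
    return edges, period

edges, period = rising_edge_times(segments, trailing_off)
intervals = [b - a for a, b in zip(edges, edges[1:])]
min_interval_ns = min(intervals)       # 400 ns
max_clock_hz = 1e9 / min_interval_ns   # 2.5 MHz
```

The shortest gap is indeed the 390 ns off + 10 ns pulse between the paired edges, so 2.5 MHz is the correct sample-clock rate to declare.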

For a reason I don't understand, I get the following error at random times (everything works most of the time, but every now and then when I run the code it produces this error):

 

nidaqmx.errors.DaqError: Onboard device memory overflow. Because of system and/or bus-bandwidth limitations, the driver could not read data from the device fast enough to keep up with the device throughput.
Reduce your sample rate. If your data transfer method is interrupts, try using DMA or USB Bulk. You can also use a product with more onboard memory or reduce the number of programs your computer is executing concurrently.

Status Code: -200361

I tried reducing the sampling rate, running the sequence fewer times, and closing all other programs on my computer, but the error still comes up pretty randomly. It happens less often when I run the sequence fewer times, but it still happens with no apparent pattern.

I would really appreciate your help in this matter.

It is important to note that I am using the Python API (nidaqmx) with the following code.

For timing:

    timing.cfg_samp_clk_timing(
        sampling_rate,                # 2.5 MHz
        source=self.clk_channel,      # PFI0
        sample_mode=AcquisitionType.CONTINUOUS)

For reading:

    read_many_sample_uint32(
        self.ni_ctr_sample_buffer,
        number_of_samples_per_channel=self.buffer_size)

Message 1 of 7

To my eye, it looks like a mistake to set up a continuous task and then try to read the entire buffer's worth of samples at once. While your app is trying to retrieve data out of your task buffer, the board and driver are still in continuous mode, trying to deliver more data into it. If the driver overwrites data you haven't read yet, that would normally lead to a different error, commonly called a "buffer overflow". But for sure you're subjecting your system to an unnecessary *stress condition*.

 

Two things to try:

1. Configure for Finite Sampling instead.  Then there won't be any of the contention I hypothesized about.

2. If you stick with Continuous Sampling, use 2 or more Read calls to retrieve your data, requesting half the total or less with each read.  This should at least *reduce* any such contention.
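A minimal sketch of how the chunking in #2 might look. The actual `read_many_sample_uint32` calls are omitted since they need hardware to run; the helper name is mine:

```python
def read_chunks(total_samples, max_chunk):
    """Yield per-call sample counts that sum to total_samples,
    each no larger than max_chunk."""
    remaining = total_samples
    while remaining > 0:
        n = min(max_chunk, remaining)
        yield n
        remaining -= n

# e.g. a 40,000-sample acquisition read in quarters instead of one call:
sizes = list(read_chunks(40_000, 10_000))
```

Each yielded size would become the `number_of_samples_per_channel` argument of one read call, with results appended into the preallocated buffer, so the driver gets a chance to drain the task buffer between reads.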

 

All that being said, your device should have a decent-sized hardware FIFO for counter tasks, and it's unusual to see the error you report associated with a PCIe X-series device.  Note how this help doc about the error is pretty much solely focused on USB devices where data transfer requires CPU.  PCIe devices use DMA and aren't nearly so susceptible to system busy-ness. 

   Errors like this used to be fairly common on older boards with very tiny FIFOs, especially in cases like period measurement in a noisy environment that could produce spurious digital glitches.  One strategy is to apply a digital filter to the input pin, but you won't be able to do that with 10 nanosec pulses.

 

BTW, 10 nanosec is an awfully short pulse.  I couldn't find a spec for min pulse width, but I would suspect you're pushing the limits for your DAQ device, if not already exceeding them.  Can you control the pulse streamer to make wider pulses like maybe 50-100 nanosec?  You're still only reacting to the rising edge so it shouldn't throw off your measurement.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 7

Thank you so much for your thorough reply.

I forgot to mention that I had already changed it to Finite Sampling with exactly the number of samples that need to be read (4 per sequence). I also increased the clock pulse duration up to 90 ns with no difference. What I did find to make a difference is changing the pulse spacing altogether, which is very weird to me, but it seems to work consistently. I would still like to understand and resolve this problem because some of my sequences can't be too long.

Here is how I was able to make it work: I added 2 microseconds to either the 5th or the 9th segment (one of the off periods), such that the sequence was one of these two:

100 ns off - 10 ns pulse - 390 ns off - 10 ns pulse - 6 us off - 10 ns pulse - 390 ns off - 10 ns pulse - 1 us off

or

100 ns off - 10 ns pulse - 390 ns off - 10 ns pulse - 4 us off - 10 ns pulse - 390 ns off - 10 ns pulse - 3 us off

This seems to work consistently, and I believe it would work when changing other segments as well, but the other ones are critical to other synchronized operations. Any idea why that would be the case? I am running each sequence a set number of times, n_runs, where my finite sample count and read buffer size are both 4*n_runs, and the sampling rate is set to 2.5 MHz, corresponding to the shortest interval between two rising edges.
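For reference, here is the cycle-timing arithmetic for the original and modified sequences (the helper names are mine). The shortest edge spacing stays 400 ns either way, but the extra 2 us lowers the *average* edge rate the host has to sustain per cycle; whether that is actually why the fix works is only my speculation:

```python
def cycle_period_ns(off_segments_ns, pulse_ns=10, n_pulses=4):
    """One full cycle: four 10 ns pulses plus all the off segments."""
    return n_pulses * pulse_ns + sum(off_segments_ns)

original  = cycle_period_ns([100, 390, 4000, 390, 1000])
variant_a = cycle_period_ns([100, 390, 6000, 390, 1000])  # 4 us -> 6 us
variant_b = cycle_period_ns([100, 390, 4000, 390, 3000])  # 1 us -> 3 us

# Average edge rate the host must keep up with (edges per second):
def avg_edge_rate_hz(period_ns):
    return 4 / (period_ns * 1e-9)
```

Each cycle is a 4-edge burst followed by idle time, so the lengthened cycle (5920 ns to 7920 ns) gives the driver roughly a third more idle time per burst to move data off the device.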

I varied n_runs between 10 and 100,000 and it still works, so I am a bit lost as to where the problem is.
It might be important to say that my pulse-streaming and read functions are in a loop that repeats while changing the last pulse from 5 ns to 1 us; in each iteration I start the task, stream n_runs of the sequence, read the data into a buffer, and stop the task.

Thanks again

Message 3 of 7

The "extra 2 microsec" fix seems very weird to me too.  And more than just weird, I really can't make any sense of it.

 


It might be important to say that my pulse-streaming and read functions are in a loop that repeats while changing the last pulse from 5 ns to 1 us; in each iteration I start the task, stream n_runs of the sequence, read the data into a buffer, and stop the task.

Wait, you mean *you're* generating this pulse sequence with your same 6343 device?  How are you getting 10 nanosec pulses?  As far as I knew, all pulse params must be a minimum of 2 timebase cycles, and the fastest timebase on X-series devices is 100 MHz.  So I'd think 20 nanosec would be the minimum that's actually achievable.

 

Please post the code related to generating this pulse sequence, including the loop you mentioned above.  I'll do my best to try to follow along with the text API.

 

 

-Kevin P

Message 4 of 7

Thanks again for all your help. 

I am using the Swabian Pulse Streamer 8/2, which can generate pulses as short as 1 ns. Here are the main commands I am using. I can post the full code, but it has a lot of non-relevant info that you don't need, so I will hold off unless you think it is necessary. My apologies for the code being in the form of an image; the system doesn't let me post .py files.

Let me know if this gives you any insights 🙂

Thanks again!
 

 


Message 5 of 7

I was only able to follow along to a limited degree -- the syntax for everything related to the "stream readers" is pretty foreign to my eye.

 

No particular insights I'm afraid.  The things I'm inclined to suggest next are little side-trails that won't necessarily help *solve* things, but they might (or might not) yield a little insight.

 

Right now you have 2 variable-rate pulse sources feeding into your counter task, one controlled and the other reactive.  Let's focus first on the one you can control.  I want to investigate the possibility of spurious digital "glitches" that demand more from your DAQ hardware than it can handle.

 

1. Try setting up a simpler counter task such as frequency or period measurement to characterize the pulses coming in from your Pulse Streamer.  First try it with nice "long" pulse widths such as 100 nanosec.  You should be able to reproduce a set of interval measurements that match what you're generating.  If you have success, see what happens as you shorten them.  If no success, perhaps you'll have a new clue.

 

2. If #1 fails, then we'll make use of the fact that the counters are better-equipped to *count* fast pulse signals than to *sample* as fast as they arrive.  So here you'd set up a precision-timed edge counting task, using the Pulse Streamer as the signal whose edges are being counted.  The precision timing should be done via a Pause Trigger config with a hardware-timed pulse width.  You can generate this with one of your other counters.  The pulse width should be some large integer # of periods of the overall Pulse Streamer pattern, maybe 1000 - 10000.

   You should be able to repeatedly measure a pretty exact expected count (give or take 1 or 2 due to slight discrepancies between the internal timebase clocks for your Pulse Streamer and your DAQ device).
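To make the expected numbers in step 2 concrete, a small sketch (the constants assume the 4-edge, 5920 ns pattern described earlier in the thread; the names are mine):

```python
EDGES_PER_PATTERN = 4      # rising edges per Pulse Streamer cycle
PATTERN_PERIOD_NS = 5920   # one full cycle of the original sequence

def gate_width_ns(n_periods):
    """Pause-trigger gate spanning an integer number of pattern periods."""
    return n_periods * PATTERN_PERIOD_NS

def expected_count_range(n_periods, tolerance=2):
    """Expected edge count inside the gate, with the give-or-take slop
    for slight timebase discrepancies between the two instruments."""
    n = EDGES_PER_PATTERN * n_periods
    return (n - tolerance, n + tolerance)
```

So a gate spanning 1000 pattern periods (about 5.92 ms) should repeatedly count very close to 4000 edges; a count far outside that range would point at glitches or missed edges.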

 

3. If those two things haven't led anywhere, let's get back to using the Pulse Streamer as a sample clock, but for an AI task this time.  Set up a single-channel AI task that uses the Pulse Streamer as its sample clock.  Again, use long pulse widths and keep the max freq within the specs for AI sampling.  What I would do is generate a sawtooth AO waveform whose period is the same as the overall period of the Pulse Streamer.  Physically wire it over to the AI channel you're measuring.  Then the pulse spacing you've defined should dictate which points along the sawtooth keep getting sampled (again, give or take a single-sample shift here or there due to slight timebase differences).  

   If you have triggering set up correctly, you could predict which points should get sampled ahead of time.  But even without a trigger, you should see *consistency* for which points get sampled from cycle to cycle.
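If the AO sawtooth period matches the pattern period, the consistency check in step 3 can be simulated ahead of time. This is an idealized model that ignores timebase drift; the edge offsets reuse the original sequence and the names are mine:

```python
def sawtooth(t_ns, period_ns):
    """Unit sawtooth: 0.0 at the start of each period, rising toward 1.0."""
    return (t_ns % period_ns) / period_ns

PATTERN_PERIOD = 5920                  # ns, one Pulse Streamer cycle
EDGE_OFFSETS = [100, 500, 4510, 4910]  # rising-edge times within a cycle

# Sawtooth values sampled at each clock edge over three cycles:
samples = [sawtooth(c * PATTERN_PERIOD + off, PATTERN_PERIOD)
           for c in range(3) for off in EDGE_OFFSETS]
```

In the ideal case the four sampled values repeat exactly from cycle to cycle; in the real measurement you'd look for that same consistency, give or take the single-sample shifts from slight timebase differences.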

 

There's probably less you can learn from substituting the photon detector signal for the Pulse Streamer signal in these experiments because you won't know the expected result.  But you might still get some info from specific error codes or their absence.

 

 

-Kevin P

Message 6 of 7
