LabVIEW

1 Msample DAQmx analog output buffer size limit

Solved!

Is there a way around the 1 Msample buffer size limit when generating analog signals from NI DAQmx boards (here specifically a USB-6343)?

 

From what I understand of the documentation, the analog output buffer is not stored entirely on the board; instead, LabVIEW streams data on demand from computer memory into a smaller buffer residing on the board. Under these circumstances, and as long as the aggregate throughput of the analog signals stays well below the USB 2 bandwidth, there should in principle be no hard limit on the buffer size.
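
In case the setup matters, here is roughly what my VI does, sketched with the nidaqmx Python API as a stand-in for the LabVIEW code ("Dev1", the rate, and the waveform are placeholders):

```python
# Rough sketch of the finite AO generation (nidaqmx Python package used as a
# stand-in for the LabVIEW VIs; "Dev1" and the waveform are placeholders).
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100_000           # Hz
N_SAMPLES = 4_000_000    # deliberately past the supposed 1 Msample limit

t = np.arange(N_SAMPLES) / RATE
data = 5.0 * np.sin(2 * np.pi * 1_000.0 * t)   # 1 kHz sine, +/-5 V

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(RATE,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=N_SAMPLES)
    task.write(data, auto_start=False)   # fills the host-side task buffer
    task.start()
    task.wait_until_done()               # equivalent of "DAQmx Wait Until Done"
```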

Message 1 of 12
Solution
Accepted by SebastienT

You've interpreted the docs correctly.  There isn't really a hard limit.  Have you tried it and had a problem?

 

Note that larger buffers lead to more lag time between when you write data to the task and when it shows up as an output signal.

 

There are ways to influence this with DAQmx properties like "Data Transfer Request Condition", and I think there are some special properties specifically for USB devices.  But it can get pretty tricky to combine high speed generation with low lag times.
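
For example, in the text APIs the request condition is a channel property.  A rough sketch with the nidaqmx Python package (the device name is a placeholder, and I haven't benchmarked this setting on your hardware):

```python
# Sketch: ask DAQmx to top up the onboard FIFO whenever it is less than full,
# trading more frequent USB transfers for lower output lag.
import nidaqmx
from nidaqmx.constants import OutputDataTransferCondition

with nidaqmx.Task() as task:
    ao = task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    # The "Data Transfer Request Condition" property mentioned above:
    ao.ao_data_xfer_req_cond = OutputDataTransferCondition.ON_BOARD_MEMORY_LESS_THAN_FULL
```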

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 2 of 12

Thanks for your answer! I double-checked everything, and the problem actually lay in the default timeout of "DAQmx Wait Until Done": it defaults to 10 s, and at a clock rate of 100 kHz that corresponds to exactly 1 Msample, so any longer buffer timed out before the generation finished.
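
For anyone hitting the same thing, the fix is just to scale the timeout with the buffer length; in the same Python shorthand as my sketch above (the 10 s margin is arbitrary):

```python
# Scale the "Wait Until Done" timeout with the buffer length instead of
# relying on the fixed 10 s default that bit me here.
def wait_for_finite_generation(task, n_samples, rate, margin_s=10.0):
    # n_samples / rate is the time the generation itself needs.
    task.wait_until_done(timeout=n_samples / rate + margin_s)
```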

 

Regarding lag time on buffer load, I would actually expect that for signal generation the latency should only increase up to the capacity of the onboard buffer and not past that point, since excess data should only be transferred on demand by the driver.

 

However, in practice I see a clear linear increase in loading time (at least over the range 1 to 16 Msamples). Could this loading time be related to the driver allocating computer memory and copying data? Or is the onboard buffer larger than 16 Msamples? I could not find this information in the user guide of the USB-6343...
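
For reference, the measurement amounts to timing only the write call, before the task is started; roughly this, in the same Python shorthand ("Dev1" is a placeholder):

```python
# Time only the host-side write (before task start) for growing buffer sizes;
# this is the "loading time" that grows linearly for me.
import time
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100_000
for n in (1_000_000, 2_000_000, 4_000_000, 8_000_000, 16_000_000):
    data = np.zeros(n)
    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        task.timing.cfg_samp_clk_timing(RATE,
                                        sample_mode=AcquisitionType.FINITE,
                                        samps_per_chan=n)
        start = time.perf_counter()
        task.write(data, auto_start=False)   # the timed region: write only
        print(f"{n:>10} samples: {time.perf_counter() - start:.3f} s")
```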

Message 3 of 12

Regarding lag time on buffer load, I would actually expect that for signal generation the latency should only increase up to the capacity of the onboard buffer and not past that point, since excess data should only be transferred on demand by the driver.

Well, no, not necessarily.  If you write too much data too quickly to your task, the device's hardware FIFO buffer will fill up and then the task buffer will *also* start to fill up because DAQmx can't deliver it to the board until the board has room for it.  So you can for sure build up a total latency that's (nearly) the sum of the FIFO size and your task buffer size.

 

And that probably explains the linear increase in latency you see.  Your device is spec'ed for only an 8k-sample AO FIFO, so the vast majority of your samples get backlogged in the task buffer.
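
As a rough worked number under that assumption (the ~8k FIFO plus your largest buffer, at your 100 kHz rate):

```python
# Worst-case queued latency if both buffers are full: (FIFO + task buffer) / rate.
fifo_samples = 8_192         # the ~8k AO FIFO mentioned above
buffer_samples = 16_000_000  # task buffer size from your test
rate = 100_000               # Hz
print((fifo_samples + buffer_samples) / rate)  # ~160 s of already-queued data
```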

 

 

-Kevin P

Message 4 of 12
But this process (progressively filling the FIFO as data are streamed) should only happen AFTER task start, not before. The linear time I measured is before task start, more precisely when calling the DAQmx Write VI, which copies the input array *somewhere* else (into the task buffer). Optimally, the driver would transfer at most ~8k samples to the board (a small, fixed transfer time) and only stream the rest to the FIFO on demand after task start. That said, are you sure the linear time observed does not simply reflect memory allocation and copying for the task buffer on the computer side?
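
If it helps to test this, one could commit the task before writing, so that any allocation or reservation happens up front, and then time the write alone.  An untested sketch in the same Python shorthand:

```python
# Sketch: commit the task before writing, so the timed region should contain
# only the host-side copy into the task buffer, not allocation or reservation.
import time
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, TaskMode

RATE = 100_000
N = 16_000_000
data = np.zeros(N)

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.timing.cfg_samp_clk_timing(RATE,
                                    sample_mode=AcquisitionType.FINITE,
                                    samps_per_chan=N)
    task.control(TaskMode.TASK_COMMIT)   # reserve/program before timing
    start = time.perf_counter()
    task.write(data, auto_start=False)
    print(f"write only: {time.perf_counter() - start:.3f} s")
```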
Message 5 of 12

Sorry, I auto-corrected you when I shouldn't have.  You did refer to "loading time".  Based on an awful lot of threads where terms are used loosely, carelessly, and wrongly, I did an auto-correct and assumed you were talking about latency from the time you write until the time the signal shows up as output.

 

Can you post the code in question and clearly identify where you start and stop your measurement of "loading time"?  Also, what are some of the times you measure for different-sized writes?  It's one thing to have a linear relationship, but I'm also wondering whether the slope is large or small (thus whether the total time is "significant" or not).

 

I also have a vague awareness that USB devices use a 3rd "USB transfer" buffer, but I haven't explored or memorized the implications in any detail.  Sorry, I can't seem to find any useful articles on the site just now.
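
For what it's worth, the USB-specific properties I had in mind look like this in the text APIs (a nidaqmx Python sketch; the values are placeholders, not recommendations):

```python
# Sketch: USB transfer request properties (counterparts of the DAQmx C
# attributes DAQmx_AO_UsbXferReqSize / DAQmx_AO_UsbXferReqCount).
import nidaqmx

with nidaqmx.Task() as task:
    ao = task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.ao_usb_xfer_req_size = 32_768   # bytes per USB transfer request (placeholder)
    ao.ao_usb_xfer_req_count = 4       # concurrent transfer requests (placeholder)
```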

 

 

-Kevin P

Message 6 of 12