Multifunction DAQ


Limitation of acquisition loop speed, NI 6361 over USB 2

Good morning

I am trying to acquire 30,000 points at 1 MHz on an NI 6361 acquisition device connected to my PC over USB 2. I am using C# with the DAQmx (.NET) library.

 

I create a task:

myTask2.AIChannels.CreateVoltageChannel(physicalChannelComboBox.Text, "", AITerminalConfiguration.Nrse, rangeMinimum, rangeMaximum, AIVoltageUnits.Volts);
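
The timing and trigger configuration is roughly as follows (simplified sketch; the "/Dev1/PFI0" terminal below is just a placeholder for the line I actually use):

// 1 MHz sample clock, finite acquisition of samplesPerChannel (30000) points
myTask2.Timing.ConfigureSampleClock("", 1000000.0, SampleClockActiveEdge.Rising, SampleQuantityMode.FiniteSamples, samplesPerChannel);

// Start on the external trigger
myTask2.Triggers.StartTrigger.ConfigureDigitalEdgeTrigger("/Dev1/PFI0", DigitalEdgeStartTriggerEdge.Rising);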

 

Then I start the acquisition:

reader = new AnalogMultiChannelReader(myTask2.Stream);

reader.SynchronizeCallbacks = true;

reader.BeginReadWaveform(samplesPerChannel, new AsyncCallback(myCallback2), null);

On the external trigger I receive the 30,000 points in a callback function, which works. At the end of this callback I start a new acquisition by calling the same code again:

reader = new AnalogMultiChannelReader(myTask2.Stream);

reader.SynchronizeCallbacks = true;

reader.BeginReadWaveform(samplesPerChannel, new AsyncCallback(myCallback2), null);

 

Then I receive a new 30,000-point array, and so on.

Everything works well except for the rate at which I can get these 30,000-point arrays. I can get one every 150 ms, which is far from what I need, that is, 30,000 points every 50 ms.

My trigger signal is at 50 kHz, so the trigger is not the limiting factor.

Does anybody have an idea of what could be the limiting factor?

USB 2 is normally 60 MB/s. I am calculating (8 bytes per value + 24 bytes per timestamp) × 30,000 / 0.15 s ≈ 6.4 MB/s.

 

thank you very much

Olivier

 

Message 1 of 5

I don't know the C# API for DAQmx, just the LabVIEW one, so I can only describe some general ideas rather than the specific lines of code.

 

It sounds like you're trying to do this as a sequence of finite acquisitions.  Run a 30k sample task to completion, return to the app software layer, create a new "Reader" and begin another 30k acquisition.  There's overhead involved when you clean up after one task and get a new one going.

 

A better plan would be to do continuous acquisition to avoid the overhead.   Read 50k samples at a time from the task.  Keep the first 30k and ignore the remaining 20k.  You'll keep getting a fresh set of 50k samples every 50 msec without the overhead of stopping and restarting.
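
I can't vouch for the exact C# syntax, but the gist would be something like this (a rough, untested sketch reusing the names from your snippet):

// Continuous acquisition at 1 MHz with a generous DAQmx buffer
myTask2.Timing.ConfigureSampleClock("", 1000000.0, SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 500000);

reader = new AnalogMultiChannelReader(myTask2.Stream);
reader.SynchronizeCallbacks = true;

// Ask for 50k samples per read; in the callback, keep the first 30k and discard the rest
reader.BeginReadWaveform(50000, new AsyncCallback(myCallback2), null);

// Inside myCallback2:
//   AnalogWaveform<double>[] data = reader.EndReadWaveform(ar);
//   ... keep samples 0..29999 of each channel, ignore the last 20000 ...
//   reader.BeginReadWaveform(50000, new AsyncCallback(myCallback2), null);   // queue the next read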

 

Note: it's also possible that your callback code is doing some time-consuming things that contribute to the overhead.  If you keep doing those things under this method, you'll eventually get a buffer overflow error for not keeping up with the continuous acquisition.  That will be the signal that you need to structure your code differently.

 

In LabVIEW such things are normally done by keeping the data acq read loop lean and efficient while deferring further processing to a parallel loop.  I don't know the C# idioms well, but the Producer/Consumer equivalent is presumably something like the sketch below.
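
Roughly, and with the caveat that I'm guessing at the C# (the queue type and the ProcessBlock helper below are only illustrative):

// Hand-off queue between the DAQmx callback (producer) and a processing thread (consumer)
var queue = new System.Collections.Concurrent.BlockingCollection<AnalogWaveform<double>[]>();

// Producer: the callback only retrieves the data, queues it, and re-arms the next read
void myCallback2(IAsyncResult ar)
{
    AnalogWaveform<double>[] data = reader.EndReadWaveform(ar);
    queue.Add(data);                                                // cheap hand-off, no heavy work here
    reader.BeginReadWaveform(50000, new AsyncCallback(myCallback2), null);
}

// Consumer: a separate thread does the slow work (phase calculation, display, logging...)
var worker = new System.Threading.Thread(() =>
{
    foreach (AnalogWaveform<double>[] block in queue.GetConsumingEnumerable())
    {
        ProcessBlock(block);   // placeholder for your own processing
    }
});
worker.Start();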

 

 

-Kevin P

Message 2 of 5

Dear Kevin,

Thank you very much for your helpful advice.

My problem is that I need my 30k samples to be synchronised to the external trigger (50 kHz), because there is a phase calculation afterwards. In your solution, I would not know where to pick the 30k out of the 50k. Or did I miss something in your explanation?

 

Message 3 of 5

I figured you'd still configure your AI task to be triggered by the external pulse, as you described in your initial post.  Then the first 30k samples are relevant, the next 20k are ignored, then 30k relevant, 20k ignored, and so on.

 

The approach is simple, but admittedly has a subtle flaw.   If the external signal's timing source for the 50 kHz pulses doesn't match your 6361's timing source, then the scheme of keep 30k, toss 20k... can get out of sync with those pulses.

 

Your device *does* support retriggering, but that doesn't automatically fix everything either.  Retriggering would require a finite sampling task, which in turn would require your app to reliably respond to the end of each 30k acquisition before the next trigger arrives.  20 msec will probably be enough most of the time, but not always.
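
If you did go that route, I believe (again, not being a C# person) it boils down to making the finite task's start trigger retriggerable, something like the following sketch (the PFI terminal is a placeholder):

// Finite 30k-sample task at 1 MHz whose start trigger re-arms in hardware
myTask2.Timing.ConfigureSampleClock("", 1000000.0, SampleClockActiveEdge.Rising, SampleQuantityMode.FiniteSamples, 30000);
myTask2.Triggers.StartTrigger.ConfigureDigitalEdgeTrigger("/Dev1/PFI0", DigitalEdgeStartTriggerEdge.Rising);
myTask2.Triggers.StartTrigger.Retriggerable = true;   // hardware re-arms for each new external pulse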

 

Further, upon re-reading your original post, I'm confused about a detail of your requirements.  You say you want 30k samples every 50 msec.  But you also say that you have a 50 kHz external pulse, which implies only 20 microsec between triggers.  At 1 MHz sampling, there's only time for 20 samples per trigger pulse, not 30,000.

 

In the end though, I suspect an indirect approach to retriggering will be the best way forward, though it is a bit more complex.  You'll pair up a counter output task with your AI task to accomplish it.  The AI task will be continuous and will specify the counter's 1 MHz output as its sample clock.  The counter will generate a finite pulse train of 30k pulses and will be retriggered by your external pulse signal.

    In this mode, the counter hardware keeps re-syncing to the external pulses, causing 30k AI samples to be taken each time it triggers.  Your app would no longer be required to retrieve each block of 30k before the next block starts arriving due to the next trigger.  You could instead (for example) retrieve 4 × 30k = 120k samples at a time and split them into 30k blocks later in your subsequent analysis.

    If your external pulses really come in at 20 microsec spacing, you can expect to capture 30k samples every 30.02 msec (30 msec of samples plus ~20 microsec until the next trigger).   So even this scheme doesn't give you exactly 30k samples exactly every 50 msec.  More complex schemes could probably get you there, but they'd involve another counter or two, making the overall approach even more indirect and complex.
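
In code, the pairing would be roughly along these lines (a sketch only; I can't vouch for the exact C# syntax, and the device/terminal names are placeholders):

// Counter task: finite train of 30k pulses at 1 MHz, retriggered by the external 50 kHz pulse
Task coTask = new Task();
coTask.COChannels.CreatePulseChannelFrequency("Dev1/ctr0", "", COPulseFrequencyUnits.Hertz, COPulseIdleState.Low, 0.0, 1000000.0, 0.5);
coTask.Timing.ConfigureImplicit(SampleQuantityMode.FiniteSamples, 30000);
coTask.Triggers.StartTrigger.ConfigureDigitalEdgeTrigger("/Dev1/PFI0", DigitalEdgeStartTriggerEdge.Rising);
coTask.Triggers.StartTrigger.Retriggerable = true;

// AI task: continuous, using the counter's internal output as its sample clock
Task aiTask = new Task();
aiTask.AIChannels.CreateVoltageChannel("Dev1/ai0", "", AITerminalConfiguration.Nrse, -10.0, 10.0, AIVoltageUnits.Volts);
aiTask.Timing.ConfigureSampleClock("/Dev1/Ctr0InternalOutput", 1000000.0, SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 120000);

aiTask.Start();   // arm the AI first so it's waiting on the counter-driven clock
coTask.Start();   // the counter now waits for the external triggers

// Then read e.g. 120k samples (4 blocks) at a time and split into 30k blocks afterwards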

 

 

-Kevin P

Message 4 of 5

I will read and re-read your explanations carefully.

 

Meanwhile, one clarification is needed: there is no requirement to record 30k samples between each rising edge of the 50 kHz signal. The only requirement is that each block of 30k samples starts on a rising edge. This means I record 30k samples, then wait for the next available rising edge of the 50 kHz signal, and so on.

This requirement comes from the phase computation we do afterwards.

Message 5 of 5