Multifunction DAQ


NI-DAQ PCI-6110 Unaccounted Overhead

I did try using wait mode and committing the finite task, and the overhead decreased substantially, but only for one channel. Once I increased the number of channels from 1 to 2 per task, i.e. ai0 -> ai0:1, the overhead shot back up to roughly where it was originally. Do wait mode and commit apply to all channels in a task, or would I have to give each channel its own task?
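For reference, here is a minimal Python nidaqmx sketch of the commit pattern being discussed (the attachments in this thread are LabVIEW VIs; the device name "Dev1", rate, and sample counts are placeholders). In DAQmx the commit state applies to the task as a whole, so every channel in the task is committed by the one call.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, TaskMode

# Finite AI task with two channels. TASK_COMMIT reserves and configures the
# whole task (ai0 and ai1 together) once, outside the per-line loop.
ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0:1", min_val=-10.0, max_val=10.0)
ai.timing.cfg_samp_clk_timing(100_000.0, sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=500)
ai.control(TaskMode.TASK_COMMIT)      # done once, before the loop

for line in range(10):                # per line: only start / read / stop
    ai.start()
    data = ai.read(number_of_samples_per_channel=500)
    ai.stop()                         # task drops back to the committed state

ai.close()
```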

 

Suppose I do use the seesaw, or split the triangle waveform in half; how would I ramp up and ramp down? By feeding a new waveform into AO while it is running? How would I hold steady for 0.1 ms? By adding the right number of samples to the waveform? Instead of splitting the waveform, or feeding a new waveform in, would I have to build one custom waveform to run per loop and delete segments of AI data appropriately? This is certainly becoming messy.

 

In any case, I would have to reverse every other line of the AI data if acquiring during both the AO ramp up and ramp down, since AI reads continuously. If I were to ramp in one direction only, I would instead need to delete the samples read while the X galvo returns to its start position. This becomes very messy and depends on how fast the user runs the galvo.
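For what it's worth, the line reversal itself is cheap once the data is reshaped into one row per sweep. A small NumPy sketch (the sample counts are placeholders and the array is just a stand-in for the AI read):

```python
import numpy as np

samples_per_line = 500                 # assumed known from the AO waveform
n_lines = 10
raw = np.arange(n_lines * samples_per_line, dtype=float)   # stand-in for AI data

image = raw.reshape(n_lines, samples_per_line)   # one row per X sweep
image[1::2] = image[1::2, ::-1]                  # reverse the return sweeps
```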

 

I tried your last suggestion (also the suggestion in your first post), a self-generated finite pulse train as a shared sample clock for AO/AI. It works in simulation on the programming machine, but it freezes and times out on the production machine. Am I setting the states in the wrong order? I attached the VI.

The board has two counters; would it be possible to use one counter for AI and one counter for X AO, both set to retrigger on AO? I've read that there won't be a trigger while samples are being read/written.

 

The main reason for using the above method over commit is, as written in the first paragraph, that commit does decrease the overhead to 1-2 ms, but it ramps back up to 10+ ms when adding more than one channel. Otherwise I would be using that method.

Message 11 of 37

In the above, I meant to say that the overhead is still quite substantial when combining committed finite read/write, not when putting two AI channels in one task. I attached a VI demonstrating what is meant. When read and write are committed separately, the overhead is in the 1-2 ms range.

Message 12 of 37

Your first code with the finite pulse train is probably freezing because the counter never gets triggered.  The AO start trigger happens only once and never recurs.  Further, it happens before the counter task is started.  So, the counter task never sees a trigger and the AO and AI tasks never get their sample clock.

 

What I intended was that the counter task would not be triggered at all, but would be explicitly started and stopped every iteration.  This adds a small amount of overhead, but very likely less than you get when you start and stop the buffered AO and AI tasks.
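A minimal Python nidaqmx sketch of that arrangement, assuming a hypothetical device name "Dev1" and placeholder rates (the thread's actual code is LabVIEW): AO and AI are continuous tasks clocked by the counter's internal output and are started once; only the untriggered finite CO is started and stopped each line.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

SAMPS, RATE = 500, 100_000.0           # placeholder line length and sample rate

ao, ai, co = nidaqmx.Task(), nidaqmx.Task(), nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")

# AO and AI are "continuous" but clocked by the counter's internal output, so
# they only produce/acquire samples while the counter is putting out pulses.
for t in (ao, ai):
    t.timing.cfg_samp_clk_timing(RATE, source="/Dev1/Ctr0InternalOutput",
                                 sample_mode=AcquisitionType.CONTINUOUS,
                                 samps_per_chan=4 * SAMPS)

# Untriggered finite pulse train: exactly one line's worth of clocks per start.
co.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=RATE, duty_cycle=0.5)
co.timing.cfg_implicit_timing(sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=SAMPS)

# Allow regeneration (see also the tips below) so one pre-loop write covers every line.
ao.out_stream.regen_mode = RegenerationMode.ALLOW_REGENERATION
ao.write(np.linspace(-1.0, 1.0, SAMPS).tolist(), auto_start=False)

ao.start()          # AO and AI are started once, before the loop...
ai.start()

for line in range(10):
    co.start()                                          # ...only the CO is cycled
    data = ai.read(number_of_samples_per_channel=SAMPS)
    co.wait_until_done()
    co.stop()
    # hand `data` off for processing/display, ideally to a separate consumer loop

for t in (co, ai, ao):
    t.stop()
    t.close()
```

Because the buffered AO and AI tasks never stop, the only per-line cost is the counter start/stop.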

 

A couple little things that can help slightly more:

- set your buffered AO task to regenerate.  Then you can write one or two full cycles of the repetitive sweep data one time before the loop and never rewrite inside the loop.

- defer the processing & display to another loop via queue.  But if you leave it here, I'd sequence things so that the counter stop and restart can happen in parallel with it.  Just be sure the counter stop and restart can't happen until after the AI Read.
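On the second point, the hand-off might look like the following sketch (plain Python with a queue standing in for a LabVIEW queue and a separate consumer loop; process_and_display is a hypothetical placeholder):

```python
import queue
import threading

def process_and_display(line_data):
    # Placeholder for the real processing + waveform display.
    print(len(line_data), "samples processed")

data_q = queue.Queue()

def consumer():
    # Processing/display loop: runs in parallel with the DAQ loop.
    while True:
        item = data_q.get()
        if item is None:                 # sentinel: producer is finished
            break
        process_and_display(item)

threading.Thread(target=consumer, daemon=True).start()

# Inside the DAQ loop, hand each line off instead of processing it in place:
#     data_q.put(data)
# ...and after the loop, shut the consumer down:
#     data_q.put(None)
```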

 

I'll comment on the other code in a separate post to try to keep things more clear.

 

 

-Kevin P

 

Message 13 of 37

I know of no reason why a 2-channel AI task would show substantially more overhead than a 1-channel AI task.  I'd recommend you give that conclusion more scrutiny to be sure there isn't another explanation.

 

I see some things I'd change in the code with finite AO and AI to try to speed up the loop iterations.

 

1. Surprisingly, the "Wait Until Done" VIs may be a major culprit.  See this thread where I was recently surprised to learn this myself.  (The link goes straight to a specific post mid-thread, but it'll be helpful to at least skim some of the rest).

   You should be safe to remove the Wait Until Done calls if you add sequencing to be sure that you don't Stop the tasks until they've completed their finite samples.  This is easy enough: just make sure the AO Stop can't occur until after the AI Read returns with the entire line's worth of samples (see the sketch after this list).

 

2. You could let the DAQmx Stops run in parallel with the processing & display.  (Actually, I'd defer the processing and display via queue, but we've been over this before.)

 

3. You could also have the software timed AO update happen in parallel with those things.  All the finite sampling will have ended even if the official DAQmx Stop hasn't yet executed.
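Putting points 1-3 together, a Python nidaqmx sketch of the restructured loop (the device name, rates, and the choice to slave AI to the AO sample clock are assumptions; the posted VI may wire this differently):

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, TaskMode

SAMPS, RATE = 500, 100_000.0            # placeholders

ao, ai, y_ao = nidaqmx.Task(), nidaqmx.Task(), nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")     # X sweep (buffered, finite)
y_ao.ao_channels.add_ao_voltage_chan("Dev1/ao1")   # Y step (software-timed)
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")

ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=SAMPS)
# AI slaved to the AO sample clock (an assumption about the wiring), so a
# completed AI Read implies the finite AO sweep has finished as well.
ai.timing.cfg_samp_clk_timing(RATE, source="/Dev1/ao/SampleClock",
                              sample_mode=AcquisitionType.FINITE,
                              samps_per_chan=SAMPS)

for t in (ao, ai):
    t.control(TaskMode.TASK_COMMIT)     # commit once, outside the loop
# Regeneration is allowed by default, so this one write is reused on each restart.
ao.write(np.linspace(-1.0, 1.0, SAMPS).tolist(), auto_start=False)

for line in range(10):
    ai.start()                          # AI first, so it is armed for the AO clock
    ao.start()
    # No Wait Until Done: the read blocks until all SAMPS samples are in, which
    # also means the AO has generated its full finite sweep.
    data = ai.read(number_of_samples_per_channel=SAMPS)
    ao.stop()                           # the stops and the Y update below could
    ai.stop()                           # run in parallel with processing/display
    y_ao.write(line * 0.01)             # software-timed Y increment (placeholder)

for t in (ao, ai, y_ao):
    t.close()
```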

 

 

-Kevin P

 

Message 14 of 37

Oops, there was a 3rd line of inquiry I forgot to address that wasn't in the posted code.

 

I already outlined the general approach for a purely continuous acquisition containing both AO signals in the same hw-timed task.  It's a little messy but not horrific, and it's the only method that can remove *all* unnecessary overhead.

 

Key elements are that the AO data is pre-known and can be computed outside the loop.  You'll probably still need to feed the AO task in chunks inside the loop, though.  Whenever you request a sweep's worth of AI data, it helps to put other time-consuming code in parallel: the AI Read will be stuck waiting a few msec for the requested data to arrive, so you may as well get other things done during that time (such as feeding future data to the AO task).
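A rough Python nidaqmx sketch of that continuous structure, with placeholder names and a simplified X triangle / Y staircase (slaving AI to the AO sample clock is an assumption here):

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode
from nidaqmx.stream_writers import AnalogMultiChannelWriter

SWEEP, RATE, N_LINES = 500, 100_000.0, 100            # placeholders

ao, ai = nidaqmx.Task(), nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("Dev1/ao0:1")       # X and Y in one hw-timed task
ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")

ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS,
                              samps_per_chan=8 * SWEEP)
ai.timing.cfg_samp_clk_timing(RATE, source="/Dev1/ao/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS,
                              samps_per_chan=8 * SWEEP)
ao.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION  # stream fresh data

# The AO data is pre-known: a simplified X triangle and Y staircase, computed once.
x = np.tile(np.concatenate([np.linspace(-1, 1, SWEEP // 2),
                            np.linspace(1, -1, SWEEP // 2)]), N_LINES)
y = np.repeat(np.linspace(-1, 1, N_LINES), SWEEP)
frame = np.vstack([x, y])

writer = AnalogMultiChannelWriter(ao.out_stream, auto_start=False)
writer.write_many_sample(np.ascontiguousarray(frame[:, :4 * SWEEP]))  # prime the buffer
ai.start()                                             # AI waits on the AO clock
ao.start()

pos = 4 * SWEEP
for line in range(N_LINES):
    # Queue up the next chunk, then block on the read for one sweep's worth of AI.
    if pos < frame.shape[1]:
        writer.write_many_sample(np.ascontiguousarray(frame[:, pos:pos + SWEEP]))
        pos += SWEEP
    data = ai.read(number_of_samples_per_channel=SWEEP)
    # hand `data` off to a consumer loop rather than processing it here

for t in (ai, ao):
    t.stop()       # stop before the AO buffer runs dry
    t.close()
```

In LabVIEW the chunk write and the sweep read can run truly in parallel; in this sequential sketch they simply run back to back.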

 

 

-Kevin P

Message 15 of 37

Right, I tried removing "Wait Until Done" from testDAQ2_simple.vi posted above, but the overhead remained the same. I still do not know why the overhead jumps back to 10+ ms when committed finite read/write are combined in a single loop, yet returns to 1-2 ms when they are in separate loops.

 

I was under the impression that if the finite CO was set as retriggerable while also serving as the clock for AO/AI, it would fire off sample pulses every time a rise was detected from AO. I see now that the start trigger only occurs once. Is there a way to set the trigger to detect a rise from an AO, for example?

In short, is there a way to have the CO fire off a pulse train every time a rise is detected from the on-demand AO? In the meantime, I will try to see how the overhead fares when the finite CO task is restarted each iteration.

 

I'm also already working on a custom waveform that will handle the whole process in a continuous acquisition. It's not as bad as I thought. I've already converted the Y AO seesaw into digital steps representing each line. Next, for X AO, I will add a delay in the middle of the triangle to account for the Y AO step. I figure that if the delay is long enough to give the Y galvo time to move, things should be fine. The AI data processing is also trivial once these things are taken into account, if you know how to deal with arrays in LabVIEW.
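For comparison, here is one way such a pattern could be built in NumPy (amplitudes, rates, and line counts are placeholders; the 5 ms sweep and 0.1 ms dwell come from earlier posts, and the exact placement of the Y step is a design choice):

```python
import numpy as np

RATE = 100_000.0                       # placeholder sample clock rate
n_sweep = int(0.005 * RATE)            # 5 ms sweep each way
n_dwell = int(0.0001 * RATE)           # 0.1 ms hold at each turnaround for the Y step
n_lines = 100                          # assumed even

up = np.linspace(-1.0, 1.0, n_sweep)
# X for one up/down pair: ramp up, hold at +1, ramp down, hold at -1.
x_pair = np.concatenate([up, np.full(n_dwell, 1.0), up[::-1], np.full(n_dwell, -1.0)])
x = np.tile(x_pair, n_lines // 2)

# Y staircase: one level per half-sweep.
y_levels = np.linspace(-1.0, 1.0, n_lines)
y = np.repeat(y_levels, n_sweep + n_dwell)
# Shift Y earlier by the dwell length so each step lands at the start of a
# turnaround hold, giving the Y galvo the full 0.1 ms to settle.
y = np.concatenate([y[n_dwell:], np.full(n_dwell, y_levels[-1])])

assert x.size == y.size
xy = np.vstack([x, y])                 # 2 x N pattern for a two-channel hw-timed AO task
```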

 

Ironically, though solutions are being found, we still do not know why restarting tasks adds overhead. Changing from finite to continuous and restarting the tasks in a loop adds as much overhead, if not more.

Message 16 of 37

I'm having a hard time matching up the posted code, your description of various code variations, and the symptoms you see when running different variations.  Can you post both versions of code you referred to in the 1st paragraph?  Earlier in the discussion I thought you said the 10+ msec overhead appeared when there were 2 channels in the AI task.  This description talks about it depending on whether AI and AO are in the same loop or different loops.

 

I don't think CO can be retriggered by the on-demand AO.  I've not known of any on-demand tasks exposing timing signals that can be used to sync other tasks.  I haven't really investigated this deeply though.

 

Your 3rd paragraph is right in line with my earlier illustration of how to handle a 5 msec sweep and a 0.1 msec delay to increment the Y galvo.  The same basic idea holds for any other time intervals.

 

No, the root cause of the "overhead" isn't yet well understood.  Up til now though, it seems that there have been several code variations in addition to the ones posted, and I'm not always sure which behavior and symptoms you've described correspond with a particular version of code.   Can you post 2 versions of code with minimal changes that illustrate this jump from 1-2 msec to 10+ msec?  

 

 

-Kevin P

Message 17 of 37

I made a second post on this page, with the VI, that corrects the description. The statement I made about increased overhead when adding channels was a mistake.

The symptoms when AI/AO are in the same loop are the correct ones, and the code posted above (testDAQ2_simple.vi, 57 KB) shows the 10+ ms overhead when committed finite AO/AI are combined.

"In the above, I meant to say that the overhead is still quite substantial when combining committed finite read/write, and not two AI channels in one task. I attached a VI demonstrating what is meant. When read and write are committed separately, the overhead is in the 1-2 ms range." -> 2nd post on this page

 

The code showing 1-2 ms is on the first page, under the name given below.

Message 18 of 37

Those 2 sets of code you linked still differ in too many distinct ways to figure out for sure which thing(s) contribute most to the "overhead".

 

I *do* note that the slower code *still* includes the processing and waveform graph displays inside the loop you're observing while the faster method defers them.  That isn't the only difference, but it's the one I've been suspicious of for half a dozen rounds back and forth in this thread.

 

Other ways the code differs:

- slower code is less careful about making sure time measurements catch only the intended parts of the code.  It also measures at lower resolution.

- slower code includes the on-demand AO in the loop, adding to execution time

- slower code allows a little less parallelism between the AO and AI tasks (the starts happen in sequence), possibly adding a small increment to execution time

 

 

-Kevin P

Message 19 of 37

I did look more closely at separate sections of the slower code as you suggested, and here are my findings:

1. For AI, "Start Task" followed by reading the buffer ("Read") takes 5 ± 2 ms; the overhead for starting AO is negligible.

2. Processing for the waveform adds 2 ms.

3. "Wait Until Done" and "Stop Task" for AI have negligible overhead; however, 5-7 ms of overhead is seen for "Wait Until Done" and "Stop Task" on AO. The on-demand AO naturally has no Start/Wait/Stop.

 

In this case, all the overhead is accounted for, which explains why I saw such an increase when measuring the time for the whole loop with combined AO/AI compared to the VIs where they are separate.

 

On another note, for future use I tried syncing the AI/AO clocks to the CO finite pulse train, but I am having trouble with timeouts. Last time I tried using a start trigger, and that did not work out as intended. This time I omit the start trigger and manually start/stop the CO after the on-demand AO update, yet I am running into the exact same issue I had with the start trigger. I attached the VI in question. Is there something configured incorrectly? The VI runs in simulation but not on the production machine.

Message 20 of 37