Multifunction DAQ


NI-DAQ PCI-6110 Unaccounted Overhead

I have to admit I don't have a clear idea of what you mean by "shift".  Not exactly, anyway.

 

Your code appears to sync the AO and AI tasks by having the AI depend on the AOStartTrigger, but starting AO last (and thus asserting the AOStartTrigger).   That looks essentially right to me.  If they start and run without error, the problem is *not* going to be that the tasks get out of sync in hardware.  It is much more likely a problem in data handling somewhere.

 

 

Possible sources:

  • signal generation subvi for AO.
  • failure to logically sync the AI reads with the known AO pattern. Did you insert "dead zones" into the AO waveform (as suggested earlier in the thread) to allow time for the Y galvo to increment to the next line?  If so, you need to account for that by discarding the corresponding AI data.
  • the "reshape" subvi in the image display consumer loop.  
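Since LabVIEW is graphical, here is a text-language sketch (in Python, with assumed names and sizes) of the second bullet: if each AO scan line is followed by a "dead zone" for the Y galvo to increment, the corresponding AI samples have to be discarded before reshaping into an image.

```python
# Hypothetical sketch: `line_samps` samples per traced line, followed by
# `dead_samps` of flyback/increment "dead zone" that must be dropped.

def strip_dead_zones(ai_data, line_samps, dead_samps):
    """Keep only the AI samples acquired while the galvo traced a line."""
    period = line_samps + dead_samps
    kept = []
    for start in range(0, len(ai_data) - period + 1, period):
        kept.extend(ai_data[start:start + line_samps])  # drop the dead zone
    return kept

# Example: 4-sample lines with 2-sample dead zones (marked -1)
data = [0, 1, 2, 3, -1, -1, 4, 5, 6, 7, -1, -1]
assert strip_dead_zones(data, 4, 2) == [0, 1, 2, 3, 4, 5, 6, 7]
```

The same bookkeeping applies whether the discard happens in the consumer loop or before the reshape step.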

Give these some extra scrutiny.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 31 of 37

It turns out that the origin of the problem is the response delay of the galvo driver when coupled with a DAQ. This lag is rated at around 400 μs in the specifications, meaning there is a ~400 μs delay between the driver receiving a voltage point from the signal generator and the galvo actually responding.

 

Suppose the slope up of the triangle is the first line and the slope down is the second line. The AO/AI is started with a trigger, but in reality the galvo lags the AO by 400 μs. So, once the AI starts collecting data on the slope down, the galvo is still around the top of the triangle, causing a shift in every second line. There were no problems in the way I was processing the data or in my AO waveforms.

 

When I set the trigger delay to ~400 μs, the shifting vanishes and the system images with no problems.
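The arithmetic behind the fix can be sketched in Python (the sample rate here is an assumed example, not from the thread): a fixed 400 μs galvo lag corresponds to a fixed number of AI samples, which appears as a shift on every second (reverse) line of a bidirectional scan. Delaying the start trigger, as done above, or rolling the reverse lines in software are two ways to compensate.

```python
# Back-of-envelope check; sample rate is an assumed illustrative value.

def lag_in_samples(lag_s, sample_rate_hz):
    """Number of AI samples the data lags the galvo's true position."""
    return round(lag_s * sample_rate_hz)

def shift_reverse_lines(lines, shift):
    """Roll each odd-indexed (reverse) scan line by `shift` samples --
    a software alternative to delaying the AI start trigger."""
    fixed = []
    for i, line in enumerate(lines):
        if i % 2 == 1 and shift:
            line = line[shift:] + line[:shift]
        fixed.append(line)
    return fixed

# e.g. a 400 us lag at an assumed 100 kS/s rate is a 40-sample shift
assert lag_in_samples(400e-6, 100_000) == 40
```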

 

I just wanted to thank you all for the advice and help you've given these past weeks. I came in with the intention of being an intermediate LabVIEW user but learned a tremendous amount.

Message 32 of 37

I seem to have run into another snag.

 

I've been trying to turn the posted VI into a subvi to incorporate into the main program. Everything is passed through control refs in the subvi and references in the main VI, so the subvi does all the work. When I run the subvi from the main VI, AI stops reading after 100 or so lines and complains about the hardware not being able to keep up with the software. I've implemented the producer/consumer setup in the subvi. Does producer/consumer not play well, performance-wise, when everything is passed through refs?

 

 

Message 33 of 37

Only time for a couple brief comments:

 

- I'm not the most knowledgeable about things like the UI thread and thread-switching issues, but I do know *some* things.  Generally, anyplace I need high performance code execution, I do not update the GUI or use other things that require the UI thread.  Property nodes and control refs for controls/indicators are among those things.

- Producer/Consumer is a helpful pattern when the consumer can *on average* keep up with the producer.  Falling behind temporarily can be ok due to the buffering built into the queue, as long as the consumer can then consume fast enough to catch back up.  

 

I suspect that your use of control refs in both producer and consumer loops is making them "fight" and interfere with one another over access to the UI thread.  It isn't obvious on the surface that control refs and property nodes of GUI elements should be so much less efficient than other methods of handling data in LabVIEW, but it's true nonetheless.

  The very simplest thing I can think of that might help in the short run is to read way more AI samples per iteration (thus slowing down the loop rate, and making the overhead of UI thread access and data processing a much smaller fraction of iteration time).  

   In the long run, you need to get those control refs out of your high-performing loops.
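The "read more samples per iteration" suggestion can be sketched with rough numbers (all assumed for illustration, not from the thread): fixed per-iteration overhead such as UI-thread access becomes a smaller fraction of each loop iteration as the read size grows.

```python
# Rough model; the rate and overhead figures are assumed examples.

def overhead_fraction(samples_per_read, sample_rate_hz, overhead_s):
    """Fraction of each loop iteration spent on fixed overhead."""
    read_time = samples_per_read / sample_rate_hz
    return overhead_s / (read_time + overhead_s)

# Assumed: 100 kS/s acquisition, 5 ms of fixed overhead per iteration
rate, ovh = 100_000, 0.005
small = overhead_fraction(1_000, rate, ovh)   # 1000-sample reads
large = overhead_fraction(50_000, rate, ovh)  # 50000-sample reads
assert small > 0.3 and large < 0.01
```

With small reads the fixed overhead dominates the iteration; with large reads it becomes negligible, which is why bigger reads buy breathing room even before the control refs are removed.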

 

There's another major snag waiting for you too.  It looks like you launch this entire producer/consumer set of processes inside an event structure's timeout handler.  Even if that doesn't blow up in your face right away, it's definitely a bad practice that you shouldn't keep doing.  Event handling code should execute quickly, not get hung up waiting for a long-running subvi to complete.

 

 

-Kevin P

Message 34 of 37

I did confirm that the refnums and control refs were slowing things down considerably when I probed in a little more detail in the consumer loop (the producer loop was fine). I am guessing I will need to run the producer/consumer loops in the main VI.

 

This VI is intended for an end user who simply wants to scan and won't be dealing with any of the back-end LabVIEW aspects. I'll be compiling the main VI into an executable. An event structure driving a state machine is the only way I know of to package the system into an easy-to-use GUI plus functionality.

What are the drawbacks of executing long-running subVIs inside event structures? How would things blow up in this case? Should I keep only the GUI in the event structure and move the other processes off to other loops?

Message 35 of 37

When you call a subvi inside one of the event cases of an event structure, your program becomes unresponsive to any other events until that subvi completes.  In your case, that subvi is a potentially long-running process.  While it runs, your program can't respond to any other user inputs because it's stuck inside one event case, waiting for the process subvi to finish. 

 

One step toward a better approach is to add a "message handling" loop parallel to your event loop.  Now the event case does nothing more than queue up a message like "start processes" for the parallel loop to receive.  The event loop finishes quickly and is immediately responsive to other inputs, while the parallel loop launches the producer/consumer processes.

   This won't be the end of thinking through how to keep parallel processes communicating, responsive, but also allowed to run for long periods or indefinitely.  It's just a first step.  You can learn more by looking at the Queued Message Handler project template: from the LabVIEW "File" menu, pick "Create Project...".
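Since LabVIEW is graphical, here is a text-language analogue of the pattern in Python (the message names are assumed): the "event case" only enqueues a message and returns immediately, while a parallel message-handling loop does the long-running work.

```python
# Minimal queued-message-handler sketch; message strings are assumptions.
import queue
import threading

messages = queue.Queue()
results = []

def on_start_button():
    """Event case: enqueue and return at once, so the UI stays responsive."""
    messages.put("start processes")

def message_handler():
    """Parallel loop: receives messages and launches the real work."""
    while True:
        msg = messages.get()
        if msg == "stop":
            break
        if msg == "start processes":
            results.append("producer/consumer launched")

worker = threading.Thread(target=message_handler)
worker.start()
on_start_button()       # the event case finishes immediately
messages.put("stop")
worker.join()
```

The event handler never blocks on the work itself; shutting down is just another message, which keeps the same single channel of control the QMH template uses.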

 

 

-Kevin P

Message 36 of 37

In my case, since the subvi was placed in the timeout case of the event structure, the user does have access to the UI during processing. I had to pass a Boolean from a stop button to the subvi through control refs and refnums in order to stop the subvi, and it did work as intended; however, that imposed almost 1000% overhead. Upon moving the producer/consumer loops directly into the main VI, the overhead vanished, but that's very bulky and hard to manage inside an event structure.

 

I'll look into this further and investigate the Queued Message Handler and other designs like Master/Slave.

 

Again, thanks for all the help!

Message 37 of 37