Multifunction DAQ


Fast Read/Write loops using USB 6356


Hi,

 

I am running into trouble setting up a read/write loop on my DAQ board.  On the hardware side I have two analogue input channels and one output.  I want to read single values from the inputs, do some math with those two numbers, and then output the result as fast as possible.  I have attached a screenshot of what I have gotten to work so far.  It seems to only be able to loop at about 500 Hz, and I need to run at >2 kHz.

 

Previously I have run this program (without the write task) at 3 kHz without any problem, so I do not believe it is the math in the subVIs that is slowing me down.  I realize the write task is currently software-timed, which is probably costing me a lot, but I haven't been able to set up any timing on that task without getting errors that my device doesn't support continuous writes without a buffer.  And if I buffer the write task, won't the output no longer be synchronous with the input data?
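
(For readers following along outside LabVIEW: here is a minimal sketch of this kind of software-timed read/compute/write loop, written against NI's Python API, nidaqmx. The device/channel names and the math are placeholders, not the actual program in the screenshot. Both calls are on-demand, so each iteration makes a round trip across the USB bus, which is what caps the loop at a few hundred hertz.)

```python
import nidaqmx

# Two on-demand AI channels and one on-demand AO channel ("Dev1" is a placeholder).
with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai1")
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")

    for _ in range(1000):
        a, b = ai_task.read()   # one on-demand sample per channel
        result = a * b          # stand-in for the subVI math
        ao_task.write(result)   # on-demand (software-timed) write, no buffer
```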

 

In short, I need the information coming into the DAQ to be written to the output with as little latency as possible and at as high a repetition rate as possible.


Thanks, I appreciate any suggestions on what I could do to speed up my loop!

 

-Zach-

Message 1 of 7
Solution
Accepted by topic author ZF-8014

I do not think you can accomplish <500 µs latency across USB.  Frankly, you won't get it consistently over a desktop bus like PCIe while running Windows or another non-real-time OS either.

 

You're right that buffered writes will also add latency.  If you really need a 2 kHz control loop:

- you can't do it using a USB card

- you can't count on it under Windows.  But you can probably get partial success there with a desktop board and "hardware-timed single point" mode for AO (see the sketch below).
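
For concreteness, a hedged sketch of that mode using NI's Python API (nidaqmx); the device, channel, and rate are placeholders, and a USB device like the 6356 will reject this configuration:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as ao_task:
    ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    # No buffer: one sample per sample-clock tick, and the software loop
    # must keep up with every tick or DAQmx reports a missed-sample error.
    ao_task.timing.cfg_samp_clk_timing(
        rate=2000.0,
        sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT,
    )
    ao_task.start()
    for value in (0.0, 1.0, 0.5):   # stand-in for computed outputs
        ao_task.write(value)        # blocks until the next clock tick
```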

 

 

-Kevin P

 

 

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 7

I was a little afraid of that, thanks!

So I do have another option, but one that I wasn't able to get to work at all previously.  Is there any way to run a software-timed loop using a digital trigger collected via a DAQ board?  Basically, I can read values on a DAQ with an external 1 kHz trigger, collecting some reasonable number of samples (~1000).  Can I also run a timed loop off those same triggers that reads a global variable in LabVIEW, such that the information from the DAQ and the information from the variable are synced with each other (i.e. the DAQ would read outside the loop, but the loop would still run in parallel with the DAQ collection)?  This is kind of a long shot, I realize, but otherwise I will have to retool how I am collecting data, so it was worth a shot.
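
(A sketch of the hardware-clocked half of that idea in NI's Python API, nidaqmx, assuming the 1 kHz trigger arrives on PFI0; all names are placeholders. This hardware-times the sampling itself, but any loop consuming the data is still scheduled by the OS, so its iterations are not locked to the trigger edges.)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as ai_task:
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai_task.timing.cfg_samp_clk_timing(
        rate=1000.0,              # expected external clock rate
        source="/Dev1/PFI0",      # sample clock taken from the trigger line
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=1000,
    )
    data = ai_task.read(number_of_samples_per_channel=1000)
```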

 

Thanks!

Message 3 of 7

Not sure I have a clear & accurate picture of your 2nd proposal, but am inclined to be skeptical.  The essential problem of latency comes from the need to involve Windows, DAQmx, and the USB bus itself.  I don't see how you avoid that by attempting to get the same info by a more indirect method.

  

I really don't think you can gradually increment your way toward a solution with your current USB DAQ, your rate & latency requirements, and your non-real-time platform.  At least one of them will need to change.

So what's being controlled, and what's its response bandwidth?  Could you target a possibly-feasible 500 Hz rather than 2 kHz?  Can you move to a desktop PCIe DAQ card?

 

 

-Kevin P

Message 4 of 7

This whole write-out-through-the-DAQ-board thing is actually already kind of an indirect solution to my problem.  I don't actually have anything to control (I framed it that way because it looks like a control loop); I just want to take a position calculated from the DAQ inputs and use that number in other VIs, timed with the collection of other data.

 

My first thought was to clock a timed loop via a (second) USB DAQ board so that the loop was hardware-timed and the DAQ and the camera were simultaneously collecting chunks of samples (see the attached "multisample" example).  The timed loop didn't seem to run until after everything else was already finished, or at least didn't seem to be running on the same time scale as the DAQ, so I tried moving the DAQ readout inside the loop and collecting single samples.  Again, nothing seemed to be correlated, which is when I decided to just hardwire the calculated position from the first DAQ into the second DAQ board, which I already know how to trigger and run simultaneously with my camera.

 

To answer your other questions: the minimum rate I can run at is 1 kHz, and we don't have any PCIe cards, so cost is the only thing stopping us there.  Obviously if there is no way around it, we can go that route, but we would really rather not.

 

Thanks for your help, this has really been throwing me for a loop (pardon the pun).

Message 5 of 7

Ok, so now I have the idea that your looping speed is primarily meant for keeping separate streams of input data properly correlated in time.  Is that more or less right?

 

If so, you can probably do simple things to get fairly close, and it's pretty unlikely (given the issues I brought up previously) that there's *anything* you can do that'll reliably be a heckuva lot better with your current DAQ hardware and OS.

 

Basically the advice is, accept your losses (literally) and move on.  Specifically, go ahead and attempt to get updated AI readings as often as you like.  "Publish" these (via global, functional global, notifier, whatever) immediately for use by your other process.  That other process now gets to do calculations on the freshest possible data it can have access to.  It isn't perfect.  You'll lose some samples and do multiple iterations on stale data sometimes.  But it's about as good as you're going to be able to do without significant changes in equipment.
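
A minimal sketch of that publish-the-freshest-value pattern, using a Python thread and lock in place of a LabVIEW global or notifier (all names and rates here are illustrative):

```python
import threading
import time
import nidaqmx

latest = {"sample": None}
lock = threading.Lock()
stop = threading.Event()

def acquire():
    # Read as often as possible and overwrite the shared value.
    with nidaqmx.Task() as ai_task:
        ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        while not stop.is_set():
            sample = ai_task.read()        # on-demand single read
            with lock:
                latest["sample"] = sample  # overwrite, never queue

def consume():
    # Work on whatever is freshest; may repeat stale data or skip samples.
    while not stop.is_set():
        with lock:
            freshest = latest["sample"]
        if freshest is not None:
            _position = freshest * 2.0     # stand-in for the position math
        time.sleep(0.0005)                 # ~2 kHz consumer pace

threading.Thread(target=acquire, daemon=True).start()
threading.Thread(target=consume, daemon=True).start()
time.sleep(1.0)
stop.set()
```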

 

 

-Kevin P

Message 6 of 7

Yeah, that is precisely what I am trying to do.  If I am limited by the hardware and OS, I will just have to see how much loss I get and either start looking into new equipment or try to save some of the math for post-processing.  I appreciate the advice; I've not had to deal with these high-speed considerations before, so it is good to know where my limitations are.

 

Thanks!

 

-Zach-

Message 7 of 7