
Analog Input-Output delay

Hi

I am a new LabVIEW user. I am trying to implement a controller in real time using LabVIEW. For that purpose, I started with an analog input/output exercise. As part of the learning process, I was trying to apply an input to a system, acquire the data, and feed it back out through an analog output channel. However, I noticed there is a significant delay between input and output, about 1 ms. Then I thought of doing the simplest exercise: I generated an input signal, read it through LabVIEW, and fed it back out again, so basically the task is only the ADC and DAC. But still, it has the same amount of delay. I was under the impression that if I did hardware-timed data reads and writes, it would reduce the delay, but there was no change. Can anyone please help me out with this? Or will there always be this amount of delay?

For reference, I am attaching the .vi file. I am using a PXI-6363.

 

Any kind of help would really be appreciated.

I was under the impression that if I did hardware-timed data reads and writes, it would reduce the delay.

Nope.  Most hardware-timed I/O is also *buffered*, which increases the delay, often quite significantly.  Your use of "hardware-timed single point" mode avoids the buffer, but reads and writes are still *initiated* by software calls under your OS.  So you'll still be subject to the timing variability of that OS (presumably Windows, which is about 98% likely).  And you also add a *little* delay because every individual I/O sample has to wait for the next sample clock to arrive.
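
I can't paste a LabVIEW diagram as text here, but for reference, here's roughly what "hardware-timed single point" on paired AI and AO tasks looks like, sketched with NI's nidaqmx Python API.  Treat it as a sketch only: the device, channel, and clock-terminal names are placeholders, not taken from your VI.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 1000.0  # Hz -- placeholder sample clock rate

ai_task = nidaqmx.Task()
ao_task = nidaqmx.Task()

# Placeholder channel names -- substitute your actual PXI-6363 device name.
ai_task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")
ao_task.ao_channels.add_ao_voltage_chan("PXI1Slot2/ao0")

# Hardware-timed single point: no buffer, but every read/write call is still
# *initiated* by software and then has to wait on the hardware sample clock.
ai_task.timing.cfg_samp_clk_timing(
    RATE, sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT
)
# Slave the AO sample clock to the AI sample clock so both tasks share timing
# (placeholder terminal name).
ao_task.timing.cfg_samp_clk_timing(
    RATE,
    source="/PXI1Slot2/ai/SampleClock",
    sample_mode=AcquisitionType.HW_TIMED_SINGLE_POINT,
)

ao_task.start()
ai_task.start()
```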

 

You call a read function.  The board must then wait for the next hardware sample clock, capture a sample, and then the driver delivers the data up to your app.  Then you call a write function.  The driver must deliver the data down to the device, and then the device waits for the next sample clock after that to generate the sample as a real-world signal.
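
Continuing the sketch above, the per-iteration read-then-write round trip looks like this (still placeholder names, and the loop has to keep up with the sample clock or DAQmx will report missed samples):

```python
try:
    for _ in range(1000):
        # Blocks until the next AI sample clock edge, then returns one sample.
        sample = ai_task.read()

        # The new value is driven on the analog output on a *later* shared
        # sample clock edge, so the round trip spans at least two intervals.
        ao_task.write(sample)
finally:
    ai_task.close()
    ao_task.close()
```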

 

The *best* your code can do is to have output delayed by 1 sample period.  And you will not always get this best case.

 

Your code then calls a function to wait again for *another* sample clock before looping back around to the next iteration.  This sets your best case iteration rate to 3 sample clock intervals.  As far as I can tell, this function isn't needed and only hurts you.  Removing it gets you down to a best case of 2.

 

On-demand software timing would likely give you a significantly better best case (especially if you took a pipelining approach: send the input data to a shift register for use by the output task on the next iteration, and let the input and output tasks run in parallel without any dataflow dependency).  But you'd still be subject to OS timing variability.
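
As a rough text sketch of the on-demand idea (nidaqmx Python again, placeholder channel names): with no sample clock configured, each read and write executes as soon as it's called, and an ordinary variable stands in for the shift register.  In LabVIEW you'd additionally let the read and write nodes run in parallel with no wire between them.

```python
import nidaqmx

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ao_task:
    ai_task.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0")  # placeholder
    ao_task.ao_channels.add_ao_voltage_chan("PXI1Slot2/ao0")  # placeholder

    # No cfg_samp_clk_timing() call: both tasks are software-timed
    # ("on demand"), so the calls don't wait on a hardware sample clock.
    previous_sample = 0.0  # plays the role of the shift register

    for _ in range(1000):
        # Write the sample acquired on the *previous* iteration (pipelining),
        # then acquire a fresh sample for the next iteration to write.
        ao_task.write(previous_sample)
        previous_sample = ai_task.read()
```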

 

On a much more general note, loop rates on the order of 1000 Hz won't be reliably achievable under Windows anyway, so while you're in learning mode, try working with rates more in the realm of tens of Hz.
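
If you want to see why, here's a quick non-DAQ sketch you can run on any machine: it tries to hold a 1 kHz software-timed loop and reports the worst iteration period.  The exact numbers depend on your machine and OS, but under a desktop OS the worst case is typically well above 1 ms.

```python
import time

TARGET_PERIOD = 0.001  # aim for a 1 kHz software-timed loop
ITERATIONS = 5000

worst = 0.0
t_prev = time.perf_counter()
for _ in range(ITERATIONS):
    time.sleep(TARGET_PERIOD)  # stand-in for one loop's worth of work
    t_now = time.perf_counter()
    worst = max(worst, t_now - t_prev)
    t_prev = t_now

print(f"worst iteration period: {worst * 1e3:.2f} ms "
      f"(target was {TARGET_PERIOD * 1e3:.2f} ms)")
```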

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).