> Surprisingly, it depends mostly on the sample rate. If I set the rate
> (the waiting time) to 1ms, the digital output responds within that
> 1ms. If I set the rate to 20ms, the digital output also responds
> within that 20ms with an average of 10ms.
>
> It seems, LabView knows that there's plenty of time in a 20ms-loop and
> thus spreads the execution of the code inside the loop over that time.
>
> Can I change that? Can I tell LabView to execute the code rather at
> once and then wait for the rest of the time, do nothing or drink some
> tea but give me a bit more realtimeness?
>
It sounds like the I/O knows the sampling rate and the
driver basically waits until the correct time to
make the reading. There are multiple ways of doing
the I/O, including making it a single-point read and
using SW timers like Wait (ms) or Wait Until Next ms
Multiple to throttle the loop.
What you will find is that the SW timing is easy
to implement, but it will have the same jitter as the OS.
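The software-timed pattern can be sketched in text. This is a Python analogy, not LabVIEW: the `time.sleep` call plays the role of Wait (ms), and the measured lateness per iteration is the OS-scheduling jitter described above. All names here are illustrative inventions.

```python
import time

def software_timed_loop(period_s, iterations, work):
    """Run `work` once per `period_s`, throttled by a software timer.

    Analogous to a LabVIEW while loop containing a Wait (ms) node:
    easy to write, but each iteration's start time drifts by however
    late the OS wakes the thread from its sleep.
    """
    lateness = []
    next_tick = time.monotonic()
    for _ in range(iterations):
        work()
        next_tick += period_s
        # Sleep until the next tick; OS scheduling adds jitter here.
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        # How late this iteration actually resumed (the jitter).
        lateness.append(time.monotonic() - next_tick)
    return lateness  # per-iteration lateness, in seconds

jitter = software_timed_loop(0.020, 25, work=lambda: None)
print(f"worst-case lateness: {max(jitter) * 1000:.2f} ms")
```

On a desktop OS the worst-case lateness is typically a few milliseconds but is unbounded in principle, which is exactly why the HW-timed approach below is preferable when timing matters.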
This may be acceptable for you, but often you will want
to initialize a periodic HW-timed read operation, start
the operation, and then in the loop read from the back
side of the buffer using the lower-level AI Read icon
and, I think, a negative offset into the data. I'm not
much of an expert on DAQ, so I'm sure I'm not describing
it exactly right, but I know it can be done, and it helps
avoid the spin loop the driver performs if you read in
advance of the data having arrived. Another technique is
to use a DAQ occurrence to notify you when the I/O is
available, again so that you do not make the read call
until the data is ready.
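The occurrence idea (block until the driver says data is ready, instead of spinning or sleeping) can be sketched with a condition variable. This is a hedged Python illustration, not the NI driver API: `AcquisitionBuffer`, `deposit`, and `fake_driver` are all invented stand-ins, with the notify call playing the role of the DAQ occurrence.

```python
import threading
import time
from collections import deque

class AcquisitionBuffer:
    """Toy stand-in for a HW-timed DAQ buffer.

    A "driver" thread deposits samples on its own clock and fires a
    notification (the analog of a DAQ occurrence); the reader blocks
    on that notification rather than spin-polling for data.
    """
    def __init__(self):
        self._cond = threading.Condition()
        self._samples = deque()

    def deposit(self, sample):
        with self._cond:
            self._samples.append(sample)
            self._cond.notify_all()  # the "occurrence"

    def read(self, n, timeout=5.0):
        # Block until n samples are available -- no spin loop,
        # no read call issued before the data has arrived.
        with self._cond:
            self._cond.wait_for(lambda: len(self._samples) >= n,
                                timeout)
            return [self._samples.popleft() for _ in range(n)]

buf = AcquisitionBuffer()

def fake_driver():
    # Pretend to be hardware clocking in a sample every 5 ms.
    for i in range(10):
        time.sleep(0.005)
        buf.deposit(i)

threading.Thread(target=fake_driver, daemon=True).start()
block = buf.read(10)
print(block)
```

The reader thread consumes no CPU while waiting, and it wakes as soon as the tenth sample lands, which is the same property the occurrence-based read gives you in LabVIEW.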
Greg McKaskle