I have never tried to do a timed data acquisition in a simple loop
without using a hardware-timed acquisition; that's not the right way
to do it. But I made a few tests for this timing issue, because others
sometimes have similar questions.
The tests were run on a 450 MHz PIII machine, with no CPU-consuming
tasks running during the tests.
I acquired one sample of one channel with AI Read One Scan.vi in each
iteration, with no delays inside the loop. In addition to the data
acquisition, the loop has to calculate the loop execution time and keep
track of the maximum loop execution time.
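
Since I can't post a diagram here, this is roughly what the loop does,
sketched in Python (the names are mine; read_one_scan() just stands in
for the call to AI Read One Scan.vi):

import time

def read_one_scan():
    # placeholder for the DAQ call (AI Read One Scan.vi in my tests);
    # in the real program this reads one sample of one channel
    return 0.0

max_loop_time = 0.0
deadline = time.perf_counter() + 60.0   # run for a one-minute period

last = time.perf_counter()
while time.perf_counter() < deadline:
    sample = read_one_scan()            # no delay inside the loop
    now = time.perf_counter()
    loop_time = now - last              # execution time of this iteration
    last = now
    if loop_time > max_loop_time:       # track the worst-case iteration
        max_loop_time = loop_time

print("max loop time: %.1f ms" % (max_loop_time * 1000.0))
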
The two-VI solution passes the data through the LabVIEW queue VIs.
The data was displayed on a standard-sized waveform chart (I didn't
change the size of the chart after dropping it from the palette).
The data acquisition VI runs in the data acquisition thread, the other
VI in the standard thread. I found that only the average execution time
changed with higher or lower priorities; the maximum execution time was
always (nearly) the same. So I decided to use normal priority for both
VIs.
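
The two-VI structure is the usual producer/consumer pattern. Sketched
again in Python (queue.Queue standing in for the LabVIEW queue VIs,
one thread per VI):

import queue
import threading

def read_one_scan():
    return 0.0  # placeholder for the DAQ call, as in the sketch above

q = queue.Queue()

def acquisition_loop():
    # producer: only acquire and enqueue, nothing that can block on
    # display updates (this is the "DAQ VI")
    for _ in range(1000):
        q.put(read_one_scan())
    q.put(None)  # sentinel: tell the display loop we are done

def display_loop():
    # consumer: dequeue and display; the slow chart updates happen here
    # and cannot stall the acquisition loop (this is the "display VI")
    while True:
        sample = q.get()
        if sample is None:
            break
        # update the chart with `sample` here

t = threading.Thread(target=acquisition_loop)
t.start()
display_loop()
t.join()
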
If I use the Flatten To String function to build the string for the
queue VIs, the maximum time was around 15 ms. The loop time was below
2 ms most of the time; only a few iterations rose to 15 ms. I executed
the VIs for a one-minute period.
I made some additional tests to optimize the timing:
Flatten To String -> 15 ms
Type Cast (instead of Flatten To String) -> 12 ms
one-VI solution -> 12 ms
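
My understanding is that Type Cast more or less just reinterprets the
data bytes, while Flatten To String is a more general serialization,
which would explain the small difference. As a rough analogy only
(Python again; the byte layouts here are of course not LabVIEW's):

import pickle
import struct

value = 1.2345                     # one double-precision sample

flat = pickle.dumps(value)         # general serializer with type info
cast = struct.pack(">d", value)    # just the 8 raw data bytes

print(len(flat), "bytes flattened vs", len(cast), "bytes cast")
back = struct.unpack(">d", cast)[0]  # reinterpret the bytes again
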
The loop times got much higher when I moved a window with the mouse:
the max. time rose to 50 ms and more.
I moved to a dual 500 MHz PIII machine running Windows NT. This PC
didn't actually have a DAQ board, so I used a remote DAQ device over
the network (an RDA server solution). With this solution the loop time
rose in a few cases (about every 15 seconds) to 50 ms and more, but in
most cases the execution time was 3 ms or below.
I moved the DAQ board to this machine and tried again... and got a
max. loop time of 4 ms.
By the way, the two-VI solution ran smoother, but neither faster nor
slower.
On this machine, moving a window during the acquisition did not change
the timing.
Martin
Martin Henz Systemtechnik
Dipl. Ing. (FH) Martin Henz
Walchensee Str. 3
70378 Stuttgart
Tel. ++49-711-5302605
Fax ++49-711-5302605
http://www.mhst.de
Max Weiss wrote in the news message:
383E82A6.F0E731C1@mvmpc9.ciw.uni-karlsruhe.de...
> I have two independent VIs running on my PC.
> One for DAQ and one for displaying my data.
> The DAQ-VI should be very fast, because I do not want to have any
> timeouts longer than 20 ms -> I want to sample at 50 Hz without using
> DMA or a FIFO. The other VI is just displaying the data in four charts
> (transferred by a queue).
> I set the priority of the DAQ-VI to "time critical priority (highest)"
> and the priority of the other VI to "normal priority".
>
> When the two VIs are running and I display no data in the VI with
> "normal priority", it works well. But when I display data (updating
> only every 2nd second), I get timeouts of more than 20 ms.
> I set the priority of the DAQ-VI to "normal priority" and did not get
> many more timeouts.
> How powerful is the priority setting?
> Can I make it faster with changing the setting in "Preferred Execution
> System"?
>
> My computer runs Windows 95 and I'm using LV 5.0. I know that Windows
> has those timeouts, but can't I do something to reduce them to 20 ms
> or even 10 ms?
>
> It becomes faster with only 800x600 resolution and 256 colours, but I
> want to use the full screen resolution.
> --
> Ciao
> Max
>
>
> * Max Weiss*Adlerstr.22*76133 Karlsruhe*0721/3842835*Germany *
> * max@mvmpc9.ciw.uni-karlsruhe.de*DB8MWE *
>