Loop timing doesn't work well on SUSE 10.0

Hello all!

I'm experiencing trouble with LabVIEW 8.0.1 and 8.20 on SUSE Linux 10.0.
The problem appears when I need to put a short delay (<10 ms) in a loop. If I use a 2 ms wait, for example, the loop takes 8 ms per iteration, even if it does nothing else.
In my case I want to send data over UDP very fast (about 2 ms between two datagrams), so what could I do?
I tried running my example on Windows and RedHat WS4, and it works fine there. It seems the problem depends on my OS configuration (or maybe a kernel bug?). I'll try to update my kernel and give you feedback.
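As a rough cross-check outside LabVIEW, a small C program along these lines can measure what a 2 ms wait really gives on the system (nanosleep here simply stands in for the LabVIEW wait; the iteration count is arbitrary):

/* sleep_granularity.c -- measure the real period of a 2 ms sleep loop.
   Build: gcc -O2 -o sleeptest sleep_granularity.c -lrt */
#include <stdio.h>
#include <time.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

int main(void)
{
    struct timespec req = { 0, 2 * 1000 * 1000 };  /* ask for a 2 ms sleep */
    double start;
    int i;

    start = now_ms();
    for (i = 0; i < 1000; i++)
        nanosleep(&req, NULL);                     /* stands in for "Wait (ms)" = 2 */
    printf("average loop period: %.3f ms\n", (now_ms() - start) / 1000.0);
    return 0;
}

If the reported period is well above 2 ms even with an empty loop body, the wait is being rounded up by the OS timer rather than by LabVIEW.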
Has someone ever seen this issue? Is there a known solution?
Thank you for your help!
Message 1 of 15

Hi there,

Please post your code saved for LabVIEW 8.0.

Best regards
chris

CL(A)Dly bending G-Force with LabVIEW

famous last words: "oh my god, it is full of stars!"
Message 2 of 15
I updated the kernel, but nothing changed...

This is an 8.0 version of the "loop" snippet.
Message 3 of 15
I see, so it's not the code...

Have you tried a Timed Loop instead?
Best regards
chris

CL(A)Dly bending G-Force with LabVIEW

famous last words: "oh my god, it is full of stars!"
Message 4 of 15
I didn't think of that!
However, the Timed Loop doesn't exist on Linux...
I tried creating a VI in Windows and then opening it in Linux, but it didn't work because some .xnode files are missing!
Message 5 of 15

Hm, maybe the jitter of the ms ticker is very high on the Linux system...

Try calculating the total elapsed time of the while loop with both the ms ticker and the system clock (the "Get Date/Time In Seconds" function) and compare the two values. Visualize the time between two cycles as measured with the ms ticker and with the system clock; maybe you will then see some differences. If so, it's possible that it's the system's fault. I've seen systems with a ms-ticker jitter of ~10 ms and others with << 1 ms.
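Roughly the same comparison can be done outside LabVIEW in C, with a monotonic clock standing in for the ms ticker and the wall clock standing in for "Get Date/Time In Seconds" (the 2 ms wait and iteration count below are arbitrary):

/* clock_compare.c -- log each loop period with two independent clocks:
   a monotonic clock (like the ms ticker) and the wall clock
   (like "Get Date/Time In Seconds").
   Build: gcc -O2 -o clkcmp clock_compare.c -lrt */
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

static double mono_ms(void)                 /* analogue of the ms ticker */
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
}

static double wall_ms(void)                 /* analogue of the system clock */
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

int main(void)
{
    struct timespec req = { 0, 2 * 1000 * 1000 };  /* 2 ms wait per iteration */
    double m0, w0, m1, w1;
    int i;

    m0 = mono_ms();
    w0 = wall_ms();
    for (i = 0; i < 100; i++) {
        nanosleep(&req, NULL);
        m1 = mono_ms();
        w1 = wall_ms();
        printf("iter %3d   mono %7.3f ms   wall %7.3f ms\n", i, m1 - m0, w1 - w0);
        m0 = m1;
        w0 = w1;
    }
    return 0;
}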

Best regards
chris

CL(A)Dly bending G-Force with LabVIEW

famous last words: "oh my god, it is full of stars!"
Message 6 of 15
Thanks for your answers!
I'll try this tomorrow, because it is late now (I'm in France).
I'll give you feedback as soon as it's done.
Message 7 of 15
I measured the time with both methods, and it gives some interesting results. You can look at the code and the screenshots to compare for yourself.
On both Linux boxes (RedHat and SUSE) the timestamp is nearly equal to the ms timer, whereas on Windows the timestamp moves in 15-16 ms steps.
It also seems impossible with this method, on any system, to loop faster than 2 ms!
Note that 2 ms is a suitable value for my needs. It would be good enough if I could make it work on SUSE.
Maybe the solution for me could also be not to use a wait, but a piece of code that executes in 1 or 2 ms? This code should not take too much CPU time, like doing a hardware configuration or something similar... What do you think about it?

NB: on the example VI
** Waveform chart is the chart of the time between loop iterations, using the ms timer.
** Loop Time array holds the first values of the waveform chart.
** Waveform chart 2 is the same as Waveform chart, but using the timestamp.
** Array is the same as Loop Time array, but using the timestamp.
** Array 2 is the same as Array, but with 6 significant digits instead of 2.
Message 8 of 15
See the pictures for the SUSE 10.0 system. They clearly show the minimum value of the ms ticker of about 8 ms and its quite large jitter. That explains the observed behaviour of your while loop. See the attachment for comparison (Win XP, P4, 2.2 GHz, 1 GB). The behaviour depends on the system, not on the OS or the language.

"Maybe the solution for me could also be not to use a wait, but a piece of code that executes in 1 or 2 ms? This code should not take too much CPU time, like doing a hardware configuration or something similar... What do you think about it?"

-> That can't be done! You won't find code whose execution takes exactly 1 or 2 ms!

BUT the question is: do you really need 1 ms accuracy? You are sending data over UDP, a network protocol that has its own jitter of a few ms, so you would lose the 1 ms accuracy on the client side anyway. So my suggestion is: use the "Wait (ms)" function with a value of 0 (zero); this avoids 100% CPU usage. Then execute your code and send the data over UDP as fast as you can. Note that you can execute code FASTER than every 8 ms, but then you can't use the ms ticker!
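To sketch that idea outside LabVIEW: a tight send loop that only yields the CPU each iteration instead of waiting a fixed number of ms (the destination address, port and payload below are placeholders, not from the original posts):

/* udp_fast_send.c -- send UDP datagrams in a tight loop, yielding the CPU
   each iteration instead of waiting a fixed number of ms.
   Build: gcc -O2 -o udpsend udp_fast_send.c */
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sched.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dest;
    char payload[64] = "sample datagram";
    int i;

    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(9000);                        /* placeholder port */
    inet_pton(AF_INET, "192.168.0.10", &dest.sin_addr);   /* placeholder host */

    for (i = 0; i < 10000; i++) {
        sendto(sock, payload, sizeof payload, 0,
               (struct sockaddr *)&dest, sizeof dest);    /* one datagram out */
        sched_yield();          /* give the CPU back, roughly "Wait (ms)" = 0 */
    }

    close(sock);
    return 0;
}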
 
Another solution (requires DAQ hardware): use a counter output at a rate of 1000 Hz as the trigger for an acquisition task. Read 1 sample at a time, and you have an external clock with an accuracy << 1 ms.
Best regards
chris

CL(A)Dly bending G-Force with LabVIEW

famous last words: "oh my god, it is full of stars!"
Message 9 of 15
Quote: "You won't find code whose execution takes exactly 1 or 2 ms!"
My explanation was bad: as you noted, I don't really need exactly 1 or 2 ms, but I do need a period between 1 and 2 ms!

I should clarify: all my measurements are done on the same hardware! I triple-boot WinXP, SUSE and RH. My computer is a P4, 3.2 GHz, 1 GB; the only difference between the OSs is that SUSE is not on the same disk as the others.
So I am sure it is both OS dependent and hardware dependent, as your result shows.
Do you think that reducing the jitter (if possible) would solve the problem?

The DAQ solution is a possibility, but a bit cumbersome, because I'm on a 4-person project. That would mean 4 DAQ boards (simulation is not supported on Linux).

Have you ever tried opening and closing a UDP connection for each datagram? If that gives a good delay (1-2 ms), it could be a solution, but I'm not sure it would be stable.
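One way to check that idea outside LabVIEW is to time the open/send/close cycle directly (address, port and iteration count below are placeholders):

/* per_datagram_socket.c -- time an open/send/close cycle per UDP datagram.
   Build: gcc -O2 -o perdgram per_datagram_socket.c -lrt */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in dest;
    struct timespec t0, t1;
    char payload[64] = "sample datagram";
    int i, n = 1000;

    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(9000);                        /* placeholder port */
    inet_pton(AF_INET, "192.168.0.10", &dest.sin_addr);   /* placeholder host */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < n; i++) {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);        /* open              */
        sendto(sock, payload, sizeof payload, 0,
               (struct sockaddr *)&dest, sizeof dest);    /* send one datagram */
        close(sock);                                      /* close             */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("average open+send+close: %.3f ms\n",
           ((t1.tv_sec - t0.tv_sec) * 1000.0 +
            (t1.tv_nsec - t0.tv_nsec) / 1.0e6) / n);
    return 0;
}

The socket calls themselves typically take far less than a millisecond, though, so on their own they probably won't give a stable 1-2 ms delay.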
Message 10 of 15