12-23-2013 01:29 PM
I ran into some timing issues in my code. I realized that with a Timed Loop, if I set the period to 1, 2, 3, or 4 it runs very slowly. I have attached a very simple VI, as well as a screenshot. Can anyone tell me what is going on? 🙂
12-23-2013 02:05 PM
12-23-2013 02:11 PM
The code I posted isn't about finding the time elapsed or anything like that. It is simply to illustrate the problem of my timed loop running slowly when I set the period to a small value.
12-23-2013 02:25 PM
The loop cannot go to the next iteration until all of the code inside it has finished. If I remember right, Get Time In Seconds has more overhead, so please try my suggestion.
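To see why this matters, here is a minimal Python sketch (not LabVIEW, and `timed_loop` is an illustrative helper, not any real API) of the same behavior: a software-timed loop can never iterate faster than its body executes, so a 1 ms period with a 5 ms body simply runs at ~5 ms per iteration.

```python
import time

def timed_loop(period_s, body, iterations):
    """Run `body` once per period; if the body overruns, the iteration is late."""
    next_deadline = time.perf_counter() + period_s
    late = 0
    for _ in range(iterations):
        body()
        now = time.perf_counter()
        if now > next_deadline:
            late += 1                           # body plus OS overhead overran the period
            next_deadline = now + period_s      # re-phase after a missed period
        else:
            time.sleep(next_deadline - now)     # wait out the remainder of the period
            next_deadline += period_s
    return late

# A 1 ms period with a 5 ms body: every iteration finishes late,
# so the loop runs at ~5 ms per iteration, not 1 ms.
start = time.perf_counter()
late = timed_loop(0.001, lambda: time.sleep(0.005), 10)
elapsed = time.perf_counter() - start
print(late, round(elapsed, 3))
```

In other words, the period is only a lower bound on the iteration time, never an upper bound.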
12-23-2013 02:28 PM
Well, you do maintain original phase on missed periods (that's probably an oversight). Tossing a pair of Defer Panel Updates property nodes around the loop will reduce the number of missed periods. But I doubt you'll ever see i=4999 (exactly 0 late periods), not on a non-deterministic OS.
12-23-2013 02:59 PM - edited 12-23-2013 03:00 PM
altenbach: Thanks for your reply, and sorry for not making myself clear: The code inside that timed loop takes about 175 nanoseconds to execute. That isn't my problem.
Jeff: My issue is that the timed loop is SUPPOSED to be used for "... VIs with multirate timing capabilities, [and] precise timing." I assume that I am doing something wrong in setting up or configuring the loop. After all, why offer a timed loop for said precise timing if it is impossible to be accurate?
12-23-2013 03:17 PM
You're missing the point. Timed Loops let you do some things that While Loops won't, but they cannot turn a Windows OS into a deterministic system. Now, drop one of those beasties onto a Real-Time target and you can expect less "jitter."
12-23-2013 03:43 PM
The OS also has other things to do.
If I count the number of "finished late (i-1)" occurrences, it is between about 2 and 130 here, so clearly your hardware/OS combination simply cannot handle it. This is not the fault of LabVIEW. As has been said, you need LabVIEW RT on a dedicated system.
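The run-to-run variation described above is easy to reproduce outside LabVIEW. Here is a minimal Python sketch (illustrative only) that schedules a 1 ms deadline per iteration and measures how far past each deadline the OS actually wakes the thread; on a general-purpose OS the worst overshoot varies from run to run, which is exactly the jitter being discussed.

```python
import time

PERIOD = 0.001   # 1 ms target period, like the Timed Loop in question
N = 200

worst = 0.0
deadline = time.perf_counter()
for _ in range(N):
    deadline += PERIOD
    remaining = deadline - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)            # software-timed wait until the deadline
    overshoot = time.perf_counter() - deadline
    worst = max(worst, overshoot)        # scheduling jitter, in seconds

print(f"worst overshoot over {N} iterations: {worst*1e6:.0f} µs")
```

Running this a few times shows different worst-case numbers each time; only a real-time scheduler can bound that overshoot.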
What are you actually trying to do? Maybe a hardware timed acquisition would be more reasonable.
12-23-2013 04:02 PM
Timing is definitely possible in Windows. In fact, calling the kernel32.dll library directly gives accurate timing down to the microsecond (or less). I'll just redo my project and call the C library directly. Thank you both for your help.
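For what it's worth, the distinction worth keeping in mind is between *reading* a high-resolution clock and *waiting* precisely. As a sketch (in Python rather than a direct kernel32.dll call; `time.perf_counter_ns` is implemented with `QueryPerformanceCounter` on Windows and `clock_gettime` elsewhere): reading the clock is far finer than a microsecond, but waiting accurately at that scale requires busy-waiting, because an ordinary sleep hands control back to the scheduler.

```python
import time

# Smallest nonzero tick observable from the high-resolution clock.
t0 = time.perf_counter_ns()
t1 = time.perf_counter_ns()
while t1 == t0:
    t1 = time.perf_counter_ns()
read_resolution_ns = t1 - t0

# Waiting 1 µs accurately means spinning on that clock (burning a core);
# sleeping instead would be at the mercy of the scheduler.
target = time.perf_counter_ns() + 1_000      # 1 µs from now
while time.perf_counter_ns() < target:
    pass                                     # busy-wait until the deadline

print(f"clock read resolution: {read_resolution_ns} ns")
```

So a DLL call can give microsecond *timestamps*, but microsecond *periods* without busy-waiting are a different claim.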
12-24-2013 07:23 AM
I don't get it; what is the difference from your LabVIEW code?
Your first VI ends at 5.00029 s and the second VI with the DLL call takes 5.00429 s.
So what have you verified?
In your second VI, what is the value of i from the outermost loop when it completes?
Do you really think that the innermost loop can wait for only 1 µs (one microsecond)?
Your second VI does not prove that the timing is better when you call a DLL on a Windows operating system.
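The skepticism about a 1 µs wait is easy to check empirically. As a minimal Python sketch (illustrative, not the poster's VI): request a 1 µs sleep many times and record the shortest wait the OS actually delivers — on a desktop OS it is typically far longer than 1 µs, so a loop that "waits 1 µs" per iteration cannot actually run at 1 MHz.

```python
import time

# Ask the OS for a 1 µs sleep repeatedly and keep the shortest actual wait.
samples = []
for _ in range(50):
    t0 = time.perf_counter()
    time.sleep(1e-6)                     # request a 1 µs wait
    samples.append(time.perf_counter() - t0)

shortest = min(samples)
print(f"shortest observed 1 µs sleep: {shortest*1e6:.1f} µs")
```

Whatever number this prints, it is the floor for any sleep-based loop period on that machine.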