LabVIEW


1MHz Software Timing

Without LabVIEW Real-Time it is not possible to create a 1 MHz loop. You can either have a really fast (unthrottled) loop whose rate depends on your system, or one with a period that is some multiple of 1 ms. I understand that Windows is not deterministic and the loop will be extremely jittery.

 

I am contemplating proposing an idea on the Idea Exchange for something like a Wait µs function. It seems like it should be possible, since Get Date/Time In Seconds returns a value with sub-ms precision. I would like to have a 1 MHz loop, jittery as it would be.

 

Is there something inherently wrong with that idea?
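For what it's worth, the idea is easy to sketch outside LabVIEW. Here is a minimal Python illustration (the `wait_us` name is hypothetical) of what a software "Wait µs" amounts to on a desktop OS: a busy-wait on a high-resolution clock.

```python
import time

def wait_us(period_us):
    """Busy-wait for roughly period_us microseconds.

    This is what a hypothetical "Wait uS" would have to do on a desktop
    OS: spin on a high-resolution clock. It burns a CPU core while it
    waits, and the scheduler can still preempt it for far longer than
    the requested period.
    """
    deadline = time.perf_counter_ns() + period_us * 1_000
    while time.perf_counter_ns() < deadline:
        pass  # spin: sleep() is far too coarse at this scale

start = time.perf_counter_ns()
wait_us(100)
elapsed_us = (time.perf_counter_ns() - start) / 1_000
print(f"requested 100 us, waited about {elapsed_us:.1f} us")
```

The wait is guaranteed to last at least the requested period, but the overshoot is unbounded whenever the OS preempts the loop.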

=====================
LabVIEW 2012


Message 1 of 22

I'd like to see something similar, but I think it'd be very confusing to a new user. I can imagine the questions now: "Why is this here if it doesn't work?!"

 

On top of that, performance would vary greatly from one machine to the next, so something that worked on your nice development machine might not work on your deployment machine. Plus, other programs running in the background could cause your frequency to vary pretty wildly.

 

Could you roll your own with the high-precision timer VI that's in vi.lib?

Message 2 of 22

On the subject of an unthrottled loop, are you aware that you can wire 0 ms to the Wait function? While the loop will still not be throttled, it will yield time to other loops, unlike a loop with no wait in it at all.

Message 3 of 22

What I would like is something on the order of 1 µs. The jitter is undesirable, but I suppose I can live with it. As for performance varying between systems, that is exactly why I would like the wait: with an unthrottled loop the variation is extreme, while with a 1 µs wait it would be less hardware dependent. I know about the 0 ms wait yielding. I will have to look at the high-precision timer; that makes it seem like the idea could be implemented.

 

As for new users, well, I hate to see LabVIEW designed only for new users. It is definitely good to make it as unconfusing as possible to help with the learning curve, but I would not like to see features that would be useful to experienced users sacrificed simply because new users might get confused.

=====================
LabVIEW 2012


Message 4 of 22

Sounds like a good idea to me, for RT systems anyway. I do not know about Windows 7 (I have not found a spec), but on Windows XP, MS did not guarantee timing finer than 10 ms, and I had LabVIEW programs that would take unexpected pauses. So I'm not sure 1 µs timing on Windoze would work very well.

 

     Rob

Message 5 of 22
You are correct, 1 µs timing definitely will not work well, but neither does 1 ms. It is not a good idea for RT because RT already has this. I just want a 1 µs timer that works as badly as the 1 ms timer.
=====================
LabVIEW 2012


Message 6 of 22

Steve,

 

From my observations, it appears that one of the things that gets NI to take notice of suggestions is a good description of a use case, preferably one that is hard to accomplish without the suggested idea.

 

I agree that improving the resolution of the timer is probably a good idea, even if the OS jitter remains a problem. But I cannot think of a good use case; simply releasing a loop so that a parallel node can execute sooner is probably not sufficient.

 

Lynn

Message 7 of 22

@Steve Chandler wrote:

Without LabVIEW Real-Time it is not possible to create a 1 MHz loop. You can either have a really fast (unthrottled) loop whose rate depends on your system, or one with a period that is some multiple of 1 ms. I understand that Windows is not deterministic and the loop will be extremely jittery.

 

I am contemplating proposing an idea on the Idea Exchange for something like a Wait µs function. It seems like it should be possible, since Get Date/Time In Seconds returns a value with sub-ms precision. I would like to have a 1 MHz loop, jittery as it would be.

 

Is there something inherently wrong with that idea?



Unfortunately, yes. Well, not the idea per se, but you made one bad assumption about the Windows OS. To wit: the displayable precision of the timestamp has nothing to do with its granularity (how often the OS actually updates the value).

 

I created two VIs (attached) that you should play with to convince yourself (I may be wrong, so really play with them, tear them apart, reconstruct ad nauseam).

 

timestamp res.vi is the "ideal" low-resolution wait: just a greedy loop comparing timestamps. Caution: it is set up benchmark-ready (no debugging, no error handling, reentrant, and inlined). Stuff a long delay into it and poof, it will tie up a core for a while.

 

BM TimeRes.vi is a benchmark for timestamp res.vi, with statistics to demonstrate the granularity of timestamp updates by the OS.
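In the same spirit as those benchmark VIs, here is a quick Python sketch (assumed names, not the attached VIs) that measures how often the OS actually updates a wall-clock timestamp, regardless of how many digits it displays:

```python
import time

def clock_steps(n=1000):
    """Spin on the wall-clock timestamp and record how much it jumps
    each time it changes. The step sizes reveal the OS's update
    granularity, not the number of digits the value displays."""
    steps = []
    last = time.time()  # wall-clock seconds, like Get Date/Time In Seconds
    while len(steps) < n:
        now = time.time()
        if now != last:
            steps.append(now - last)
            last = now
    return steps

steps = clock_steps()
print(f"smallest step: {min(steps)*1e6:.2f} us, "
      f"mean step: {sum(steps)/len(steps)*1e6:.2f} us")
```

On an older Windows system the smallest step can be in the millisecond range even though the timestamp prints many more digits; on other platforms it may be far finer.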

 

BUT, a great discussion


"Should be" isn't "Is" -Jay
Message 8 of 22

I find this discussion interesting, since it comes up here and there all the time. And it is not limited to LV by any means.

 

In C#, a Microsoft (MS) product, you have a Timer class in Windows Forms that you can use to time your application. Reading through the documentation (link), you will find two very important constraints:

a) It is for use in single threads only

b) It is limited to an accuracy of 55 ms

 

Quote:

"The Windows Forms Timer component is single-threaded, and is limited to an accuracy of 55 milliseconds. If you require a multithreaded timer with greater accuracy, use the Timer class in the System.Timers namespace."

 

 

Following the suggested "workaround", you will find this:

Quote (link):

"The Timer component is a server-based timer, which allows you to specify a recurring interval at which the Elapsed event is raised in your application.[..]The server-based Timer is designed for use with worker threads in a multithreaded environment."

 

But you will have to define an Interval for your Elapsed event:

Quote (link):

"The time, in milliseconds, between Elapsed events"

 

 

So you see that MS does not provide any tool to achieve timing with better than 1 ms accuracy. Nevertheless, you CAN achieve higher accuracy, since the timing source is indeed running faster than 1 kHz. But keep in mind that timing is always a difficult issue.
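The same split shows up in other languages too. In Python, for instance, a stock sleep-based wait cannot reliably hit even a 1 ms period, despite the underlying clock source being much finer. A small sketch (timings will vary by machine and OS load):

```python
import time

# Ask the OS-provided sleep for exactly 1 ms, twenty times, and see
# what we actually get. The overshoot reflects scheduler granularity,
# not the resolution of the clock used to measure it.
durations = []
for _ in range(20):
    t0 = time.perf_counter()
    time.sleep(0.001)  # request exactly 1 ms
    durations.append(time.perf_counter() - t0)

print(f"asked for 1.000 ms, got between {min(durations)*1e3:.3f} "
      f"and {max(durations)*1e3:.3f} ms")
```

On a loaded desktop the worst case can be many milliseconds, which is exactly the jitter problem being discussed here.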

Most CPU architectures do not supply a clock that can be divided down to exactly 1 ms. Therefore, simply counting those clock ticks will not produce exact timing at 1 ms resolution.

 

There are projects that provide higher timing resolution on Windows (e.g. read this), but you have to admit that it does not seem to be simple.

 

Now coming back to LV:

NI does supply access to a higher-resolution clock if you have LV RT installed. Even though the API works on Windows as well, you would most probably run into the issue Wart already raised in this thread: it would only introduce confusion for inexperienced/unaware users. It is already often enough an issue that users do not believe that even 1 ms timing is not deterministic on a Windows OS...

For anyone interested in a discussion about timing accuracy, you might find this community entry very interesting.

 

hope this helps,

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 9 of 22

I looked into this a while back, for a project that required sub-ms timing.

Theoretically it's possible, but it looks like it would require a large amount of work. Any recent motherboard has a hardware timer built in, the High Precision Event Timer (HPET), running at greater than 10 MHz, which is mostly used for accurate system timing.
Access to it is possible on recent versions of Windows via the QueryPerformanceCounter function, but this simply returns the current count. To build a high-resolution timer on top of it, you'd need to poll it so frequently that the OS would probably grind to a halt.
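To put a rough number on that polling cost, here is a small Python sketch (hypothetical helper; `time.perf_counter_ns()` is backed by QueryPerformanceCounter on Windows) counting how many back-to-back counter reads fit into one millisecond. A loop doing this continuously monopolizes a core:

```python
import time

def reads_per_ms():
    """Count how many back-to-back high-resolution clock reads fit
    into one millisecond. Every one of these reads in a greedy polling
    loop is CPU time taken away from everything else on the system."""
    deadline = time.perf_counter_ns() + 1_000_000  # 1 ms in ns
    n = 0
    while time.perf_counter_ns() < deadline:
        n += 1
    return n

n = reads_per_ms()
print(f"{n} counter reads in 1 ms")
```

Typical figures are in the thousands or more per millisecond, which is why a counter-polling "timer" is effectively a busy-wait.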

It's possible to set the timer hardware to generate an interrupt, but I think you'd need to write your own driver for that, as there doesn't seem to be anywhere in the Windows APIs where this functionality is exposed.

On Linux, the situation might be a little easier since there is a more comprehensive hrtimer API which has, among other things, a callback for when the timer expires.

You'd still have to write an interface to LabVIEW, of course, and I'm not even sure it would be useful in the end, since any time the OS is busy doing something else you'll miss an indeterminate number of timing intervals until your LV process is granted CPU time again. You might end up with a counter with microsecond resolution but jitter several orders of magnitude larger. It would be an interesting experiment, though.
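That "jitter orders of magnitude larger than the resolution" effect is easy to demonstrate. A small Python sketch (hypothetical helper; the numbers will vary wildly with OS load) that times a nominally 1 ms software loop:

```python
import statistics
import time

def loop_jitter(period=0.001, iterations=200):
    """Run a software-timed periodic loop and record how far each
    iteration lands from its scheduled tick. The clock can resolve
    microseconds, but the scheduling error is usually much larger."""
    errors = []
    next_tick = time.perf_counter() + period
    for _ in range(iterations):
        time.sleep(max(0.0, next_tick - time.perf_counter()))
        errors.append(abs(time.perf_counter() - next_tick))
        next_tick += period
    return errors

errors = loop_jitter()
print(f"mean error {statistics.mean(errors)*1e6:.0f} us, "
      f"worst {max(errors)*1e6:.0f} us")
```

Even with a microsecond-capable clock, the worst-case error on a desktop OS is typically hundreds of microseconds or more.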

A side benefit is that it solves the rollover problem, unless you plan on running your computer for more than 2 billion hours between reboots 🙂

Message 10 of 22