LabVIEW Idea Exchange

Pathfinder101

A microsecond wait VI

Status: Completed

Available in LabVIEW 2018 and later with High Resolution Polling Wait.vi.

NI Community,

I have developed some applications where it was desirable to have a Wait, but 1 millisecond is just too long.

 

I came up with a method using High Resolution Relative Seconds.vi to create a delay in the microsecond range (it's attached).  This works for the particular application I need it for, because I am waiting on an external buffer to be ready to accept new data (it can process new data every 60 nanoseconds).  Waiting an entire millisecond in this case is just too long.

 

The downside to this method is that it is tied to your processor speed and current workload.  It would be great if NI supported a 'Wait Until Next us Multiple.vi' (one that doesn't Sleep).

Attached is my work-around.  I'd love to see other ideas on this topic. 

Thank you, 

Dan

12 Comments
crossrulz
Knight of NI

The problem is that on Windows, you cannot count on microsecond timing.  You can't even count on millisecond timing!  If you need something that accurate, you really should be moving to an RT system.


X.
Trusted Enthusiast

Your VI's settings are:

 

Screen Shot 2016-08-31 at 16.35.52.png

I'd be surprised if they were optimal for the speediest execution, but as the previous comment pointed out, this is probably the least of your concerns. You cannot rely on software timing.

dadreamer
Active Participant

Pathfinder101

As crossrulz already said, you cannot rely on the timing capabilities of a non-RT OS. Even if the frequency source of your PC (RTC or HPET) can give you high resolution, the inaccuracy of Windows timers still limits everything to 1 ms in theory and 10-15 ms in practice, due to the OS's own overheads (say, background processes with higher priority than your app, or thread/context switching, which also takes time). You can easily check what resolution your device's timer has:

 

timeGetDevCaps.png

 

But I have no doubt that you won't get a precision better than 1 ms.

 

In addition to that, your code does not use Sleep or some other mechanism to suspend thread execution, so it loads the CPU (at least one core on multi-core machines) to 100%. That can degrade the performance of your app and other running programs, and can make the OS unresponsive for long periods of time.

AristosQueue (NI)
NI Employee (retired)

Pathfinder101: As others have said, you'll need real-time hardware running a real-time OS to get the tolerances you're looking for.

 

For $425, this kit includes real-time and FPGA target hardware and the LabVIEW Real-Time Module:

http://sine.ni.com/nips/cds/view/p/lang/en/nid/205722 

Your application may need heavier computing power or more I/O capacity, in which case you'd need a beefier hardware option.

 

AristosQueue (NI)
NI Employee (retired)

If you end up needing the full Real-Time module for your project, you can purchase it here:

https://www.ni.com/en-us/shop/product/labview-real-time-module.html

Contact an NI Field Sales Engineer to discuss hardware options. That's outside my area of knowledge entirely.

JÞB
Knight of NI

This appears possible without adding to the LabVIEW environment, as seen in the code project in this community nugget.

 

Although the nugget did not explore the concept, a waitable timer based on the precision OS timer also seems possible.  CAVEATS: The resolution of the precision timer is hardware dependent, so results may vary from target to target.  There is no requirement for a precision OS timer to exist at all (though most computers these days seem to have one), so you cannot guarantee it will work just because a specific OS is the target.  NI should not be adding features that require optional capabilities of target OSes, and it is "roll-your-ownable" at your own risk.

 

In no case is a sub-millisecond timer a suitable replacement for an RTOS where determinism is needed.


"Should be" isn't "Is" -Jay
dadreamer
Active Participant

Jeff·Þ·Bohrer

I tried and tested waitable timers from kernel32 in LabVIEW. Indeed, they may be used instead of a traditional Sleep delay (from the WinAPI or LabVIEW's native one). But they are also affected by OS jitter, not as much as a standard Sleep, but it's still there (for example, interaction with the UI thread, such as some actions on the FP, causes a waitable timer to return approximately 1 to 10 ms later than it should). And one cannot escape that on non-RT OSes.

JÞB
Knight of NI

dadreamer, I would love to see an example of that.  Since you can't attach anything here, kindly add it to the sub-mSec nugget linked above.  Thanks

 


dadreamer
Active Participant

Jeff·Þ·Bohrer

There's nothing extraordinary or complex here if you have worked with CLF nodes and the Windows API before. You may take a look at this example, where a waitable timer is used to pause the program for 2 ms. I did very basic tests with it, but FP interaction also influences the timer's accuracy in both IDE and EXE modes. You could do your own, more thorough benchmarks.

TomOrr0W
Member

Is the High Resolution Polling Wait.vi of LabVIEW 2018 sufficient to close this idea?