
BreakPoint


"Windows Jitter" posts - high precision timing in windows the stupid way

So this post the other day:  http://forums.ni.com/t5/LabVIEW/Timed-while-loop-is-not-executing-fast-enough/m-p/3303920  got me thinking about Windows timing.  I found myself wondering, "If I were willing to sacrifice ALL of one of my CPU cores to timing, could I get Windows to actually do high-resolution timing?"

 

In short, the answer is: almost.  I ran the trial, thought it interesting, and figured I'd share.  I just didn't want to post it in the LV forum for fear someone would actually DO this thinking it was a good idea.

 

 

The code:

[Image: cpu wait.png]
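
Since the snippet above is an image, here is a rough C sketch of the same busy-wait idea for anyone reading along in text. It is a hypothetical reconstruction (the function name spin_wait_us and the 500 us test value are mine, not from the picture): spin on QueryPerformanceCounter until the target tick count passes, sacrificing the whole core to the wait.

```c
/* Hypothetical C sketch of the busy-wait idea in the image:
   spin on the high-resolution performance counter until the
   target time passes, burning an entire core in the process. */
#include <windows.h>
#include <stdio.h>

/* Busy-wait for the requested number of microseconds. */
static void spin_wait_us(LONGLONG microseconds)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);   /* counter ticks per second */
    QueryPerformanceCounter(&start);
    LONGLONG target = start.QuadPart + (microseconds * freq.QuadPart) / 1000000;

    do {
        QueryPerformanceCounter(&now);  /* burn the core re-reading the clock */
    } while (now.QuadPart < target);
}

int main(void)
{
    /* Time many iterations of a 500 us spin, histogram-test style. */
    for (int i = 0; i < 1000; i++) {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&t0);
        spin_wait_us(500);
        QueryPerformanceCounter(&t1);
        printf("%lld us\n", ((t1.QuadPart - t0.QuadPart) * 1000000) / freq.QuadPart);
    }
    return 0;
}
```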

 

The results:

[Image: wait histogram.PNG]

 

Obviously, even if you wanted to stress your CPU like that, there is nothing saying it will ever be truly real-time.  Windows always retains the ability to pull that processor away for anything it wants.

Message 1 of 7

Thanks for sharing.  I know it probably doesn't matter with floating-point math, but part of my brain wants that greater-than to be a greater-than-or-equal, though I doubt the results would change.  I also disabled debugging and automatic error handling, because that's recommended when doing a performance test, but in this case I don't think it matters.

Message 2 of 7

There are many configuration options that help optimize Windows to put computation power into your application. These can lead to more stable behavior and reduced jitter peaks (in count and maybe magnitude).

However, experience shows that meddling with those items can also induce VERY negative behavior in the application or even the whole system (hangs, crashes, ...).

 

The following items come to mind (a rough sketch of a few of them follows the list):

- Core assignment (Windows/LV and other processes)

- OS process priorities (and LV-internal ones on top)

- Disabling OS services (HD indexing, firewall, anti-virus, ...)

- Performance-optimized implementation (potentially CPU-type specific)

- Using the Windows SDK to ensure specific execution behavior
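
To make the core-assignment, priority, and SDK items concrete, here is a minimal hypothetical sketch. The helper name configure_for_timing, the choice of core, and the priority values are illustrative only; as noted above, overdoing these knobs can hang the whole system.

```c
/* Hypothetical sketch of the Windows SDK knobs listed above:
   core assignment, process/thread priority, timer resolution. */
#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

void configure_for_timing(void)
{
    /* Core assignment: pin the calling thread to one logical CPU
       (mask bit 1 = second processor) so the scheduler stops moving it. */
    SetThreadAffinityMask(GetCurrentThread(), (DWORD_PTR)1 << 1);

    /* OS process priority, with a thread priority on top of it.
       REALTIME_PRIORITY_CLASS also exists but can starve the system. */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    /* Request 1 ms system timer resolution; pair with timeEndPeriod(1)
       when the application is done with it. */
    timeBeginPeriod(1);
}
```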

 

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 3 of 7

So, on the subject of precision timing within Windows, I have a thought experiment.  Within Windows (a non-deterministic OS) I can run any number of virtual machines using software like VMware, VirtualBox, Virtual PC, Hyper-V, DOSBox, etc.  What this does is put a computer inside my computer.

 

What if the OS I load is a deterministic OS?  Is it possible to dedicate a core, or some amount of hardware on my PC, to an OS that runs in a way that makes a more reliable timing mechanism?  At some point you'll need to send data back to the host over a network controller, so maybe that's where your timing gets messed up again.  For example, if I had a 500 us wait on my RT OS and then returned to my host telling it the wait was over, the jitter in getting that "Done" message back into the non-deterministic OS might bring us right back where we started.

 

This partially stems from this discussion where loading the RT LabVIEW OS can be done in a virtual machine.

Message 4 of 7

Norbert:  I've spent some time playing around with those settings.  Recently I was working on a VeriStand project running on a Windows machine... I was very surprised that setting the VS process priority to "High" dropped the normal jitter to almost nothing.  That said, it was not nearly as accurate as hogging up the entire processor 🙂

 

Hooovahh:  Try it and let me know!  I suspect that since it is still a program running in Windows, it will still be subject to Windows timing, but it would be interesting.

 

I also wonder if there would be some way to exploit some of the hardware clocks to do timing.  Microphone input or something...
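
On the hardware-clock front, one thing Windows already exposes is the winmm multimedia timer, which is driven by a hardware timer. A hypothetical sketch (untested here, and only good to roughly 1 ms, so nothing like the core burner) of letting it fire a periodic callback instead of spinning:

```c
/* Hypothetical sketch: let the multimedia timer (hardware-backed)
   fire a periodic ~1 ms callback instead of burning a core. */
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

static volatile LONG g_ticks = 0;

/* Called by the OS roughly every 1 ms on a timer thread. */
static void CALLBACK tick(UINT id, UINT msg, DWORD_PTR user,
                          DWORD_PTR r1, DWORD_PTR r2)
{
    InterlockedIncrement(&g_ticks);
}

int main(void)
{
    timeBeginPeriod(1);   /* request 1 ms system timer resolution */
    MMRESULT id = timeSetEvent(1, 0, tick, 0, TIME_PERIODIC);

    Sleep(1000);          /* let it run for about one second */
    printf("ticks in ~1 s: %ld\n", g_ticks);

    timeKillEvent(id);
    timeEndPeriod(1);
    return 0;
}
```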

Message 5 of 7

In this thread I posted some things that help with performance in a Windows environment.

 

Additionally...

 

You can get some amazing determinism using a timed loop if you have a hardware timing source available. Back in LV 7.1 or so I managed to record about 5000 channels of data at 100 Hz with NO "finished lates" being returned from any of the timed loops (I was using about 30 timed loops). In that case I was reading from ScramNet (reflective memory shared via fiber across a bunch of nodes), which is very fast (just a read from mapped memory). As a quick test I did try reading at 1000 Hz using the timed loops, and it worked for my short test. My application only required 100 Hz, so my testing was limited.

 

I do not know if any of that helps but maybe someone will learn something from that brain dump.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 6 of 7

How did I miss this thread?

 

Do a search for my community nugget on sub-mSec timing.  It's under my tag-cloud tag MyNuggets.

 

I demonstrated a core-burner example of exactly how to get average loop speeds at near the precision-timer resolution.


"Should be" isn't "Is" -Jay
Message 7 of 7