
1MHz Software Timing

Yes, the inherent flaw is the OS.

 

In Windows and the like we are working with an OS designed to update a moving mouse so that it feels real-time, so a 1 kHz tick is just fine.

 

It could take a while to fully explain the details, but it comes down to the process scheduler in Windows deciding which of the threads in the ready queue gets the CPU next. That decision making is triggered by a hardware timer that raises an interrupt (a trap) to get the scheduler running again and swap contexts.

 

That is, if we rely on what the OS provides.
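
If you want to see that granularity for yourself outside of LabVIEW, here is a minimal sketch in plain Win32 C++ (my own throw-away code, nothing NI ships, and the exact numbers will vary from machine to machine) that asks the OS for a 1 ms sleep and measures what the scheduler actually delivers:

// sleep_jitter.cpp - measure what Sleep(1) really gives you on Windows.
// Build (MSVC): cl /EHsc sleep_jitter.cpp winmm.lib
#include <windows.h>
#include <mmsystem.h>   // timeBeginPeriod / timeEndPeriod
#include <cstdio>

int main()
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    timeBeginPeriod(1);               // ask Windows for its finest timer tick (~1 ms)

    for (int i = 0; i < 10; ++i)
    {
        QueryPerformanceCounter(&t0);
        Sleep(1);                     // request 1 ms; the scheduler decides what we get
        QueryPerformanceCounter(&t1);

        double ms = 1000.0 * double(t1.QuadPart - t0.QuadPart) / double(freq.QuadPart);
        printf("requested 1 ms, got %.3f ms\n", ms);
    }

    timeEndPeriod(1);
    return 0;
}

Even with timeBeginPeriod(1) you will typically see 1-2 ms per iteration with the occasional spike, which is that ~1 kHz scheduler tick at work.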

 

NI offers another option that is often overlooked because... well, they can sell more RT targets.

 

The other option is a hardware-timed Timed Loop running under Windows. I have not been asked to test it recently, so I can't say whether it has been buggered up since, but...

 

In LV 7.1 I was able to run, hmmm, I think it was about 30 hardware-clocked Timed Loops at 2 kHz with none of them finishing late (provided I was NOT using a Global as the stop flag).

 

I suspect this works because the hardware is triggering interrupts to drive the loops instead of relying on the OS.

 

If you have access to a hardware device that can provide the clock, try out the Timed Loop for yourself! I'd love to hear if it is still working as well as it did in LV 7.1.

 

So, to close:

 

The shortcoming is the OS.

 

It has been years since I looked at the source code for OSes, so if anyone out there has taken a more recent look, please feel free to correct me and get me up to date.

 

My 2 cents,

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 11 of 22

Thanks for all the replies. I was about to decide that this was just not possible until I read what MaxCrunch said combined with what Ben just said.

 

@MaxCrunch:

"It's possible to set the timer hardware to provide an interrupt, but I think you'd need to write your own driver for that as there doesn't seem to be anywhere in the Windows APIs where  this functionality is exposed."

 

@Ben

"The other option is a hardware timed Timed loop running under Windows. I have not been asked to test it recently so I can say it has not bee buggered up but..

 

In LV 7.1 I was able to run, hmmm, I think it was about 30 hardware-clocked Timed Loops at 2 kHz with none of them finishing late (provided I was NOT using a Global as the stop flag).

 

I suspect this works because the hardware is triggering interrupts to drive the loops instead of relying on the OS."

 

That would indicate that it is theoretically possible to drive a hardware-timed loop by writing a driver that uses interrupts from the built-in High Precision Event Timer (HPET). Norbert, I would be very interested in your thoughts on that.

 

I just quickly scanned over all of the information and links provided and will have to look at it more closely later.

 

It is just sad. Moore's Law seems to still be in full force. I have eight cores running at over 3 GHz and I cannot get a 1 MHz loop. I understand that Windows is to blame, but there must be a clever workaround. A millisecond is an ice age! Smiley Frustrated

 

Does Windows have an Idea Exchange? Smiley Happy

=====================
LabVIEW 2012


Message 12 of 22

Steve,

 

since you asked me directly for my thoughts:

It will be a cumbersome process if you want to implement something on your own.

First of all, anything working with interrupts creates a ton of CPU load and will most probably be limited to something around 10 kHz (with remarkably high, though not 100%-of-a-single-core, CPU load). So this might not be a good solution, but it will supply something < 1 ms.

If you have a DAQ device available to provide a clock < 1 ms, the approach Ben mentioned seems worthwhile.

The third approach is to use LV RT "Tick Count" for this (requires LV RT module to "instantiate").

The fourth approach is to suggest having such a "Tick Count" in LV (without the RT module), but that is a distant-future solution, if it happens at all.....

 

Nevertheless, I admit that I doubt any "standard programming language" will adopt timer functionality < 1 ms in its default tool suite for Windows systems in the future. This is because of the bottleneck OS. Unless the expected jitter of the OS drops to less than 1 ms (on current Windows I normally assume an expected jitter of about 10 ms with sporadic peaks to > 1 s), it simply makes no sense to me to supply a more accurate timer on a broad basis.

 

That being said, you could start off doing something like this:

Create an OS service running at very high priority as a timekeeping task. In fact, this service has to have the ability to preempt other high-priority OS services.

This service has to keep track of timer ticks and synchronize to known time bases (do you want timestamps or a simple delta t?). On the other hand, the service must not create high CPU load.

And taking these requirements into account, you come down to writing a real-time kernel running in parallel to Windows. And suddenly you realize that you are going to implement something similar to the NI Hypervisor.....
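
Just to make that trade-off concrete, here is a rough sketch (plain Win32 C++, my own names and numbers, not any NI API) of such a timekeeping thread: it spins on the performance counter to produce 100 us ticks, which is why it effectively burns a core, and Windows can still preempt it whenever it likes:

// timekeeper.cpp - sketch of a high-priority 100 us "time keeping" thread.
#include <windows.h>
#include <atomic>
#include <cstdio>

std::atomic<long long> g_ticks{0};
std::atomic<bool>      g_run{true};

DWORD WINAPI TimeKeeper(LPVOID)
{
    // Highest priority within this process; a real service would also need a
    // high priority class, and it still cannot preempt kernel-mode activity.
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    const long long ticksPer100us = freq.QuadPart / 10000;   // 100 us in counter ticks

    QueryPerformanceCounter(&now);
    long long next = now.QuadPart + ticksPer100us;

    while (g_run.load())
    {
        // Busy-wait until the next 100 us boundary: sub-ms resolution, 100% of a core.
        do { QueryPerformanceCounter(&now); } while (now.QuadPart < next);
        next += ticksPer100us;
        g_ticks.fetch_add(1);         // consumers would read this as their time base
    }
    return 0;
}

int main()
{
    HANDLE h = CreateThread(nullptr, 0, TimeKeeper, nullptr, 0, nullptr);
    Sleep(1000);                      // let it run for about one second
    g_run = false;
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    printf("ticks in ~1 s: %lld (ideal would be 10000)\n", g_ticks.load());
    return 0;
}

Run it a few times and you will see the count drift whenever Windows decides something else is more important, which is exactly why, taken to its logical end, this turns into a separate real-time kernel.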

 

Sorry if this post reads a bit like a marketing newsletter, but I concur with NI marketing on this point.....

 

Norbert

Norbert
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 13 of 22

Just an option to go <1ms on a Windows machine.

 

As described in this old page:

http://zone.ni.com/reference/en-XX/help/370622D-01/lvrtconcepts/lvrt_module_plats/

 

it seems possible to rely on the Real-Time Extension scheduler, which is faster and more reliable than the one provided by Windows...

 

Marco

 

 

 

Message 14 of 22

MarcoMauri, thanks but I was looking for something that didn't require buying RT. If determinism was necessary for what I want to do then I would look into it.

=====================
LabVIEW 2012


Message 15 of 22

@Steve Chandler wrote:

MarcoMauri, thanks but I was looking for something that didn't require buying RT. If determinism was necessary for what I want to do then I would look into it.


 

 

To help make your attempt as fruitful as possible, I suggest you review this old post of mine from years ago.

 

Some of those settings may have gone away or moved, but it is still a good place to start.

 

Ben

 

PS I am glad I tagged that one. Imagine trying to find it without tagging!

 

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 16 of 22

Smiley Very Happy


Jeff Bohrer wrote:

I created two VIs (attached) that you should play with to convince yourself (I may be wrong, so really play with 'em, tear 'em apart, reconstruct ad nauseam).

 

Hah! Just noticed the eyecon for timestampRes Smiley Very Happy

=====================
LabVIEW 2012


Message 17 of 22

Jeff,

 

I have been playing with your VIs, but am not sure I understand what I am seeing.

 

When I run it with your default values the mean is typically 00:00:00.000900640487670898425000 or about 600 ns high.  The standard deviation is about 320 ns. I added Array Max & Min to the Actual Delay array.  About 80% of the time the min and max values are exactly the same: 0.000900269 and 0.000901222.  The maximum is sometimes a bit larger.  I have never seen the minimum change.  A histogram shows three values: 900.269 @ 35%, 900.745 @ 50%, and 901.222 @ 15%. Difference is 476 ns.
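
One thing I notice (just a guess on my part, not something I have verified): the three values are separated by almost exactly 1/2^21 s (about 477 ns), so the discrete steps may simply be the resolution of the counter behind the timestamp rather than real scheduling jitter. A quick way to see what the performance counter on a given machine ticks at (plain Win32, outside LabVIEW) is:

// qpc_resolution.cpp - print the performance counter frequency and tick period.
#include <windows.h>
#include <cstdio>

int main()
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    printf("QPC frequency: %lld Hz, tick period: %.1f ns\n",
           freq.QuadPart, 1e9 / double(freq.QuadPart));
    return 0;
}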

 

Lynn

 

 

Message 18 of 22

@johnsold wrote:

Jeff,

 

I have been playing with your VIs, but am not sure I understand what I am seeing.

 

When I run it with your default values the mean is typically 00:00:00.000900640487670898425000 or about 600 ns high. The standard deviation is about 320 ns. I added Array Max & Min to the Actual Delay array. About 80% of the time the min and max values are exactly the same: 0.000900269 and 0.000901222. The maximum is sometimes a bit larger. I have never seen the minimum change. A histogram shows three values: 900.269 @ 35%, 900.745 @ 50%, and 901.222 @ 15%. Difference is 476 ns.

 

Lynn

 

 


NOT the numbers I saw, but close...

 

I greatly suspect that those numbers are specific to the OS, motherboard, and any "high priority services".

 

Or, essentially, they ARE hardware dependent and CANNOT be reasonably reproduced in any "software" environment.

 

 

A shame really. Time is the SI unit that has been both studied the most and calibrated to the least error. Maybe, in a while, prevalent OSes will adapt. (But, as noted above, OSes are really a human's method of accessing the power of transistorized switching, and humans are slow!)


"Should be" isn't "Is" -Jay
Message 19 of 22

Hi,

I was browsing the net for a sub-ms timer (optionally with an interrupt) and found this post.

I can suggest at least one situation in which such a timer would be useful for implementing hardware control without going RT. I have a slow-control setup reading temperatures, pressures, etc. using a USB-based I/O chassis. Now I want to control a four-axis stepper motor system using a parallel-port interface on the same computer. I don't really want to go through programming another system or spend money on specialized hardware, and I don't really need to, except for the sub-ms clock reference (by the way, my slow-control PC spends less than 5% of its resources reading and writing about 50 channels). Here is a description of the problem and how a sub-ms timer would be useful for me.

To drive the motor(s) I send a rising TTL edge to the driver(s). Looping without a timer or with a "zero" timer is too fast and the motor(s) slip. I made each step of the loop fairly long by reading and writing the parallel port multiple times (checked with an oscilloscope; I have at least 15 us), but the repetition rate is still too fast at about 14 kHz. With the LabVIEW timer I get close to 1 kHz, but that is too slow for the task I need. I tried computing crap in the loop, but then it really affects overall PC performance (like reading those 50 channels).

By the way, claiming that jitter is the killer for a useful sub-ms timer is rather unfair: first, the ms timer is far from jitter free, and second, there are lots of applications where jitter can be tolerated. In my case, I only need to ensure that two consecutive TTL edges are separated by at least a certain amount of time (100 us, let's say). The fact that the OS may introduce large delays won't make my stepper motor slip; I only need to ensure that the motors take precisely N steps. The OS-related jitter would affect the smoothness of the mechanical displacement, but that is a different matter I can solve with the appropriate coupler.
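
For what it's worth, the kind of timing logic I have in mind looks roughly like the sketch below (plain Win32 C++ under my own assumptions; the parallel-port write is just a placeholder for whatever actually toggles the pin, and in LabVIEW the same idea could presumably be wrapped in a DLL and called through a Call Library Function Node):

// step_spacing.cpp - enforce a minimum spacing between consecutive TTL edges
// with a busy-wait on the performance counter. Late edges are harmless here;
// edges that come too close together are what makes the motor slip.
#include <windows.h>
#include <cstdio>

// Placeholder only -- real code would poke the parallel port here.
void WriteParallelPort(unsigned char /*value*/) {}

void StepMotor(int steps, double minSpacingUs)
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    const long long minTicks =
        static_cast<long long>(minSpacingUs * double(freq.QuadPart) / 1e6);

    QueryPerformanceCounter(&now);
    long long earliestNextEdge = now.QuadPart;

    for (int i = 0; i < steps; ++i)
    {
        // Spin until at least minSpacingUs has passed since the previous edge.
        do { QueryPerformanceCounter(&now); } while (now.QuadPart < earliestNextEdge);

        WriteParallelPort(1);          // rising edge -> one motor step
        WriteParallelPort(0);

        earliestNextEdge = now.QuadPart + minTicks;
    }
}

int main()
{
    StepMotor(1000, 100.0);            // 1000 steps, at least 100 us between edges
    printf("done\n");
    return 0;
}

The spin only runs while the motor is actually moving, so the CPU cost is bounded by the duration of the move, and any extra delay the OS injects just makes a step arrive late, never too early.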

Cheers!

Message 20 of 22