We are deploying a low-end vision application on a production line. The system needs to acquire, process, and refresh a digital output at a 5 Hz rate.
Running the vision application through the VBAI benchmark tool showed a capability of 35 FPS (well beyond our need).
We are considering deploying the system on a Windows machine, and my questions are:
1- Will we still run the risk of missing the real-time constraints (given the huge capability margin)?
2- I have seen systems deployed on Windows Professional targets (not CE or IoT). Has anyone had experience with such systems, and how do you strip them down to the lowest configuration possible (reducing resource overhead and vulnerability to external factors like updates, etc.)?
Windows is not a real-time OS. That means it does not guarantee any timing requirements. Windows may keep a task running at a specific timing for a while; however, at any moment it can go off and do other stuff for several seconds.
Experience shows that a basic Windows installation commonly has, on average, delays in the range of several tens to hundreds of milliseconds. In rare cases, these exceed 1 s or even 10 s.
By disabling specific processes and services, these delays can be reduced in duration ('time length') as well as in frequency ('risk'). However, neither will ever be zero.
A dedicated RT OS promises that these delays, if they occur at all, will not exceed a specific bound (unless the delay is caused by your own custom application!). That bound is usually less than 1 ms for most RT OSes.
So from the reading, you are not concerned about delays of less than 10 ms. But how significant would it be for you IF there were a spike with a bigger delay?
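One way to put numbers on those delays before committing to a Windows deployment is to log the worst wake-up lateness of a loop running at the target rate. A minimal Python sketch (the 200 ms period matches the 5 Hz requirement; run it on the candidate machine under realistic load — the function name and cycle counts are just for illustration):

```python
import time

def measure_jitter(period: float, cycles: int) -> float:
    """Run a fixed-rate loop and return the worst-case wake-up lateness in seconds."""
    worst = 0.0
    deadline = time.perf_counter() + period
    for _ in range(cycles):
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            # The OS may overshoot the sleep; that overshoot is the jitter we measure.
            time.sleep(remaining)
        worst = max(worst, time.perf_counter() - deadline)
        deadline += period
    return worst

# 0.2 s period = the 5 Hz requirement; more cycles give a better picture.
worst = measure_jitter(0.200, 25)
print(f"worst-case lateness: {worst * 1000:.1f} ms")
```

A short run like this only gives a lower bound on the worst case; the rare multi-second stalls described above would need hours of logging to show up.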
Good info Norbert_B.
I will try deploying a prototype with an external hardware watchdog timer. If the system fails to handshake with the watchdog within a specific period after the trigger, a failsafe reject will be issued.
Thank you for the reply 🙂
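For what it's worth, the handshake logic can be modeled in software before wiring up the hardware. This is only an in-process illustration of the pattern, not the external hardware timer itself; `WatchdogModel` and its `pet()` handshake are names invented for this sketch:

```python
import threading

class WatchdogModel:
    """In-process stand-in for an external hardware watchdog timer.

    If pet() is not called within `timeout` seconds, `on_fail` runs once.
    In the real system, on_fail corresponds to the failsafe reject output
    fired by the external hardware when the handshake is missed.
    """

    def __init__(self, timeout: float, on_fail):
        self.timeout = timeout
        self.on_fail = on_fail
        self._timer = None

    def start(self):
        self.pet()

    def pet(self):
        # Each handshake cancels the pending failsafe and re-arms the timer.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_fail)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()
```

In the 5 Hz loop, the application would call `pet()` once per cycle; if processing stalls past the timeout (e.g. one of the multi-second Windows delays described above), the failsafe fires without depending on the stalled application.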