I am running a time-sensitive application and have noticed that the timesync service seems to add significant network jitter when it executes every X seconds according to the interval defined in ni-rt.ini. I could probably set the source.sntp.interval (per http://digital.ni.com/public.nsf/allkb/F2B057C72B537EA2862572D100646D43) to the maximum value (18 hours?) but I was hoping there might be a cleaner way to do the timesync only at pre-defined times such as when my RT target reboots. Is there any way to accomplish this?
The use case for NI-TimeSync and what you are describing differ somewhat. In a PC or real-time system, system time and CPU cycles are derived from a component that oscillates very close to a fixed frequency. Because these oscillators are usually crystal-based, they are not perfect: there may be slightly more or fewer oscillations per time period than an ideal time source would produce. This causes the system clock to skew and accumulate a small timing error over time. NI-TimeSync is meant to run as a corrective measure against this type of skew, periodically checking how far the local system clock has drifted from the chosen master clock on your network. For extremely time-sensitive applications, NI-TimeSync allows your system to consistently stay within a skew range that is much tighter than NTP, SNTP, or any other Ethernet-based time synchronization utility can achieve.
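To put rough numbers on that skew: crystal tolerance is usually quoted in parts per million (ppm), and the worst-case accumulated error is simply the tolerance times the elapsed time. A quick sketch (the 50 ppm figure below is an assumed typical crystal tolerance, not a measured value for any particular NI hardware):

```python
def clock_drift(ppm: float, elapsed_s: float) -> float:
    """Worst-case accumulated clock error, in seconds, for an
    oscillator with the given frequency tolerance in ppm."""
    return ppm * 1e-6 * elapsed_s

# A hypothetical +/-50 ppm crystal, left uncorrected for one day:
print(clock_drift(50, 24 * 3600))  # about 4.32 seconds of drift
```

This is why periodic correction matters for long-running systems, even when each individual correction is tiny.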
What kind of system are you working with? PXI, cRIO? If you want to give some details about your application, we might be able to look into minimizing the jitter a bit. Can you quantify the network jitter you are experiencing, and what are its implications for your application? Let us know!
I'm using a cRIO running as a server with an RT loop at 200 Hz, potentially moving up to 500 Hz in the future. The requirement is that jitter in the UDP stream received by the client should stay within 10% of the loop period (+/- 500 us currently), but when [I assume] the SNTP service activates, the jitter significantly exceeds this requirement. Nanosecond precision is far beyond my timing requirements; I could live with several tens or possibly even a few hundred microseconds of drift, since the only real interest is measuring command/response latency between client and server.
It sounds like NI-TimeSync may be excessive for my application's requirements. If I could sync the system clock with an SNTP server only when my runtime executable starts up, I believe that would be acceptable.
With cRIO, we have limited options for interacting with the system outside of LabVIEW (unless you are working with a cRIO that runs NI Linux Real-Time). Synchronization to an SNTP server may be your best option here, and I have provided a couple of links below that cover the options available for enabling time synchronization. Anything outside those options is likely unavailable on the system unless it runs NI Linux Real-Time instead of PharLap ETS or VxWorks.
Network jitter, however, is not something a real-time system can control. It is determined by the network infrastructure, so any optimizations you make on the cRIO will not affect network transfer speeds, when messages are actually received (you can control when they are sent!), or delays due to heavy traffic from another source. For a deterministic networking solution, the EtherCAT standard ensures data arrive on schedule, but that requires an EtherCAT network.
I am actually using EtherCAT in the rest of my system, but I am required to use UDP to communicate between client and server, which I am aware is inherently non-deterministic. I have found that UDP is, in general, adequate for our demands, but it has issues when the SNTP service activates.
In your first link, "How Do I Configure My CompactRIO Real-Time Controllers to Synchronize to SNTP Servers?", one of the options is source.sntp.interval. Right now it looks like my best bet may be to set that value to its maximum (65536?) seconds to at least minimize the jitter. I was hoping I could disable the interval counter so that the SNTP service only activated at boot, but I'm not sure if that would work.
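For reference, this is roughly what I expect the relevant section of my ni-rt.ini to look like after the change. The server address is a placeholder, and I'm taking the key names from that KB article, so please correct me if any of them are off:

```ini
[TIME SYNC]
source_priority = sntp;rtc
source.rtc.enable = true
source.sntp.enable = true
source.sntp.address = 192.168.1.10
source.sntp.port = 123
source.sntp.interval = 65536
source.sntp.verbose = false
source.sntp.log = false
```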
It appears that your best option is to set source.sntp.interval to its maximum. There does not appear to be any other method for making the SNTP synchronization run once or at a greater interval; unfortunately, we are limited by the time synchronization options the operating system exposes. Another option you may consider is setting the time on the device programmatically on initial run. You would not get very high accuracy this way, but if +/- 1-2 seconds is within your acceptable error in time, it could work. You would need to send the current time from your master time device and programmatically update the time on your target. This is not a common practice, but those seem to be the only options that could reduce your network jitter.
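To illustrate the one-shot idea, here is a minimal SNTP client query sketched in Python. This is a generic SNTP (RFC 4330) exchange, not an NI API: it sends a 48-byte client request over UDP port 123 and reads the server's transmit timestamp back as a Unix time. On the cRIO you would do the equivalent with LabVIEW's UDP functions at startup and then apply the result with whatever set-time facility your target exposes; the server address below is a placeholder.

```python
import socket
import struct

# Seconds between the NTP epoch (1900) and the Unix epoch (1970)
NTP_EPOCH_OFFSET = 2208988800

def build_sntp_request() -> bytes:
    # First byte: LI = 0, Version = 4, Mode = 3 (client) -> 0x23;
    # the remaining 47 bytes of the request may be zero.
    return b'\x23' + 47 * b'\x00'

def parse_transmit_time(packet: bytes) -> float:
    # The Transmit Timestamp occupies bytes 40-47 of the reply:
    # a 32-bit seconds field and a 32-bit fractional-seconds field.
    secs, frac = struct.unpack('!II', packet[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def sntp_query(server: str, timeout: float = 2.0) -> float:
    """Return the server's transmit time as a Unix timestamp."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_sntp_request(), (server, 123))
        packet, _ = s.recvfrom(512)
    return parse_transmit_time(packet)

# Example (placeholder server address):
#   now = sntp_query('192.168.1.10')
```

Note that a single query like this ignores network round-trip delay, which is consistent with the +/- 1-2 second accuracy mentioned above; it is only worth doing once at startup, which is exactly your use case.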