NTP implementation status

Posted on May 23, 2016

I am continuing my work on making an NTP client and server implementation in OCaml for the Mirage unikernel. Unfortunately, due to health issues, I had to take a break from this work, but I am now back at it. I have studied the literature on the timekeeping quality of the NTP reference implementation and on alternative methods of processing NTP data. The NTP reference implementation is filled with a great deal of extremely old code, support for archaic platforms, and a lot of bad cryptography [1].

More importantly, the math involved in processing packet data, generating statistics, and converting those statistics into parameters for disciplining the clock is suspect at best – it is full of ad-hoc algorithms with an utter menagerie of hand-tuned constants. The reference implementation calculates a set of estimators over incoming data and then does statistics on the values of those estimators – the whole design feels rather unsound and is not accompanied by any sort of formal analysis. The only justifications are comments in the source code, and they amount to informal arguments and references to (usually unpublished) results of hand-tuning. For example, part of the ad-hoc tuning involves throwing away data so as not to upset the statistics that are run on the estimators. The reference implementation keeps only the last 8 samples, even though there is no good reason to limit ourselves to 8 – perhaps better results could be achieved if we kept more samples around and ran whatever calculations we want on them (rather than keeping only the perhaps-dubious values of the estimators).
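To make the contrast concrete, here is a minimal OCaml sketch (hypothetical types and names, not ntpd's code and not mine) of what keeping raw samples allows: with the raw (offset, delay) pairs in hand we can run any statistic over any window size we like, for example the common heuristic of taking the offset of the minimum-delay sample.

```ocaml
(* Hypothetical types for illustration only.  Keeping the raw samples
   means we can compute whatever statistics we want over however many
   samples we choose, instead of trusting a fixed-size window of
   pre-digested estimator values. *)
type sample = {
  offset : float;  (* measured offset from the server, in seconds *)
  delay  : float;  (* round-trip delay of the exchange, in seconds *)
}

(* One possible statistic: the offset of the minimum-delay sample, on
   the theory that low-delay exchanges suffer the least queueing and
   asymmetry error. *)
let best_offset (samples : sample list) : float option =
  match samples with
  | [] -> None
  | s :: rest ->
      let best =
        List.fold_left (fun a b -> if b.delay < a.delay then b else a) s rest
      in
      Some best.offset
```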

While searching the literature for better algorithms for processing NTP data, I came across RADclock, which is a radically different NTP implementation. First of all, it takes a feedforward approach – rather than adjusting the parameters of the very clock that is used to compare our timekeeping with a trusted server's, RADclock lets the CPU's timestamp counter [2] run free and measures all times in terms of counter values. This removes a tricky dependency – if our NTP client works only with counter values, our adjustments of the local clock can never affect past or future local time measurements. RADclock estimates the rate of our CPU counter against a trusted server's time, so on a RADclock host we can define a difference clock (for measuring durations) based on the estimated counter rate alone. To define an absolute clock, we additionally ask RADclock for the offset (which, again, it measures against the server it queries) between our counter-derived time and an absolute timescale such as UTC.
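To make the difference-clock/absolute-clock distinction concrete, here is a minimal OCaml sketch (the names and types are mine, not RADclock's): durations depend only on the estimated counter rate, while absolute times additionally use the estimated offset.

```ocaml
(* A minimal sketch, not RADclock's actual code; names are hypothetical.
   [rate] is the estimated counter period in seconds per tick, and
   [offset] anchors counter-derived time to an absolute timescale (UTC). *)
type estimate = {
  rate   : float;  (* seconds per counter tick, estimated against the server *)
  offset : float;  (* seconds added to rate *. counter to obtain UTC *)
}

(* Difference clock: the duration between two raw counter readings.
   Later adjustments to [offset] never change duration measurements. *)
let duration est ~c_start ~c_stop =
  est.rate *. Int64.to_float (Int64.sub c_stop c_start)

(* Absolute clock: a raw counter reading converted to UTC seconds. *)
let absolute_time est counter =
  est.rate *. Int64.to_float counter +. est.offset
```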

There are no complicated, hard-to-tune PLLs or FLLs or feedback loops or clock discipline algorithms as there are in the reference NTP implementation – RADclock has the simpler task of estimating the rate of an already-good oscillator (and the offset of the counter driven by that oscillator) against a reference server. Unlike the reference implementation, RADclock doesn't needlessly throw away data or prematurely compute estimators over it. There are far fewer parameters to tune, and the algorithm is, in fact, quite insensitive to changes in those parameters. The design decisions are justified with actual data taken over months of operation. There are many peer-reviewed papers about RADclock [3] that consistently show a significant performance improvement over the NTP reference implementation. Most significantly, the data in those studies compares RADclock's estimates of time against a gold standard – a data acquisition card with ~200 ns accuracy connected to both the RADclock host's network interface and a timing-grade GPS receiver. Unlike most ad-hoc experiments with the reference implementation (which tend to just quote the NTP software's own estimates of timing variability), the data showing that RADclock's simpler algorithms are an order of magnitude better was measured with an independent and extremely accurate system.

Unfortunately, RADclock is not a drop-in replacement for the reference NTP implementation (as other NTP daemons such as Chrony are). The host's kernel needs significant patching: it needs to expose the raw timecounter directly, provide an API by which the userspace RADclock process can push its estimates of timecounter rate/offset into the kernel, provide an API by which any other process can combine those estimates with the current timecounter value to generate a time, and attach the timecounter value to incoming NTP packets (so the userland RADclock process gets a timestamp with the least possible latency). The invasiveness and complexity [4] of the kernel patches means that, outside of speciality applications, the use of RADclock has to my knowledge been minimal.
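Roughly speaking, the interface those patches carve out of the kernel has the following shape; this OCaml signature is only a sketch with names of my own choosing, not the actual FreeBSD or Linux API.

```ocaml
(* A sketch of the shape of the kernel interface RADclock needs; the
   module and function names here are hypothetical. *)
module type RADCLOCK_KERNEL = sig
  (* Read the raw, free-running timecounter. *)
  val read_counter : unit -> int64

  (* The RADclock daemon pushes its latest rate/offset estimates here. *)
  val set_estimate : rate:float -> offset:float -> unit

  (* Any other process combines the stored estimates with the current
     counter value to obtain an absolute time. *)
  val get_time : unit -> float

  (* Raw counter value stamped on the most recently received NTP packet,
     so the daemon sees a timestamp taken with minimal latency. *)
  val last_packet_counter : unit -> int64 option
end
```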

Fortunately, in a unikernel environment, all these issues with RADclock are immaterial. I might have to write a few lines of C in the Xen Mini-OS to provide a clean interface to the timecounter (I haven't looked into that yet), but that is nowhere near the complexity of the RADclock patches for FreeBSD or Linux. Similarly, timestamping incoming/outgoing NTP packets with the timecounter value is not a difficult change to make, and presenting a new clock API that generates time values from the current timecounter value and the last estimated parameters is not too difficult either – there isn't a difficult userland/kernelspace boundary to work around. The actual RADclock algorithm is not fundamentally complicated or difficult to implement. While the source file it lives in is long (2273 lines of C), many of those lines are taken up by things such as fussing with indices into the ring buffers used to store samples, which can be simplified with better and more powerful data types (see the sketch below). Indeed, I have finished an OCaml implementation of the ring-buffer data type that RADclock uses to store samples, and today I started working on the code that queries the NTP server and generates samples (and passes them to the RADclock algorithm proper). Once I can generate NTP queries and receive, validate [5], and timestamp replies, I will implement the RADclock algorithm proper.
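For a flavour of how little bookkeeping the sample storage really needs, here is a minimal fixed-capacity ring buffer in OCaml; this is an illustrative sketch, not my actual implementation.

```ocaml
(* An illustrative fixed-capacity ring buffer for samples, not my actual
   implementation.  Once full, a push overwrites the oldest entry. *)
type 'a ring = {
  data         : 'a option array;
  mutable head : int;  (* slot the next push will write *)
  mutable len  : int;  (* number of valid entries, at most the capacity *)
}

let create capacity = { data = Array.make capacity None; head = 0; len = 0 }

let push r x =
  let cap = Array.length r.data in
  r.data.(r.head) <- Some x;
  r.head <- (r.head + 1) mod cap;
  if r.len < cap then r.len <- r.len + 1

(* The stored samples, newest first, ready for whatever statistics we
   want to run over them. *)
let to_list r =
  let cap = Array.length r.data in
  let rec go i acc =
    if i >= r.len then acc
    else
      let idx = (r.head - r.len + i + cap) mod cap in
      match r.data.(idx) with
      | Some x -> go (i + 1) (x :: acc)
      | None -> acc  (* unreachable for slots within [len] *)
  in
  go 0 []
```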


  1. While trying to understand the NTP reference implementation, I accidentally discovered and tweeted about a timing side-channel vulnerability in it on 10 March 2016 – more than a month before the vulnerability was publicly announced as CVE-2016-1550 on 27 April 2016 (I had no knowledge of any research or findings related to CVE-2016-1550 until the 27 April public release). I discovered the same timing-oracle vulnerability in the Chrony NTP implementation on 10 March 2016: the text in the screenshot is the function where the bug is introduced, UTI_CheckNTPAuth in util.c. This bug has not yet been fixed.

  2. Or, if the CPU TSC isn't suitable (that is, if it changes rate when the CPU changes power state), the HPET or ACPI counter (or whatever equivalent counter exists on the host system).

  3. Some of those refer to TSCclock, RADclock’s old name.

  4. The parts of the kernel that the patches are against change regularly, which means that the patches need to be revised often.

  5. I have code for validating reply packets left over from my abortive attempt at implementing the reference NTP algorithms in OCaml.