[ntp:questions] .1 Microsecond Synchronization

Terje Mathisen "terje.mathisen at tmsw.no" at ntp.org
Fri Jun 5 08:34:03 UTC 2009


ScottyG wrote:
> Thanks everyone for pointing out the, let's call it, silliness of this 
> requirement. Also thanks for all your quick responses.
> 
> I went back to the traders who defined this requirement. They do 
> seem to think that they know what they want, it's just not what they 
> are asking for. From my talks with them, the main goal is to be able 
> to unravel what happened when a set of trades fails.
> 
> To do this, the order in which market data was received and trades 
> were transmitted needs to be maintained. I do know from their current 
> log files that 1 ms is not fine enough for this and that on occasion 
> .1 ms is not good enough. They currently are using a feature of the 
> processors that seems to return the clock tick count on the microprocessor

OK, so they are using RDTSC in a single-server environment:

This is fine!
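
For reference, single-server RDTSC timestamping usually looks something 
like this (a minimal sketch assuming x86 with GCC/Clang intrinsics and an 
invariant TSC; the calibration against CLOCK_MONOTONIC is purely 
illustrative, not necessarily what their processor feature does):

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <x86intrin.h>          /* __rdtsc() on GCC/Clang */

/* Estimate the TSC frequency by comparing it against CLOCK_MONOTONIC
 * over ~100 ms.  Assumes an invariant (constant-rate) TSC. */
static double tsc_hz(void)
{
    struct timespec t0, t1;
    uint64_t c0, c1;
    double elapsed_ns;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    c0 = __rdtsc();
    do {
        clock_gettime(CLOCK_MONOTONIC, &t1);
        elapsed_ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    } while (elapsed_ns < 1e8);
    c1 = __rdtsc();

    return (c1 - c0) / (elapsed_ns * 1e-9);
}

int main(void)
{
    double hz = tsc_hz();
    uint64_t a = __rdtsc();
    uint64_t b = __rdtsc();     /* cost of one back-to-back read */

    printf("TSC ~ %.0f Hz, back-to-back RDTSC delta: %.3f us\n",
           hz, (b - a) * 1e6 / hz);
    return 0;
}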

Now they want to make the same kind of algorithm work in a distributed 
environment, right?

The first thing is to define what they mean by 'data was received'.

Is this the point in time at which the Ethernet packet containing said 
data was received by the network hardware, or the point when the trading 
application had collected the packet, parsed it, and was ready to update 
the shared DB and log the fact?
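
The distinction is measurable: on Linux, for example, you can ask the 
kernel for its own receive timestamp on every packet and compare it with 
the time the application finally reads the data. A rough sketch (software 
timestamps via SO_TIMESTAMPNS; true hardware NIC timestamps would need 
SO_TIMESTAMPING and suitable hardware; the port number here is arbitrary):

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/uio.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(5000) };  /* arbitrary port */

    /* Ask the kernel to attach its receive timestamp to every packet. */
    setsockopt(s, SOL_SOCKET, SO_TIMESTAMPNS, &on, sizeof on);
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    char buf[2048];
    union {
        char space[CMSG_SPACE(sizeof(struct timespec))];
        struct cmsghdr align;
    } ctrl;
    struct iovec iov = { buf, sizeof buf };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                          .msg_control = ctrl.space,
                          .msg_controllen = sizeof ctrl.space };

    if (recvmsg(s, &msg, 0) < 0)
        return 1;

    struct timespec app;
    clock_gettime(CLOCK_REALTIME, &app);    /* when the application got it */

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMPNS) {
            struct timespec krn;            /* when the kernel got it */
            memcpy(&krn, CMSG_DATA(c), sizeof krn);
            printf("kernel rx: %ld.%09ld  app rx: %ld.%09ld\n",
                   (long)krn.tv_sec, krn.tv_nsec,
                   (long)app.tv_sec, app.tv_nsec);
        }
    return 0;
}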

In the latter case (application-level timestamping), which actually 
makes some sense, you can indeed do this at the us level using a good OS 
(i.e. not Windows) and a GPS-based stratum 0 reference clock directly 
connected to each server.
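
On such a box the kernel can also report how tightly it thinks the clock 
is being disciplined; a quick read-only query through the standard 
ntp_adjtime() interface looks roughly like this (sketch only; offset is 
in microseconds unless STA_NANO is set):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { 0 };            /* modes = 0: read-only query */
    int state = ntp_adjtime(&tx);

    printf("clock state: %d (0 = TIME_OK)\n", state);
    printf("offset     : %ld (us, or ns if STA_NANO is set)\n", (long)tx.offset);
    printf("est. error : %ld us\n", (long)tx.esterror);
    printf("max error  : %ld us\n", (long)tx.maxerror);
    return 0;
}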

(BTW, CDMA (EndRun Technologies) would allow you to stay within 10 us or 
so even without roof/outside wall access.)

> What do you think you can achieve with, let's say, a 5,000-10,000 USD 
> budget for each data center? Could we get 1 micro, 10 micro, 100 
> micro, or 1 milli?

Sharing the PPS signal between all servers in a given location would 
probably allow you to get within 10 us for 99% of all timestamps.
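
Each server would then read the shared pulse through the RFC 2783 PPS 
API, roughly like this (sketch only; the /dev/pps0 device path is just an 
example, and Linux with LinuxPPS uses <timepps.h> instead of 
<sys/timepps.h>):

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <sys/timepps.h>        /* <timepps.h> on Linux with LinuxPPS */

int main(void)
{
    int fd = open("/dev/pps0", O_RDONLY);   /* example device path */
    pps_handle_t h;
    pps_params_t p;
    pps_info_t info;
    struct timespec timeout = { 3, 0 };

    if (fd < 0 || time_pps_create(fd, &h) < 0)
        return 1;

    /* Timestamp the rising (assert) edge of the shared pulse. */
    time_pps_getparams(h, &p);
    p.mode |= PPS_CAPTUREASSERT;
    time_pps_setparams(h, &p);

    if (time_pps_fetch(h, PPS_TSFMT_TSPEC, &info, &timeout) == 0)
        printf("PPS assert #%lu at %ld.%09ld\n",
               (unsigned long)info.assert_sequence,
               (long)info.assert_timestamp.tv_sec,
               info.assert_timestamp.tv_nsec);

    time_pps_destroy(h);
    return 0;
}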

Using an NTP appliance is right out, though, and so is Windows!

You _might_ be able to do it with Windows by using a dedicated FreeBSD 
machine as your GPS-hosting NTP server, and then using the (local 
segment, gbit, not routed) network to query that server for every 
timestamp, i.e. use the NTP server's time instead of the application 
server's time for all log file timestamps.

This would allow you to write the T2 NTP timestamp (the server's receive 
timestamp) to the log file, which means that the actual (legal) ordering 
of events would be based on when the NTP server received the time query, 
not on whatever the local (Windows?) server time happened to be when the 
application asked for that timestamp. This would establish a unique 
ordering within each location, and the ~10 us precision would be used to 
establish ordering across multiple sites by merging the logs.
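
For illustration, a bare-bones client that pulls the receive timestamp 
(T2) out of the 48-byte NTP reply could look like this (sketch only; the 
server address is made up, and real code would add error handling and 
reuse the socket):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NTP_EPOCH_OFFSET 2208988800UL   /* seconds from 1900 to 1970 */

int main(void)
{
    unsigned char pkt[48] = { 0x23 };   /* LI=0, VN=4, Mode=3 (client) */
    struct sockaddr_in srv = { .sin_family = AF_INET,
                               .sin_port = htons(123) };

    inet_pton(AF_INET, "192.168.1.10", &srv.sin_addr);  /* assumed server IP */

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    sendto(s, pkt, sizeof pkt, 0, (struct sockaddr *)&srv, sizeof srv);
    if (recv(s, pkt, sizeof pkt, 0) < 48)
        return 1;

    /* The receive timestamp (T2) is at bytes 32..39: 32-bit seconds
     * since 1900 plus a 32-bit binary fraction of a second. */
    uint32_t sec, frac;
    memcpy(&sec,  pkt + 32, 4);
    memcpy(&frac, pkt + 36, 4);
    sec  = ntohl(sec);
    frac = ntohl(frac);

    printf("server T2: %lu.%06lu (Unix time)\n",
           (unsigned long)(sec - NTP_EPOCH_OFFSET),
           (unsigned long)(((uint64_t)frac * 1000000ULL) >> 32));

    close(s);
    return 0;
}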

If you want a full design document, please contact me directly; I can 
probably write it for you.

Terje

-- 
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"



