[ntp:questions] .1 Microsecond Synchronization

Unruh unruh-spam at physics.ubc.ca
Fri Jun 5 00:22:56 UTC 2009


ScottyG <scottg at pepex.net> writes:

>Thanks everyone for pointing out the, let's call it silliness, of this 
>requirement. Also thanks for all your quick responses.

>I went back to the traders who defined this requirement. They do 
>seem to think that they know what they want, it's just not what they 
>are asking for. From my talks with them, the main goal is to be able 
>to unravel what happened when a set of trades fail.

>To do this, the order in which market data was received and trades 
>were transmitted needs to be maintained. I do know from their current log 
>files that 1 ms is not fine enough for this and that on occasion .1 

I think they are wrong. They do not want to unravel what happened; they
want to be able to claim that a certain sequence happened, whether it did
or not. There is just no way that .1 ms is of any relevance to the
ordering of trades. The whole process of entering a trade simply does NOT
have that accuracy anywhere in it. It is like timing walkers to a millionth
of a second: there is simply no way you can define the crossing of the
finish line to that accuracy. Thus all you get is noise. They may want to
fool themselves that the noise makes some sense, but it does not. 

>ms is not good enough. They currently are using a feature of the 
>processors that seems to return clock tick on the microprocessor 

Sure. That is what Linux does, or did, for its clock (and it gets screwed
up when the processor slows down for power conservation). 

>(Some assembly language instruction). They have an algorithm for 
>controlling the skew that occurs using this method. This seems to 
>meet their needs in a single-server scenario, but when going across 
>machines this will obviously not work.

And they obviously have no idea about significant figures or errors or
accuracy. They would flunk any first year physics course. 
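For reference, the processor feature they are describing is almost certainly
the time-stamp counter read by the RDTSC instruction, which returns raw CPU
cycles since reset, not a time of day. A minimal sketch in C of what that
looks like (assuming an x86 machine and GCC/Clang; the 3 GHz conversion
factor is an assumed placeholder, and that conversion is exactly what goes
wrong when the clock throttles or differs between machines):

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* provides __rdtsc() on GCC/Clang */

int main(void)
{
    /* __rdtsc() reads the processor's time-stamp counter: a count of
       cycles, not a calendar time. */
    uint64_t start = __rdtsc();

    /* ... the work being "timestamped" would go here ... */

    uint64_t end = __rdtsc();

    /* Converting ticks to seconds needs the TSC frequency.  3 GHz is an
       assumed value for illustration; on real hardware it must be
       calibrated, and on older CPUs it changes when the processor slows
       down for power saving -- which is the skew problem described above. */
    const double assumed_tsc_hz = 3.0e9;
    printf("elapsed: %.9f s (%llu ticks)\n",
           (double)(end - start) / assumed_tsc_hz,
           (unsigned long long)(end - start));
    return 0;
}

The counter is monotonic and fine-grained on one CPU, which is why it seems
to work for them on a single server; it says nothing about time on any other
machine.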


>What I would like to do is go back to them with reasonable 
>expectations. 

>What do you think you can achieve with, let's say, a 5,000-10,000 USD 
>budget for each data center? Could we get 1 micro, 10 micro, 100 
>micro, 1 milli?

You could make the clocks on the system run in such a way that the time
reading was accurate to about 5 microseconds on average on a machine
connected to a GPS clock. You could do that for $200. For $10,000 you
could get that down to 2 microseconds. But again, the reading of the
clock is NOT where the error is. The lines which feed in the information
that defines the trade, the entering of the trade, the signal delays on
the network, the signal delays within the software, the preemption of
the system by the operating system to read the disk, etc., will all
produce noise that is far greater than this. It is like asking someone
to measure the length of a room by pacing it off with their feet, and
then demanding that they give you the answer to the nearest micron. 
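To make the distinction concrete: reading the system clock is the cheap,
high-resolution part. A sketch of reading the NTP-disciplined clock on
Linux (plain POSIX clock_gettime, nothing specific to their setup) -- it
reports nanoseconds, but the resolution of the reading says nothing about
how closely the clock tracks true time, or about the much larger delays the
trade accumulated before this call ever ran:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;

    /* clock_gettime() returns the system time with nanosecond
       resolution.  Resolution is not accuracy: ntpd with a GPS
       reference keeps this clock within a few microseconds of true
       time at best, and the path from "order entered" to "timestamp
       taken" adds delays far larger than that. */
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }

    printf("%ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    return 0;
}

So a timestamp with nine digits after the decimal point is trivial to
produce; a timestamp that means anything at the .1 microsecond level is not.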


>One catch is that not all the data centers have access to roof space 
>for us. One company claims that they can use CDMA as a time source.  
>Does anyone know the implications of this? It seems that the time 
>would be sourced from GPS and retransmitted via the cell towers. To 
>me this brings up more potential delays, but I am not an expert.




