[ntp:hackers] [Gpsd-dev] Single-writer/many-reader consistency

Terje Mathisen terje at tmsw.no
Sat Mar 26 14:58:46 UTC 2011

Eric Raymond wrote:
> In previous mail Dave Hart wrote:
>> 8.  Come to agreement on the consistency protocol for one writer,
>> multiple readers, including use of memory barriers where available,
>> but do not require memory barrier support from the compiler, as (IIUC)
>> x86/x64 do not need them for volatile atomic accesses to be coherent
>> across processors, and the ovewhelming majority of current
>> refclock_shm users are on x86 or x64.
> I'm writing to point out that NTP and GPSD are facing near-identical
> problems here.  We too have a one-writer/many-readers shared-memory
> interface we're trying to design with good consistency guarantees.
> As noted in that other thread I started: If your premise is that you
> can't rely on memory barriers, you may *have* to go with a
> semaphore, because in their absence instruction reordering and strange
> memory-controller strategies can cause serious problems for spinlocks.

I've done some research, and it turns out that both gcc (from 
v4.something) and MS Visual C (from before V2005) have builtin support 
for memory barriers.

In the latter there's a nice chart showing all the ways various 
architectures relax the strict program-order memory consistency.

Quoting the second link above:
GCC since version 4.1.0 and the Intel C++ compiler have a special 
builtin for issuing a full hardware memory barrier:

	__sync_synchronize();

This means that we can simply #define a macro which in supported 
gcc/Intel compiler versions will turn into that intrinsic.

For the Windows port we have the MemoryBarrier() macro from <winnt.h>,
so the usual #ifdef WINNT stuff can easily select one or the other.
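
For example, such a macro could look something like this (a sketch only: 
MEM_BARRIER is a name I just made up, while __sync_synchronize() and 
MemoryBarrier() are the actual GCC/Intel builtin and Win32 macro):

	#include <stdio.h>

	#if defined(_WIN32)
	#  include <windows.h>                 /* MemoryBarrier() */
	#  define MEM_BARRIER() MemoryBarrier()
	#elif defined(__GNUC__) || defined(__INTEL_COMPILER)
	#  define MEM_BARRIER() __sync_synchronize()
	#else
	#  error "no full memory barrier known for this compiler"
	#endif

	static volatile int data;
	static volatile int ready;

	int main(void)
	{
	    data = 42;        /* write the payload...              */
	    MEM_BARRIER();    /* ...and make it globally visible   */
	    ready = 1;        /* ...before publishing the flag     */
	    printf("ready=%d data=%d\n", ready, data);
	    return 0;
	}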

> I myself am seriously considering abandoning the spinlock variation I
> invented in favor of a semaphore.  I invented my mechanism because I
> wanted to avoid contention on a semaphore, but in thinking about it
> further I think I may have been more worried about that than I should
> have been.
> Only one process (the single writer) would ever change the semaphore
> state; the readers would just block or spin until it goes to zero.  In
> GPSD writes are infrequent, now normally 1 per second and highly
> unlikely to exceed 100 per second even with survey-grade GPSes we
> don't support yet.  Writes are also small, less than 8K.

A shm segment has to be backed in hardware by a separately mapped 
memory page, right?

This means that the smallest possible block is 4KB, while the current 
gpsd-ntp interface uses just 88 bytes, including 40 bytes of padding.

Each timestamp block will require 24 bytes in the new aligned/fixed-size 
layout, so going to dual buffering (which means the clients will _never_ 
have to wait before reading, just check afterwards that the count 
variable is unchanged) will just reduce the padding to 16 bytes.
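
That read protocol is essentially a seqlock check; a minimal sketch 
with two slots (names and layout invented here for illustration, and 
the barriers discussed above elided):

	#include <stdio.h>

	struct stamp { long long sec; int usec; };
	struct seg   { volatile int count; struct stamp slot[2]; };

	/* Writer: fill the slot readers are NOT using, then bump count. */
	static void write_stamp(struct seg *s, struct stamp v)
	{
	    s->slot[(s->count + 1) & 1] = v; /* invisible to readers so far */
	    /* memory barrier here on weakly ordered hardware */
	    s->count++;                      /* publish the new slot */
	}

	/* Reader: copy first, then verify count did not change meanwhile. */
	static int read_stamp(const struct seg *s, struct stamp *out)
	{
	    int before = s->count;
	    *out = s->slot[before & 1];
	    /* memory barrier here as well */
	    return before == s->count;       /* nonzero => copy is consistent */
	}

	int main(void)
	{
	    struct seg s = {0};
	    struct stamp v = { 1301151526LL, 123456 }, r;
	    write_stamp(&s, v);
	    if (read_stamp(&s, &r))
	        printf("%lld.%06d\n", r.sec, r.usec);
	    return 0;
	}

A failed check just means "copy the slot again" - the reader never 
blocks, which is the whole point.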

I suggest we extend the buffer size to 4 entries, making the entire 
block 120 bytes, since this will fit inside the same two 64-byte cache 
lines as the current layout needs.

The final 8 bytes (to make the total 128) are where I want to locate an 
endian-detecting marker, "GPSD-NTP":

struct timeStamp {
        int64_t clockTimeStampSec;
        int64_t receiveTimeStampSec;
        int32_t clockTimeStampUSec;
        int32_t receiveTimeStampUSec;
};

struct shmTime {
        int64_t marker; /* 0x50544E2D44535047 == "GPSD-NTP" in LE */
        int32_t mode;   /* 2 + TIMESTAMP_BUFFER_SIZE * 256 */
        int32_t count;

        int32_t leap;
        int32_t precision;
        int32_t nsamples;
        int32_t valid;

        struct timeStamp ts[TIMESTAMP_BUFFER_SIZE];
};
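
As a sanity check of the size arithmetic above, a throwaway program can 
print the struct sizes (assuming the usual x86/x64 ABI alignment rules):

	#include <stdio.h>
	#include <stdint.h>

	#define TIMESTAMP_BUFFER_SIZE 4

	struct timeStamp {
	        int64_t clockTimeStampSec;
	        int64_t receiveTimeStampSec;
	        int32_t clockTimeStampUSec;
	        int32_t receiveTimeStampUSec;
	};

	struct shmTime {
	        int64_t marker;
	        int32_t mode;
	        int32_t count;
	        int32_t leap;
	        int32_t precision;
	        int32_t nsamples;
	        int32_t valid;
	        struct timeStamp ts[TIMESTAMP_BUFFER_SIZE];
	};

	int main(void)
	{
	    /* expect 24 per timestamp, 128 total = two cache lines */
	    printf("timeStamp=%zu shmTime=%zu\n",
	           sizeof(struct timeStamp), sizeof(struct shmTime));
	    return 0;
	}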

> Given this timing pattern, I think readers blocking on the semaphore seem
> unlikely to take a performance hit that is much above the level of
> profiling noise, if that.

They don't even need to do this: four timestamp buffer slots will 
effectively remove _all_ client-level contention.

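
With four slots the writer cycles through the ring, so a slot a reader 
has just started copying is not reused until four more writes have 
happened - at one write per second, a reader would have to stall for 
seconds mid-copy before its check could fail. A sketch (again with 
invented names and the barriers elided):

	#include <stdio.h>

	#define NSLOTS 4
	struct stamp { long long sec; int usec; };
	struct ring  { volatile int count; struct stamp ts[NSLOTS]; };

	/* Writer: fill the next slot, then advance count to publish it. */
	static void publish(struct ring *r, struct stamp v)
	{
	    r->ts[(r->count + 1) % NSLOTS] = v;
	    /* memory barrier here on weakly ordered hardware */
	    r->count++;
	}

	/* Reader: copy the newest slot; staleness is detected, never
	 * waited out, and the slot survives NSLOTS further writes. */
	static int fetch(const struct ring *r, struct stamp *out)
	{
	    int c = r->count;
	    *out = r->ts[c % NSLOTS];
	    return c == r->count;
	}

	int main(void)
	{
	    struct ring r = {0};
	    for (int i = 1; i <= 5; i++)
	        publish(&r, (struct stamp){ 1301151520LL + i, i });
	    struct stamp s;
	    if (fetch(&r, &s))
	        printf("%lld.%06d count=%d\n", s.sec, s.usec, r.count);
	    return 0;
	}
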
> If NTP thinks it's looking at a similar timing pattern, you might want
> to seriously look at using an explicit semaphore.
> (I'm having big fun with all this.  Eric gets to think like an OS implementor
> again, yay!  I've missed that, I have.)



- <Terje at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
