[ntp:hackers] [Gpsd-dev] A lightweight synchronization mechanism for shared memory
f96hajo at chalmers.se
Fri Mar 25 08:33:22 UTC 2011
On Fri, 25 Mar 2011, Terje Mathisen wrote:
> Eric Raymond wrote:
>> tz<thomas at mich.com>:
>>> Anyway it won't work if memcpy is interruptible.
>>> Reader starts the memcpy, copies the start sentinel and half the payload.
>>> Writer increments the start sentinel. and writes into the payload.
>>> Reader continues the memcpy where it left off through the end sentinel.
>>> Both sentinels are equal, yet the bottom part of the payload is invalid.
>> Hm. You're right. The copy has to be uninterruptible.
>>> The only way I can think of is a single writer sets a bool to safe to
>>> read - unsafe to write, and a single reader sets it back to safe to
>>> read - unsafe to write.
>> Doesn't help the multiple-readers case.
> I've been reading the source code for this shm interface, and to me it
> seems almost like a no-brainer to go to a circular buffer of 2^N
> entries, along with a single (write-update only) counter at the front
> end which indicates where the latest valid entry is located.
Going to a buffer with more than one entry does not by itself give any
further guarantees, and it opens another loophole:
If the writer crashes mid-update, say in entry 3, then until the next
live writer has overwritten that entry, the reader may get the idea that
the previous entry (number 3) is good and usable, for instance as soon as
the new writer claims to be writing entry 4.
Perhaps this is fixable by having part of the count (or another variable)
also hold a bitmask of the valid entries, but such an approach requires
analysing a lot more contrived edge cases to make guarantees, if that is
possible at all, as opposed to the 1-entry method: there we know it cannot
always allow a successful correct read within finite time, but we know how
to make any claimed successful read actually be one.
> The readers (one or more) can then always read the last valid entry, it
> might be a re-read of what they read on the previous iteration, but that
> is something which it is easy for the reader to guard against:
> unsigned cnt = shm->count;
> if (cnt == prev_cnt) return;
> When the counter is valid you mask it with the buffer count mask to
> bring it into range:
If I'm only allowed to mask the counter when it is valid, then during any
writer update all entries are unusable anyhow, so 2^N entries are no
better than one.
> prev_cnt = cnt;
> cnt &= shm->count_mask; // Bring it into [0..7] range?
> Copy the current entry into local vars:
> This is of course still not absolutely bulletproof, in that a reader
> process which starts to read an entry, then gets put to sleep for N(=8?)
> seconds, can wake up again and read the last half of an updated record.
> This is easy to guard against by re-reading the shm->count value: Since
> this counter is explicitly allowed to wrap around at 2^32 or 2^64, you
> cannot sleep for that many seconds.
> a) You can always read the last valid entry.
> b) The only sync barrier needed is in the producer (gpsd) which should
> have a write barrier between the last write to the new record, and the
> increment of the shm->count variable.
Read barriers are needed too. Please see the Linux kernel documentation.
Also note how the word 'volatile' is absent.
Another point to note is that the way in which the memory copy is
implemented (inlined, a function call, a for loop, even with local CPU
interrupts disabled) does not help against the interplay of the memory
subsystems of the two CPUs involved, which, without write/read barriers,
may deliver the read data as a mix of new and old.
> c) The protocol is _very_ similar to the current shm->mode == 1 code.
> - <Terje at tmsw.no>
> "almost all programming can be viewed as an exercise in caching"
> Gpsd-dev mailing list
> Gpsd-dev at lists.berlios.de