[ntp:hackers] [Gpsd-dev] ntpd shm changes
davehart_gmail_exchange_tee at davehart.net
Sun Mar 20 20:03:44 UTC 2011
On Sun, Mar 20, 2011 at 17:42 UTC, Jon Schlueter wrote:
> On Sun, Mar 20, 2011 at 10:17 AM, tz <thomas at mich.com> wrote:
>> On Sun, Mar 20, 2011 at 8:39 AM, Jon Schlueter <jon.schlueter at gmail.com> wrote:
>> I don't think double-checking can give any advantage over single-checking.
>> You either need a locking mechanism, a way to disable interrupts
>> (across all cores!), or use some kind of queuing/serialization
> I reference it since there has been a decent amount of information about what
> can go wrong if you don't use some sort of locking to share a resource
Thanks for the pointer. You seem to have overlooked the one bright
spot in that sea of "not reliably", namely, the section mentioning one
can portably implement reliable lockless Java synchronization using
32-bit loads and stores. That rests on an assumption, no longer true,
that Java is always itself a 32-bit platform, but I suspect it is safe
to assume int-sized loads and stores are atomic in general, and on
many (but not all) 32/64 biarch systems, 32-bit loads and stores are
also atomic for 64-bit code. I retain hope for a strategy of storing
valid & count (or similar) in at least 64-bit storage, perhaps 128,
and manipulating them using the native int size. I believe count can
be safely used much as it is now with a 32-bit producer and a 64-bit
consumer, or vice versa, because the low-order bits will change and be
visible to both, and the loss of a carry into the upper 32 bits
doesn't hurt this use.
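To make the count-bracketing idea concrete, here is a minimal sketch in C. The struct layout and function names are hypothetical, not the actual refclock_shm layout; it assumes only that 32-bit loads and stores are atomic, as discussed above. The producer bumps count before and after each update, so a reader that sees the same even value on both sides of its copy knows the copy was not torn:

```c
#include <stdint.h>

/* Hypothetical shared-segment layout: count brackets the payload so a
 * reader can detect a torn read.  Only the low-order 32 bits of count
 * need to be seen consistently, so a 32-bit producer and a 64-bit
 * consumer (or vice versa) can interoperate. */
struct shm_sample {
    volatile uint32_t count;      /* bumped before and after each update */
    volatile int      valid;
    volatile int64_t  clock_sec;  /* payload; 64-bit reads may tear,    */
    volatile int64_t  clock_nsec; /* which the count check catches      */
};

/* Producer: count goes odd while the update is in flight, even again
 * when it is complete. */
void shm_write(struct shm_sample *s, int64_t sec, int64_t nsec)
{
    s->count++;
    s->clock_sec  = sec;
    s->clock_nsec = nsec;
    s->count++;
    s->valid = 1;
}

/* Consumer: copy the payload, then verify count was stable and even.
 * Returns 1 on a consistent read, 0 if the caller should retry. */
int shm_read(struct shm_sample *s, int64_t *sec, int64_t *nsec)
{
    uint32_t before = s->count;
    int64_t  tsec   = s->clock_sec;
    int64_t  tnsec  = s->clock_nsec;
    uint32_t after  = s->count;

    if (before != after || (before & 1) || !s->valid)
        return 0;    /* update in progress or torn read; retry */
    *sec  = tsec;
    *nsec = tnsec;
    return 1;
}
```

This only addresses compiler-visible ordering via volatile; on architectures with weaker memory ordering than x86, hardware barriers would also be needed.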
> The following are places where shared access without a
> synchronization mechanism will break:
> * Instruction ordering (the compiler may reorder what you wrote):
>   code generation / compiler optimization / hardware optimization
> * The volatile keyword may or may not do what you want it to do... see
>   section 5 of the above document
> * Memory caches on a multi-core processor
> If you have not read this article or are not familiar with this
> issue, I recommend taking the time to read through Scott Meyers'
> article before dismissing this as not being an issue for a shared
> memory segment between two apps wanting to exchange information.
Thanks again for the pointer, but the discussion of volatile there is
not relevant to C. There's a lovely historical perspective on the
introduction of volatile by Gordon Bell, and discussion of the C++
semantics of volatile, but not C. C _does_ prohibit reordering
accesses between volatiles. Along with careful use of atomic sizes, I
believe that is all that is needed on x86. x86 multiproc/multicore
systems have rational cache coherency for atomic reads and writes.
I'm not comfortable enough with other systems to comment, but I do
note the existing lockless refclock_shm driver has not, as far as I
know, been observed to have any partial-update issues in the wild with
the newer mode 1 (count and valid used).
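The C guarantee relied on above can be illustrated with a small sketch (names are illustrative only): because accesses to volatile objects may not be reordered with respect to each other by the compiler, a flag/data handoff works at the compiler level when both objects are volatile. On x86 the hardware additionally keeps stores in order; weaker architectures would still need barriers:

```c
/* Both objects volatile: the compiler must emit the store to data
 * before the store to ready, and the load of ready before the load
 * of data, matching the program order written here. */
static volatile int data;
static volatile int ready;

void publish(int v)
{
    data  = v;   /* emitted before the store to ready */
    ready = 1;
}

/* Returns 1 and fills *out once publish() has run, else 0. */
int consume(int *out)
{
    if (!ready)
        return 0;
    *out = data; /* emitted after the load of ready */
    return 1;
}
```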
It may be best to use pthread_mutex_trylock() on the ntpd side to
ensure nonblocking operation, and to restrict the hypothetical new
POSIX named-shm mode of refclock_shm.c, with the new shm layout, to
systems building ntpd with pthreads. With any luck that routine is
itself implemented via atomic load/store rather than a syscall.
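A minimal sketch of that nonblocking read path, with hypothetical names (a mutex placed in an actual shared segment would also need the PTHREAD_PROCESS_SHARED attribute set via pthread_mutexattr_setpshared()):

```c
#include <pthread.h>

/* Hypothetical ntpd-side poll: try the lock and simply skip this poll
 * if the producer holds it, so the daemon never blocks on a slow or
 * wedged refclock feeder. */
int try_read_sample(pthread_mutex_t *lock, const int *shared_val, int *out)
{
    if (pthread_mutex_trylock(lock) != 0)
        return 0;    /* EBUSY: producer mid-update, try again next poll */
    *out = *shared_val;
    pthread_mutex_unlock(lock);
    return 1;
}
```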
Thanks again to everyone, I hope we can get it right on the first try
as a result.