[ntp:hackers] [Gpsd-dev] Single-writer/many-reader consistency: Test program written & verified

Terje Mathisen terje at tmsw.no
Mon Mar 28 14:55:07 UTC 2011


tz wrote:
> On Mon, Mar 28, 2011 at 8:21 AM, Terje Mathisen<terje at tmsw.no>  wrote:
>> Terje Mathisen wrote:
>
>> All threads done!
>> Thread 0 did 1020921970 iterations
>> Thread 1 did 1939160384 iterations
>> Thread 2 did 924938960 iterations
>> Thread 3 did 2002669713 iterations
>> counter_update = 1047208131, counter_retry = 365502934,
>> timestamps_inconsistent = 0
>>
>> I.e. 1e9 write operations, nearly 4e9 reads.
>>
>> Of the reads, about 1e9 (i.e. roughly 25%) happened in the middle of an
>> update, and about one third of those required a retry due to the counter
>> variable getting close to wrapping around.
>
> Which architecture? Alpha is apparently the most interesting one.

Alpha will work as well, as long as the compiler obeys the rules about 
not moving load/store operations past the barrier.
>
> Also, would it be possible to try higher and higher update rates until
> we detect an inconsistency, instead of just saying 100 Hz seems safe?

That's exactly what I did! The writer managed 13 MHz, the readers 
significantly more.
>
> Others have been pointing out that different architectures guarantee
> different things, so if the architecture you tested on guarantees no
> (undetected) errors at 400% cpu even without a write barrier the test
> is not of the algorithm.
>
> I'd be curious how fast you could do things on an alpha without a
> write barrier.  Or if you are going to use a write barrier for the
> architectures, the code that implements it - the long #if ARCH chain.
> Or will __sync_synchronize() be enough?

As long as the compiler supports it, the __sync_synchronize() builtin will 
indeed be sufficient to make the algorithm safe at any speed.

Terje
-- 
- <Terje at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
