[ntp:questions] 500ppm - is it too small?

Joseph Gwinn joegwinn at comcast.net
Fri Nov 13 16:47:12 UTC 2009


In article <hdi4j2$fpe$1 at news.eternal-september.org>,
 David Woolley <david at ex.djwhome.demon.invalid> wrote:

> Joseph Gwinn wrote:
> 
> > No, 8 bits isn't arbitrary.  
> > 
> > Computer hardware is simplified if the various word lengths are all 
> > powers of two.
> 
> Not significantly.  Early machines commonly did not use 8 bit multiples, 
> and they would have been much more sensitive to efficient use of 
> hardware.  

The problem was that logic hardware was very expensive, the primary cost 
driver was word length, and so many corners were cut.  I programmed a 
reasonable sample of those oddball computers in assembly.  I am not at 
all nostalgic about them - may they rot in museums forever, gawked at by 
bored children.


>  About the only place where it might be of advantage in modern 
> systems is if the machine instructions allow addressing individual bits, 
> because it wouldn't waste bit offset codes.

If I recall from the late 1960s, the IBM 1401 had variable-length words, 
where the length granularity was one character - a 6-bit BCD code plus a 
word-mark bit that delimited fields.  (I never programmed the 1401, but 
I recall that fields could be any number of characters long.)

 
> I suspect a major factor in IBM using 8 bits was binary coded decimal 
> arithmetic.  Some very early machines worked in BCD rather than binary, 
> as they were intended for doing commercial arithmetic, and this results 
> in a 4 bit unit.  8 bits is the smallest multiple of this that handles 
> characters well.

This sounds plausible to me, and does not conflict with the reasons 
discussed below.
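
To make the digit arithmetic concrete, here is a minimal C sketch of my 
own (not code from any of those machines) showing why one decimal digit 
fits in a 4-bit unit and two digits pack into an 8-bit byte:

    #include <stdio.h>
    #include <stdint.h>

    /* One decimal digit (0-9) fits in 4 bits, so two digits pack
       into one 8-bit byte, one per nibble. */
    static uint8_t bcd_pack(unsigned tens, unsigned ones)
    {
        return (uint8_t)((tens << 4) | ones);
    }

    static void bcd_unpack(uint8_t b, unsigned *tens, unsigned *ones)
    {
        *tens = b >> 4;     /* high nibble */
        *ones = b & 0x0F;   /* low nibble  */
    }

    int main(void)
    {
        uint8_t b = bcd_pack(4, 2);  /* decimal 42 encodes as 0x42 */
        unsigned t, o;
        bcd_unpack(b, &t, &o);
        printf("0x%02X -> %u%u\n", b, t, o);
        return 0;
    }

Note the happy accident that packed BCD reads naturally in hexadecimal: 
decimal 42 comes out as 0x42.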

 
> The Manchester University Atlas architecture used 6 bit sub-units of its 
> words for characters (with shift codes).  The Digital PDP7 used 18 bit 
> words - that was definitely a discrete transistor design, so efficient 
> use of hardware would be particularly important.
> 
> Making memory sizes powers of two does have real advantages.

This is closer to the reason.  While individual parts of the processor 
could be separately optimised, and were, this led to a lot of strange 
glue logic and difficult-to-program architectures.

If all the various widths were powers of two, everything fit together 
nicely.  
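
For example (again a sketch of my own, not any particular machine's 
logic), power-of-two sizes turn address arithmetic into shifts and 
masks:

    #include <stdint.h>
    #include <stddef.h>

    /* Byte address of element i in an array of 8-byte (2^3) elements:
       the multiply becomes a shift. */
    static uintptr_t element_addr(uintptr_t base, size_t i)
    {
        return base + (i << 3);
    }

    /* An address is naturally aligned to a power-of-two size when its
       low log2(size) bits are all zero - a single mask, no division. */
    static int is_aligned(uintptr_t addr, size_t size)
    {
        return (addr & (size - 1)) == 0;
    }

With odd word lengths, both operations would need real multiplies and 
divides - which is exactly the strange glue logic mentioned above.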

At the same time, the original happenstance instruction sets were giving 
way to instruction sets designed to be "orthogonal", which leads to a 
small set of simple instructions with wide application and few 
exceptions.  Having every length be a naturally aligned power of two 
greatly helps with orthogonality, so the two co-evolved.

The prototypical example of an orthogonal instruction set was the 
PDP-11.  The Motorola 68000 family was an outgrowth of it.

In summary, there were multiple reasons all pushing toward the 
present-day architectures, where everything is a power of two.

And it had nothing whatsoever to do with timekeeping.


Joe Gwinn
