I'm a relative newcomer to this game, to the extent that I've never used
a machine with a non-power-of-two wordsize.  Since I've always used 8 bit
bytes, 16 or 32 bit integers, and 32 or 64 bit floats, I've always seen
hexadecimal as the natural computer-oriented base, and octal as a bit of
a brain-damaged perversion.  As I learn more about the history of
computing, though, it's becoming apparent that the switch from octal to
hex was a fairly recent phenomenon.

Gill's "Machine and Assembly Language Programming of the PDP-11" (1978,
revised 1983), for instance, uses octal extensively and doesn't mention
hex at all.  This despite the fact that the PDP-11 has 16 bit words and
addressable 8 bit bytes.  So you get sentences like "For example, the
word whose actual (binary) contents is 1010011100101110 has the octal
contents 123456; its low byte has the octal contents 056, and its high
byte, 247."  Gill doesn't mention that in hex those same numbers are
0xA72E, 0x2E and 0xA7, which seems much more straightforward to me.
After all, if you're going to use a non-decimal base, why not use the
one best-suited for the job?

The C language supports both octal and hexadecimal constants, but uses
octal escape sequences to represent special characters.  The Kernighan
and Ritchie manual (1978) uses octal for all the examples of bitwise
operations.  The 0 and 0x conventions clearly indicate octal as the
default choice.  Did these conventions originate with C or were they
inherited from earlier usage?
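
For concreteness, here is a small C fragment showing the conventions side
by side (arbitrary values; note that the 1978 manual has no hex escape
sequences at all -- \xhh only arrived later, with ANSI C):

    #include <stdio.h>

    int main(void)
    {
        int n = 0644;          /* leading 0 marks octal: 420 decimal  */
        int m = 0x1A4;         /* leading 0x marks hex: also 420      */
        char bell = '\007';    /* character escapes are octal only    */

        printf("%d %d %d\n", n, m, bell);    /* prints: 420 420 7 */
        return 0;
    }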

Getting closer to the present, PostScript appears to prefer hexadecimal
("PostScript Language Reference Manual", Adobe, 1985).  It supports
arithmetic in any radix from 2 to 36, octal escape sequences (those
appear to be an unavoidable standard), and a hexadecimal, but no
octal, representation for strings and bitmaps.

So, old timers, do I have the history about right?  Did hex come into
common usage as recently as the late 70's?  Does anyone still use octal
for any reason other than to adhere to earlier standards?  Was it just
inertia that kept the octal system from going out of fashion along with
6 bit characters?  Comments?  Flames?  Religious denunciations?

-- 
Louis Howell

  "A few sums!" retorted Martens, with a trace of his old spirit.  "A major
navigational change, like the one needed to break us away from the comet
and put us on an orbit to Earth, involves about a hundred thousand separate
calculations.  Even the computer needs several minutes for the job."



From article <70205@lll-winken.LLNL.GOV>,
by howell@grover.llnl.gov (Louis Howell):
> 
> So, old timers, do I have the history about right?

In the days when debugging of machine code was very important, the choice
between hex and octal tended to be very machine specific, depending on
the prevalence of 3 or 4 bit fields in the machine language.  All of the
following examples are for machines with 8 bit bytes:

--------

The IBM 360, an architecture from the mid 1960's, always used hex because
the instruction was full of 4 bit fields that were aligned on hex digit
boundaries.  Here's your typical IBM 360 instruction:
    _______________ _______ _______ _______ _______________________
   |_______________|_______|_______|_______|_______________________|
   |  8 bit opcode |   R   |   X   |   B   |     12 bit offset     |

This made sense with 16 general registers.  R, X, and B each specified a
register, R specified an operand register, X gave an index register, and
B gave a base register.  Some instructions were only the first 16 bits of
this format.
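
In C terms (just a sketch, and the mnemonic is from memory -- 0x5A should
be the RX-format Add), each field can be read straight off the hex digits
of a dump:

    #include <stdio.h>

    int main(void)
    {
        unsigned long insn = 0x5A34B123;   /* A 3,X'123'(4,11) */

        printf("op=%02lX R=%lX X=%lX B=%lX D=%03lX\n",
               (insn >> 24) & 0xFF,    /* two hex digits:   5A  */
               (insn >> 20) & 0xF,     /* one hex digit:    3   */
               (insn >> 16) & 0xF,     /* one hex digit:    4   */
               (insn >> 12) & 0xF,     /* one hex digit:    B   */
               insn & 0xFFF);          /* three hex digits: 123 */
        return 0;
    }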

--------

The DEC PDP-11, an architecture from the late 1960's, always used octal
because the instruction was full of 3 bit fields aligned on octal digit
boundaries.  Here's a typical instruction:
    _______ _____ _____ _____ _____ 
   |_______|_____|_____|_____|_____|
   |  op   |  M     R  |  M     R  |
   | code  |  source   |destination|

This machine had 8 general registers (3 bits for each of the two R
fields), and 8 addressing modes (3 bits for the two M fields).  The 4 bit
opcode used the leftovers.  Some address modes involved additional 16 bit
words for constants or indexing.
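
A small C illustration of the difference (my own example instruction, not
from the Processor Handbook): printed in octal, the mode and register
fields are simply the digits of the word, while the hex rendering of the
very same word chops those fields mid-digit:

    #include <stdio.h>

    int main(void)
    {
        unsigned insn = 060102;    /* PDP-11 ADD R1,R2 */

        printf("%06o  %04X\n", insn, insn);    /* prints: 060102  6042 */
        printf("src: mode %o reg %o   dst: mode %o reg %o\n",
               (insn >> 9) & 7,    /* each field is one octal digit */
               (insn >> 6) & 7,
               (insn >> 3) & 7,
               insn & 7);
        return 0;
    }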

Because UNIX was largely developed on the PDP-11 in the 1970's, most of
the portable UNIX tools use octal, with support for hexadecimal tending to
be a bit spotty at best.

--------

Software standards centered on the Intel 8080 family, dating from the
early to mid 1970's (admittedly descended from the 4004 and the 8008, and
including the 8086 and 80x86 families), tend to be octal because the
basic 8080 instruction format is also centered on 3 bit fields:
    ___ _____ _____ 
   |___|_____|_____|
   |OP |  D  |  S  |

This machine had 8 registers, 8 bits each, although hardly general
purpose.  Many instructions had additional bytes, and most instructions
only specified one register, so the bits of either the D or S field were
available for opcode extension.
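
The same exercise in C (a sketch; 0x78, alias octal 0170, is the familiar
MOV A,B):

    #include <stdio.h>

    int main(void)
    {
        unsigned op = 0170;     /* 8080 MOV A,B -- 0x78 in hex */

        printf("op=%o D=%o S=%o\n",
               (op >> 6) & 3,   /* 01 is the MOV group     */
               (op >> 3) & 7,   /* destination: 7 means A  */
               op & 7);         /* source: 0 means B       */
        return 0;
    }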

--------

Conclusion:  The reason you might think hex is a newer standard is
that more new architectures have 16 registers, making hex a more
natural radix for machine code debugging, and that machine-level
debugging has declined in importance during the last 20 years, freeing
people to choose hex because it works better with 8 bit bytes.

				Doug Jones
				jones@herky.cs.uiowa.edu


From dscatl!wcieng!emory!swrinde!zaphod.mps.ohio-state.edu!usc!apple!agate!usenet Mon Oct 29 11:59:09 EST 1990


In article <70205@lll-winken.LLNL.GOV> howell@grover.llnl.gov (Louis Howell) writes:
>
>So, old timers, do I have the history about right?  Did hex come into
>common usage as recently as the late 70's?  Does anyone still use octal
>for any reason other than to adhere to earlier standards?  Was it just
>inertia that kept the octal system from going out of fashion along with
>6 bit characters?  Comments?  Flames?  Religious denunciations?
>

No comments on ascendancy of hex but ...

The original preference for octal over hex is based on the fact that
most people can quickly learn the mappings of octal digit <-> 3 binary
digits  (e.g. 5 == 101).  If you need to transcribe front panel lights
to paper this can be a very effective shorthand.  Most humans (excluding,
of course, those who can do Gray codes on their toes) have some trouble
doing all 16 mappings for hex <-> binary (quickly now, ? == 1011).

I believe that this is covered in one of George A. Miller's essays in 
_The_Psychology_of_Communication_.

As we now boot computers by flipping a switch, a talent for quickly
copying the pattern of front-panel lights to paper rarely appears as
a qualification in misc.jobs.offered.

-------------------
	Brad Sherman (bks@alfa.berkeley.edu)
Warning to the literal minded and other computer Maoists:
The above may contain irony, sarcasm, sardonicism, burlesque, parody,
farce, cynicism, caricature or other literary forms.  If you do not have
a sense of humor please spend more time studying the speeches of the
Vice President.


From dscatl!emory!sol.ctr.columbia.edu!lll-winken!uunet!comp.vuw.ac.nz!gp.govt.nz!zl2tnm!don Mon Oct 29 12:06:18 EST 1990

howell@grover.llnl.gov (Louis Howell) writes:

> I'm a relative newcomer to this game, to the extent that I've never used
> a machine with a non-power-of-two wordsize.  Since I've always used 8 bit
> bytes, 16 or 32 bit integers, and 32 or 64 bit floats, I've always seen
> hexadecimal as the natural computer-oriented base, and octal as a bit of
> a brain-damaged perversion.  As I learn more about the history of
> computing, though, it's becoming apparent that the switch from octal to
> hex was a fairly recent phenomenon.
> 
> Gill's "Machine and Assembly Language Programming of the PDP-11" (1978,
> revised 1983), for instance, uses octal extensively and doesn't mention
> hex at all.  This despite the fact that the PDP-11 has 16 bit words and
> addressable 8 bit bytes.  So you get sentences like "For example, the
> word whose actual (binary) contents is 1010011100101110 has the octal
> contents 123456; its low byte has the octal contents 056, and its high
> byte, 247."  Gill doesn't mention that in hex those same numbers are
> 0xA72E, 0x2E and 0xA7, which seems much more straightforward to me.
> After all, if you're going to use a non-decimal base, why not use the
> one best-suited for the job?

Aha, the pdp11 may be a 16 bit machine, but the early models used the
18 bit UNIBUS, which is a convenient multiple of 3, but not 4, making
octal more "logical" for the people putting together UNIBUS systems.

The instruction format of the pdp11 is based on three bit fields; for
example, every pdp11 hacker's favourite opcode is:

                        014747
 
where the 01 decodes as a MOV instruction, the first 47 is the source,
the second the destination.  In each 47, the 4 indicates the addressing
mode to use, in this case autodecrement, and the 7 indicates a register
to use.  The instruction therefore is:

                        MOV -(R7), -(R7)
or
                        MOV -(PC), -(PC)

Sticking this in high memory and single stepping the processor is quite
a fun party trick, but I digress.

Since the whole pdp11 instruction set is laid out like this, it is
relatively easy to assemble and disassemble octal pdp11 code by hand.
Doing so in hex would be a nightmare (you'd really have to convert the
opcodes to/from octal anyway).
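
The digit-peeling is mechanical enough to fit in a few lines of C (a
sketch that handles only the MOV-with-autodecrement case above):

    #include <stdio.h>

    int main(void)
    {
        unsigned insn = 014747;    /* the party trick above */
        unsigned sm = (insn >> 9) & 7, sr = (insn >> 6) & 7;
        unsigned dm = (insn >> 3) & 7, dr = insn & 7;

        /* 01 in the top digits is MOV; mode 4 is autodecrement. */
        if (((insn >> 12) & 017) == 01 && sm == 4 && dm == 4)
            printf("MOV -(R%o),-(R%o)\n", sr, dr);  /* MOV -(R7),-(R7) */
        return 0;
    }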

Hex has only one big advantage over octal, and that is that it allows
bytes to be represented within words.  On the flip side, there are more
characters to remember, and writing a hex dump routine is more
difficult, best done with a table, whereas an octal dump can be achieved
by shifting, ANDing with 7 and adding 60(8).  Of course a table can be
used for octal as well, and the table need only be 8 bytes.
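
Both halves of that claim are easy to demonstrate in C (a sketch for
single bytes; the function names are mine):

    #include <stdio.h>

    /* Octal: pure arithmetic -- shift, AND with 7, add 60(8). */
    void put_octal(unsigned char c)
    {
        putchar(060 + ((c >> 6) & 3));
        putchar(060 + ((c >> 3) & 7));
        putchar(060 + (c & 7));
    }

    /* Hex: simplest with a 16 entry lookup table. */
    void put_hex(unsigned char c)
    {
        static char digits[] = "0123456789ABCDEF";
        putchar(digits[(c >> 4) & 017]);
        putchar(digits[c & 017]);
    }

    int main(void)
    {
        put_octal(0xA7);    /* prints 247 */
        putchar(' ');
        put_hex(0xA7);      /* prints A7  */
        putchar('\n');
        return 0;
    }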

> The C language supports both octal and hexadecimal constants, but uses
> octal escape sequences to represent special characters.  The Kernighan
> and Ritchie manual (1978) uses octal for all the examples of bitwise
> operations.  The 0 and 0x conventions clearly indicate octal as the
> default choice.  Did these conventions originate with C or were they
> inherited from earlier usage?

C and Unix were developed largely on the pdp11, so pdp11 conventions
would have coloured some of these choices a bit.

> Getting closer to the present, PostScript appears to prefer hexadecimal
> ("PostScript Language Reference Manual", Adobe, 1985).  It supports
> arithmetic in any radix from 2 to 36, octal escape sequences (those
> appear to be an unavoidable standard), and a hexadecimal, but no
> octal, representation for strings and bitmaps.

The C/Unix influence is there; note the use of {} for executable blocks,
octal escapes, Ctrl/D EOF markers etc.  However, PostScript is a
relatively recent phenomenon, born when the 8 bit byte was quite clearly
here to stay.  Quite simply, someone realised that debugging octal
bitmap data was going to be Too Hard when that data originates from a
machine with an 8*n bit word size.

> So, old timers, do I have the history about right?  Did hex come into
> common usage as recently as the late 70's?  Does anyone still use octal
> for any reason other than to adhere to earlier standards?  Was it just
> inertia that kept the octal system from going out of fashion along with
> 6 bit characters?  Comments?  Flames?  Religious denunciations?

VAX/VMS uses hex for most things, except of course UNIBUS addresses and
a few other holdovers from its pdp11/RSX11 parentage (octal UICs etc).
That was released in 1978; I recall most very small micros of the period
using hex, notably the Apples of ~1976.

So, octal:
        - Is the obvious choice for n*6 bit machines, e.g. most DEC
          machines except the PDP-11 and VAX
        - Was the obvious choice for the pdp11 when dealing with
          instructions (rather than data)
        - Was not a problem in the days of 6 bit characters
        - Is marginally easier to learn and use than hex
        - Uses fewer symbols
        - Is more difficult to use with 8*n bit computers or 8 bit
          bytes.


Don Stokes, ZL2TNM  /  /                            Home: don@zl2tnm.gp.govt.nz
Systems Programmer /GP/ Government Printing Office  Work:        don@gp.govt.nz
__________________/  /__Wellington, New Zealand_____or:_PSI%(5301)47000028::DON


From dscatl!emory!samsung!zaphod.mps.ohio-state.edu!rpi!uupsi!sunic!ericom!tnetxa.ericsson.se!eds.ericsson.se!lmebgo Mon Oct 29 12:07:56 EST 1990

In article <70205@lll-winken.LLNL.GOV>, howell@grover.llnl.gov 
(Louis Howell) writes:

> So, old timers, do I have the history about right?  Did hex come into
> common usage as recently as the late 70's?  

Both octal and hex were already in use in the 50's: octal in all the binary
IBM machines (701, 704, 709/90/94, 7040/44) with their 36 bit word length,
and hex in various one-of-a-kind machines with 40 bit word lengths.

-----
Bengt Gallmo                        e-mail: lmebgo@eds.ericsson.se
Telefonaktiebolaget L M Ericsson    phone:  +46 8 719 1940
S-126 25 STOCKHOLM                  fax:    +46 8 719 3988
SWEDEN

Sometimes a majority only means that all the fools are on the same side!


From dscatl!emory!wuarchive!uunet!mcsun!ukc!slxsys!ibmpcug!miclon!miclon!nreadwin Mon Oct 29 12:09:18 EST 1990


In article <1208@bbxsda.UUCP>, scott@bbxsda.UUCP (Scott Amspoker) writes:
|> Since folks don't toggle machine instructions in through the front panel
|> anymore, octal no longer seems very useful.

 Lest the *really* young people here get the wrong impression, it is not
 only on machines with front panel switches that this is useful - it is
 handy on PDPs with ROM debuggers. When you find some dumb error you just
 patch in a goto (some piece of free memory) and then deposit your
 corrective code into the spare memory. The PDP instruction set is simple 
 enough to make reading octal dumps and writing in machine code practical.
 It's a lot quicker than reassembling everything.
 
 Disclaimer: 818  Phone: +44 71 528 8282  E-mail: nreadwin@micrognosis.co.uk


From dscatl!wcieng!emory!wuarchive!cs.utexas.edu!uunet!mcsun!hp4nl!charon!dik Mon Oct 29 12:10:26 EST 1990

In article <MEISSNER.90Oct24145531@osf.osf.org> meissner@osf.org (Michael Meissner) writes:
 > Many of the earlier computers used wordsizes that were divisible by
 > three (36 bits in the DEC-10, 12 bits in the DEC-8, etc.).  I think
 > the System/360 from IBM was probably the machine that started us on
 > the power of two series (and 8 bit bytes).  If it wasn't the first, it
 > was certainly the head of the pack.  I also associate hex with IBM
 > (particularly with ABEND's).
 > 
Yes (although another article in this thread points to a very early use of
HEX, and let us also not forget the articles just a week ago about strange HEX
notation used in very old machines).  However, there is reportedly one machine
that had a wordsize divisible by 3 just because the designer loathed HEX
notation.  That is of course the CDC Cyber designed by Seymour Cray (see some
articles in the Annals of the History of Computing of some years ago).  And
to this day Cray systems use octal (although they have 64 bit words).
--
dik t. winter, cwi, amsterdam, nederland
dik@cwi.nl


From dscatl!wcieng!emory!swrinde!ucsd!ucbvax!agate!shelby!morrow.stanford.edu!news Mon Oct 29 16:58:54 EST 1990

In article <70205@lll-winken.LLNL.GOV>,
howell@grover.llnl.gov (Louis Howell) writes:
>I'm a relative newcomer to this game, to the extent that I've never used
>a machine with a non-power-of-two wordsize.  Since I've always used 8 bit
>bytes, 16 or 32 bit integers, and 32 or 64 bit floats, I've always seen
>hexadecimal as the natural computer-oriented base, and octal as a bit of
>a brain-damaged perversion.  As I learn more about the history of
>computing, though, it's becoming apparent that the switch from octal to
>hex was a fairly recent phenomenon.
>...
>So, old timers, do I have the history about right?  Did hex come into
>common usage as recently as the late 70's?  Does anyone still use octal
>for any reason other than to adhere to earlier standards?  Was it just
>inertia that kept the octal system from going out of fashion along with
>6 bit characters?

To answer the last question first, yes.  (I'm sure I'll get flamed
good for this comment)

Octal originated when 6-bit characters were common.  Most early
machines (IBM and others) used 6-bit characters, and were word (not
byte) oriented.  For example, IBM 704/709/7040/7090 with 36-bit
words, 6-bit BCD character code; Burroughs B5000 with 48-bit words;
various CDC machines with 60-bit words.  Octal was an obvious choice
for these machines.

When IBM introduced the System/360, they also introduced 8-bit bytes
(EBCDIC character set), and byte-oriented machines (4 bytes to a word,
8 to a double word, 2 to a halfword).  To store the data they had
9-track tapes (8 bits plus parity) replace 7-track (6 bits...) tape.
They also coined the mixed-root word "hexadecimal" (Greek+Latin),
presumably because the correct "sexadecimal" would have produced the
contraction "sex numbers".  I remember being somewhat annoyed at this
new, obscure number system, but as you pointed out it is the obvious
way to deal with 8-bit quantities (particularly since in the IBM
system some operations [decimal] dealt with half-byte quantities).
I quickly learned to accept hex as normal, and regard octal as a
barbarism [yes, this is a religious issue].

Well, in spite of all the flames about IBM and its architectures,
the world seems to have gone to 8-bit bytes and byte-oriented
machines (cf. DEC 20, 36-bit words-->VAX, 8-bit bytes...).  Almost
everyone agrees that you need 8 bits to define a character set,
ASCII is extended to an 8-bit code, and so on.  So I can't think of
any reason, except inertia.


From dscatl!wcieng!emory!wuarchive!usc!elroy.jpl.nasa.gov!turnkey!orchard.la.locus.com!prodnet.la.locus.com!dave Tue Oct 30 10:48:08 EST 1990

In article <70205@lll-winken.LLNL.GOV> howell@grover.llnl.gov (Louis Howell) writes:
>Gill's "Machine and Assembly Language Programming of the PDP-11" (1978,
>revised 1983), for instance, uses octal extensively and doesn't mention
>hex at all.  This despite the fact that the PDP-11 has 16 bit words and
>addressable 8 bit bytes.
>...
>After all, if you're going to use a non-decimal base, why not use the
>one best-suited for the job?

You've found the answer yourself.  For purposes of understanding the
PDP-11 architecture and machine language, octal is far superior to
hex.  To understand this requires only looking at and understanding
the instruction set layout.

Approximately, from memory, speaking of most (but not all) instructions:

The PDP-11 has 8 registers and 8 addressing modes.  Instructions with
one operand (say, INC) use 6 bits of the instruction word to specify
that operand, 3 for the mode and 3 for the register.  Instructions
with two operands (say MOV) use 12 bits of the instruction word to
specify the modes and registers of those two operands.  The 6 or 12
bits always fall at the low end of the instruction word.

If you're a CPU architect (thus writing the Processor Handbook), or
someone who likes to write and enter programs from the front panel
or examine memory dumps, it's *real* easy to do in octal, and would
require much more memorization in hex.

For example, the MOV instruction is 01xxxx.  If I want to move
from R3 to R5, it's 010305.  (Mode 0 is register direct.)  If I
want to move immediate to a register, I use the autoincrement mode
of R7 (which serves as the program counter) for the source, and
register direct for the destination.  012705, 123456 will move the
immediate constant 123456 to register 5.

By organizing the opcodes into groups of three bits rather than
groups of four bits, you only have to memorize a few opcodes,
8 addressing modes, and the special purposes of two of the
eight registers, and you can easily construct and decode machine
instructions.  In hex you would have to memorize all the possible
instructions in all the possible modes and registers, or else
end up extracting the three bits you want out of misaligned hex
digits all the time.
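
A C sketch of that arithmetic (the helper name is mine): building an
instruction is just stacking the octal fields, and %06o prints the result
in exactly the form used above.

    #include <stdio.h>

    /* Assemble a PDP-11 two-operand instruction from its octal fields. */
    unsigned asm2(unsigned op, unsigned ms, unsigned rs,
                  unsigned md, unsigned rd)
    {
        return (op << 12) | (ms << 9) | (rs << 6) | (md << 3) | rd;
    }

    int main(void)
    {
        /* MOV R3,R5: opcode 01, both operands in mode 0 */
        printf("%06o\n", asm2(01, 0, 03, 0, 05));    /* prints 010305 */
        /* MOV #n,R5: source is mode 2 (autoincrement) on R7, the PC */
        printf("%06o\n", asm2(01, 02, 07, 0, 05));   /* prints 012705 */
        return 0;
    }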

For similar reasons the 8080 instructions should also have been
documented in octal.

Dave
dave@locus.com
-- 
	The U.S. constitution has its flaws, but it's
	a damn sight better than what we have now!