
What happens when an Altair hits an octal number higher than 377?

falter

I've been working on a video about my Altair and goofed. I was doing the math program from the example guide and added 1+1, which worked swimmingly. Then I thought I'd try adding 2 and 3. Unbeknownst to me, I had accidentally left the leftmost data switch on (from when I was examining memory address octal 202) and deposited 202 and 203 as the numbers to be added. This would equal 405, but the Altair only goes up to 377 - so it came up with 005 on the data display. Just a curiosity question: what did it do here to arrive at that result? What happens to numbers that go over 377?
 
377 octal is 255 decimal - the highest value that one byte can hold. If you have it at 377 octal and add 1 to it, it will roll over to 0. Not sure why it came up with 005. If you need to hold a number larger than 8 bits, you need more bytes. 2 bytes can hold 16 bits - 2^16 is 65536, or 0-65535 decimal.
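
If it helps to see the wrap-around outside the front panel, here's a little C sketch (not from the thread, just an illustration using the same octal values):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a = 0377;        /* 255 decimal, the largest value a byte can hold */
    uint8_t b = a + 1;       /* rolls over to 0 */

    uint8_t x = 0202;        /* 130 decimal */
    uint8_t y = 0203;        /* 131 decimal */
    uint8_t sum = x + y;     /* 261 doesn't fit, so only the low 8 bits remain */

    printf("0377 + 1    -> %03o octal\n", b);    /* 000 */
    printf("0202 + 0203 -> %03o octal\n", sum);  /* 005 */
    return 0;
}
```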
 
Thank you. I'm just trying to explain what happened there. That's one of the things I'm sometimes not wild about with the Altair - the dual purpose data/address entry switches. So many times I go to examine an address to enter data there, and then miss that a switch is on that shouldn't be when I enter the data.. heh. :) I'm used to machines like the OSI where data and address are separate.

I'm guessing it added the 3 and 2 and just dropped the 4.
 
I'm guessing it added the 3 and 2 and just dropped the 4.
That is exactly what happened. However, the "4" is not really dropped. When the 8080 processor added your two values totalling more than 377 octal, it would have set its "carry bit". Additional 8080 instructions exist to test the status of the carry bit, which a fully conceived 8080 math program would use to show you the correct result when adding two numbers that total more than 377 octal.
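
Roughly what a carry-aware program does, sketched in C (this is only a simulation of the idea, not actual 8080 code; on the real chip you'd use ADC or similar to fold the carry into the next byte):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t a = 0202, b = 0203;

    unsigned wide  = (unsigned)a + b;    /* do the add with room to spare      */
    uint8_t  low   = wide & 0xFF;        /* what the accumulator ends up with  */
    uint8_t  carry = wide > 0xFF;        /* what the 8080's carry flag records */

    /* A two-byte result: the high byte starts at 0 and absorbs the carry,
       much as an ADC instruction would add it in on the 8080. */
    uint8_t high = 0 + carry;

    printf("low byte : %03o, carry: %d\n", low, carry);  /* 005, carry 1 */
    printf("full sum : %o octal (%u decimal)\n",
           ((unsigned)high << 8) | low,
           ((unsigned)high << 8) | low);                  /* 405 octal, 261 decimal */
    return 0;
}
```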
 
This is why octal sucks for 8-bit CPUs. Three octal digits can represent 9 binary digits (i.e., numbers up to 511 decimal, unsigned), so it might not be as obvious when you overflow as it would be with hex. (I.e., with octal representation you might fool yourself into thinking that you can add any two numbers up to 377 together without special pleading, because on paper the result will still be a three-digit number. With hex it's obvious that when adding any two arbitrary 2-digit numbers there's a 50% chance you're going to have to deal with a carry, since the result is also two digits.)
 
I'm guessing it added the 3 and 2 and just dropped the 4.

Totally. If you happen to use a Mac, the built-in Calculator app has an octal mode (first View->Programmer, and then move the slider under the "display" to "8"). It also shows binary at the same time, so I find this super useful sometimes for octal and hex manipulation and conversions.

When I put it into octal mode and add 202 and 203, the result looks like this:

[Attached screenshot: macOS Calculator in octal mode showing 202 + 203]


You can see that there was a carry (bit "8"-- the lowest bit in the next higher byte-- is set), and the bits left in the low byte are 00000101, which is your octal 005.

As @hmb says above, this addition in the 8080 (or any 8-bit CPU) would "roll over" (you see 5) and set the Carry flag, which can be examined for branching or chaining math if needed. Agree with @Eudimorphodon: it's yet another reason why hex quickly took over from octal in 8-bit land.
 
The 8080 can also natively manipulate (add and subtract) 16-bit quantities (consider the DAD, INX and DCX instructions; the latter two are actually handled by the incrementer, not the ALU--but that's another topic).
What makes octal ugly on 8-bit architectures is the representation of 16+ bit quantities. For example,
2345 in hexadecimal is 0010 0011 0100 0101. How do you represent that in octal? Two systems are used that I'm aware of--the first is to simply treat the whole 16 bits as an octal number, 021505, but then where is the division between the two 8-bit halves? The other system treats the 16-bit number as two octal quantities, which would be 043 105, but then the relationship between the two halves is broken.
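
To make the two conventions concrete, a quick C sketch using the same 2345 hex example (just an illustration):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t word = 0x2345;          /* 0010 0011 0100 0101 in binary */
    uint8_t  high = word >> 8;       /* 0x23 */
    uint8_t  low  = word & 0xFF;     /* 0x45 */

    /* Convention 1: the whole 16 bits as one octal number */
    printf("whole word : %06o\n", word);            /* 021505 */

    /* Convention 2: each 8-bit half as its own octal number */
    printf("split bytes: %03o %03o\n", high, low);  /* 043 105 */
    return 0;
}
```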
 
Even the Windows 10 calculator will do the overflow rotation. Isn't it wonderful that many of the informational fields are secretly buttons?
[Attached screenshot: Windows 10 Calculator showing the octal overflow]
 
What makes octal ugly on 8-bit architectures is the representation of 16+ bit quantities. For example,
2345 in hexadecimal is 0010 0011 0100 0101. How do you represent that in octal? Two systems are used that I'm aware of--the first is to simply treat the whole 16 bits as an octal number, 021505, but then where is the division between the two 8-bit halves? The other system treats the 16-bit number as two octal quantities, which would be 043 105, but then the relationship between the two halves is broken.

Yep. Octal made plenty of sense when minicomputers and mainframes commonly had 12, 18, and 36 bit words, but it doesn’t mesh that well with 8-bit bytes or multiples thereof.
 
You forgot 24, 48 and 60 bit words (various CDC mainframes). At one time, I worked with both octal and hex simultaneously: 64-bit hex on the STAR and 60-bit octal on the Cyber. You get used to it. Oddly, the STAR was bit-addressable (48-bit addresses), with indexes being byte, halfword (32 bit) and word (64 bit). So a word index meant shifting left 6 bits mentally and adding it to a 48-bit base address. You get to be pretty good at mental binary arithmetic. Bit addressing made sense because the STAR was a vector machine, so you had bit vectors for both control and sparse applications. The 256-word register file occupied the low address part of user space.
 
Of course, the CPU works in BINARY. So if you do all of the maths in 8-bit binary, you won't have this problem...

Decimal, hexadecimal, octal etc. are all 'invented' by humans...

The other way to look at it is modulo arithmetic (which you may have learned back in school). Any 8-bit operation within the CPU works modulo 256 (decimal), 100 (hexadecimal) or 400 (octal).

I am ignoring the carry flag in the above description of course.
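
For instance (a rough C sketch of the modulo view, using the same numbers from earlier in the thread):

```c
#include <stdio.h>

int main(void) {
    unsigned a = 0202, b = 0203;      /* octal constants, as set on the panel */
    unsigned sum = (a + b) % 256;     /* any 8-bit operation is modulo 256    */

    /* 256 decimal == 100 hexadecimal == 400 octal: same modulus, three names */
    printf("(0202 + 0203) mod 400 octal = %o octal = %u decimal = %X hex\n",
           sum, sum, sum);            /* 5 in every base */
    return 0;
}
```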

Dave
 
And then what about negative numbers?...

By 'convention' a byte can hold a (decimal) number between 0 and 255. But this is an unsigned value. It could be a signed value in the range -128 to +127. Or it could be a number in Binary Coded Decimal (BCD).

So, you have to define the context of the value that the 8-bit binary number represents.
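
A quick C illustration of that point -- one bit pattern, three readings (the BCD reading here assumes packed BCD, one decimal digit per nibble):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t byte = 0x99;   /* 1001 1001 -- one bit pattern, several meanings */

    printf("unsigned : %u\n", byte);             /* 153                      */
    printf("signed   : %d\n", (int8_t)byte);     /* -103 (two's complement)  */
    printf("BCD      : %u%u\n",                  /* 99 (one decimal digit    */
           byte >> 4, byte & 0x0F);              /*     per 4-bit nibble)    */
    return 0;
}
```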

This has all probably confused you too much now Falter!

Dave
 
Eh, what's the problem?

202 + 203 = 405

So 1000 0010 + 1000 0011 = 1 0000 0101 in binary, so you get 0000 0101 in the byte, which is 5 (in decimal or octal), and the processor would set whatever flag it uses to indicate the overflow.

Or am I missing something?
 
I still use my well-made, but relatively inexpensive for the time, Casio CM-100 (from 1986).

Setting bit size=8, octal 202+203=
[Attached photo: Casio CM-100, bit size 8, octal 202 + 203]
Note the carry flag.

Setting bit size=16, octal 202+203=
[Attached photo: Casio CM-100, bit size 16, octal 202 + 203]
 
So, you have to define the context of the value that the 8-bit binary number represents.
Of course. The argument that hex is better than octal is really just about what makes the binary pattern that’s actually lurking in the machine’s memory most conveniently consumable for humans, since we find it an error-prone chore to iterate across numbers with eight places. (Although if you’re poking stuff into a front panel I guess there’s a case for it, no translation needed.) Hex wins here because an 8 bit number *always* fits in two digits regardless of its type and every possible value for those two digits is a “real” 8-bit number. The problem with Octal is obvious: you need 3 digits to do the same job, but the leading digit can only ever be 0-3. 4 is a lie.

Or it could be a number in Binary Coded Decimal (BCD).

Hex especially wins here (assuming it's unpacked BCD), because then each hex digit can just be read as decimal. A BCD value in octal is garbage. (*)

(* This is why IBM’s EBCDIC makes a lot of sense, assuming you’re willing to go all in on BCD for your numbers. If something isn’t a number it has a letter in the hex dump.) ;)
 
Personally I'd disagree with some of the anti-octal views expressed here. In the 8080 and Z80 it actually makes a lot of sense since many opcodes have 3-bit register-select fields which align much more logically with octal than with hex :)
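
To illustrate (a small C sketch, assuming the usual 8080 MOV encoding of 01 DDD SSS with the registers B,C,D,E,H,L,M,A numbered 0-7):

```c
#include <stdio.h>
#include <stdint.h>

/* 8080/Z80 register numbering as used in the opcode fields */
static const char *reg[] = { "B", "C", "D", "E", "H", "L", "M", "A" };

int main(void) {
    uint8_t opcode = 0170;            /* MOV A,B -- 0x78 in hex               */

    /* In octal the three digits line right up with the fields:               */
    /* high digit 1 = "MOV", middle digit = destination, low digit = source.  */
    unsigned dst = (opcode >> 3) & 7;
    unsigned src = opcode & 7;

    printf("%03o (hex %02X) = MOV %s,%s\n", opcode, opcode, reg[dst], reg[src]);
    return 0;
}
```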
 
And then what about negative numbers?...

Depends on whether the system had ones' or two's complement arithmetic. CDC and Univac stuck with ones' complement for quite some time. There are some advantages--e.g., the range is symmetrical, +127 to -127 for 8 bits, and the negative value of a number is simply its bitwise complement. And one big disadvantage--there were two zeroes, negative and positive. In practice this turned out to be not quite the burden that one might expect. CDC used a subtractive algorithm for addition, so the only case where negative zero popped up was adding -0 to -0. -0 + +0 = +0 and +0 + +0 = +0. Addition involved an end-around carry; that is, if there is a carry from the high-order bit, the sum is incremented by 1.
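
For the curious, a toy C sketch of 8-bit ones' complement addition using the textbook end-around carry rule (not modelled on CDC's subtractive scheme described above):

```c
#include <stdio.h>
#include <stdint.h>

/* negate: in ones' complement, -x is just the bitwise complement of x */
static uint8_t neg(uint8_t x) { return (uint8_t)~x; }

/* add with end-around carry: a carry out of the top bit is added back in */
static uint8_t add1c(uint8_t a, uint8_t b) {
    unsigned s = (unsigned)a + b;
    if (s > 0xFF) s = (s & 0xFF) + 1;   /* the end-around carry */
    return (uint8_t)s;
}

int main(void) {
    uint8_t plus2  = 2;
    uint8_t minus1 = neg(1);            /* 0xFE */

    printf("2 + (-1) = 0x%02X\n", add1c(plus2, minus1));    /* 0x01, i.e. +1  */
    printf("-0 + -0  = 0x%02X\n", add1c(neg(0), neg(0)));   /* 0xFF, still -0 */
    return 0;
}
```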

We like to think of BCD as 4 bits in 8421 positions, but early systems used very different representations, e.g. 2-of-7 bits (05 01234) on the IBM 650: exactly 2 bits set for any decimal digit, which makes for self-checking.
There were many other variations, such as 5321 (Univac SS).
 
I've heard the argument many times that octal is better on the x80 CPUs because it reflects the grouping of opcode and operands (i.e. 8 registers). But there are a lot of instructions where that's of little use--and the common instructions are easily memorized. C3 will always be an unconditional jump to me (or a near return on an x86). If you can't mentally translate 8 bits of hex to octal, you need practice.

It's noteworthy that Intel itself didn't use octal in its instruction discussions or tools.
 