
What if IBM didn't choose the Intel CPU!

Who knows--had IBM kicked the idea around another year or two, the choices might have been quite different. But to be very clear--IBM was not the first vendor to offer an 8086-based personal computer.
Oh, not at all - but they were the ones who made it (however unintentionally) the de facto standard.

I dunno, I guess I just get irked by this notion that the x86 architecture descended fully-formed from the heavens to the sound of an angelic choir and lo, the voice of Charles Babbage spoke: "This is my beloved CPU, in which I am well pleased." Comes from hanging around the Amiga crowd and hearing all the people fantasizing about how it could totally take back the market it never won in the first place if it just became an x86-based Linux PC like absolutely everything else.
 
If you look at the 5150 and its descendants, the peripheral chips, but for the bus controller and clock generator, were all 8080- or 8085-generation parts (i.e., the 8259, 8237, 8253, and 8255)--and that was the point. At least IBM had the common sense not to use the 8275 CRT controller.

I've long suspected that if one wanted to restrict memory to 64K, it should be possible without too much trouble to replace the 8088 with an 8085 in a 5150. You'd have to do some kludging for booting and accessing video RAM, but it shouldn't be too hard, cycle-wise.

The original 8086 assembler ran on an 8080-equipped MDS-80 dev system. One of the first products was an 8080-to-8086 source-level translator. I recall that translation was phenomenally slow--a 1000-line program was enough to kill the thing. But the point was that the whole support setup was in place and ready to go long before the PC.
 
I've thought of converting some 8080 programs to 8086 and it didn't look too hard -- 8086 has more registers and opcodes, and if you set CS=DS=ES then there probably isn't very much actual work to do. Any thoughts on why the 8080 to 8086 translator you used was so slow?
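For the straightforward stretches of code, the mapping I had in mind is almost mechanical--something like this (just a rough sketch off the top of my head; 'buffer' and 'result' are made-up labels, and I'm assuming the usual 8080-to-8086 register correspondence of A->AL, B/C->CH/CL, D/E->DH/DL, H/L->BX):

    mov  al, 5               ; MVI  A,5
    mov  bx, offset buffer   ; LXI  H,buffer    (HL -> BX)
    mov  al, [bx]            ; MOV  A,M         (M  -> [BX])
    add  al, ch              ; ADD  B           (B  -> CH)
    mov  result, al          ; STA  result

Line for line it doesn't look like much work, which is why the slowness puzzles me.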
 
Yeah, one of the things it tried to do was to keep the flags register behavior the same. On top of that, the only conditional jumps on the 8086 are short relative jumps--and there are no conditional returns from subroutines. As our test, I used an easily-verifiable floating-point math package--of course, it was 8080/8085-optimized like crazy, including plugging values into immediate instructions--the objective was speed, after all. It was about 3000 lines of very nasty code.
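The conditional returns and the out-of-range conditional jumps are where the translated code balloons. Roughly like this (a sketch only--the labels are invented):

    ; 8080:  RZ                  (return if zero)
            jnz  no_ret          ; invert the condition...
            ret                  ; ...and return around it
    no_ret:

    ; 8080:  JZ   far_target     (target more than 127 bytes away)
            jnz  skip            ; Jcc on the 8086 is short-relative only,
            jmp  far_target      ;  so the long case becomes an inverted jump
    skip:                        ;  around an unconditional JMP

Every one of those costs extra bytes and an extra branch.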

"Fast Eddie", our sales guy offered to run a conversion and verification test at the local sales office and buy us lunch. We had a nice meal and started the job on the ISIS-II MDS 200 series there--they even had a hard disk, which was an outrageously expensive option for an MDS. Well, Ed ended up buying us dinner too and the darned thing was still crunching when we left for the night. It still wasn't done the next morning and Eddie, sensing disaster, said he'd get back to us. About 2 weeks later, after the Intel software guys had a look at the translator, Ed returned with the translated program. It was about 50% larger in size than the original 8085 version, which sort of went counter to Intel's claims for the translator.

It could be that Intel's implementation language at the time was almost exclusively PL/M, or it may just have been a sloppy implementation. We eventually decided to rewrite our code from scratch.

One thing that really bothered me about the 8086 was the awkwardness of handling data structures larger than 64K. I was surprised that Intel didn't implement a "segment add" instruction, or even a "shift 4 bits in one cycle" instruction.
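Just to show what you end up coding by hand instead--a sketch of stepping a far pointer in DS:SI forward by the byte count in AX (my own register choices; it assumes SI is kept normalized to 0..15 and the count is under 64K minus a paragraph, so the 16-bit add can't carry out):

    add  si, ax          ; bump the offset
    mov  ax, si
    mov  cl, 4
    shr  ax, cl          ; offset -> paragraphs; no nibble shift, so it's
                         ;  either this or four single-bit shifts
    mov  dx, ds
    add  dx, ax          ; the "segment add", done the hard way
    mov  ds, dx
    and  si, 0Fh         ; re-normalize the offset to 0..15

Eight instructions and three scratch registers for what one opcode could have done.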
 

God, yes. When I'm optimizing something for speed I almost always run into these scenarios and curse them. My solution is to use a table of segments where applicable for the former, and XLAT for the latter. I usually don't have BX free so I burn up 256 bytes in the code segment and use CS: XLAT for the shift (even with the override it's still way faster than SHx REG,CL).
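Something like this is what I mean--a sketch with a made-up table name, assuming a .COM-style program where the code segment is writable so the table can be filled once at startup:

    shr4tab db 256 dup (?)       ; shr4tab[i] = i SHR 4 (parked out of the execution path)

            xor  bx, bx          ; one-time fill
    fill:   mov  al, bl
            mov  cl, 4
            shr  al, cl
            mov  cs:shr4tab[bx], al
            inc  bl
            jnz  fill            ; 256 entries, then BL wraps back to zero

            ; in the hot path: AL = AL SHR 4 without needing CL at all
            mov  bx, offset shr4tab
            cs: xlat             ; AL = CS:[BX+AL]; depending on the assembler this is
                                 ;  written 'xlat cs:shr4tab' or an explicit 2Eh prefix byte

With the table built once, the lookup beats loading CL and doing SHR AL,CL every time through the loop.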
 
Another pain in the rear was doing DMA on the 5150 (and the 5170 and its ISA successors) that's related to the aforementioned problem. The issue is that the 8237 DMA controller has only a 16-bit address space; the upper address bits come from a separate page register that the 8237 never touches, so a transfer that crosses a 64KB boundary wraps back to the beginning of that 64K physical memory bank. So you have to test whether your DMA transfer crosses that boundary if you value your system's data. The 5170 implemented 16-bit DMA by shifting the 8237 addresses left one bit, so there the magic boundary is 128K.
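The test itself is only a handful of instructions; the annoyance is what you have to do when it fails (split the transfer, or copy through a bounce buffer below the boundary). A sketch, with my own register assignments--ES:DI is the buffer and BX is the byte count (assumed to be at least 1):

    mov  ax, es
    mov  cl, 4
    shl  ax, cl          ; low 16 bits of segment*16 -- the only bits the 8237 sees
    add  ax, di          ; AX = physical start address mod 64K
    dec  bx
    add  ax, bx          ; does the last byte land in the next 64K page?
    inc  bx
    jc   must_split      ; carry set: the 8237 would wrap, so split the transfer
                         ;  or bounce-buffer it ('must_split' is a made-up label)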

Intel did introduce the 8089 I/O controller that could handle 20 bits of address, but it implemented only 2 DMA channels and required its own program space--and it was very expensive compared to the 8237.

I don't believe that the PS/2 Microchannel systems have this issue, as they used their own IBM-unique DMA controller.

When you get memory virtualization involved, where the logical address of a buffer in memory may bear no relationship to its physical address, things get interesting.
 
EISA systems don't have that DMA issue either, I think.
 