
Why did DOS/86 overtake CP/M-Z80?

FWIW, here’s another place where the “Z80 = CP/M” framework you’ve adopted here breaks down. “Proper” portable CP/M software isn’t written for the Z80; it targets the 8080, and while a few programs violated that rule late in the life of the platform, for the most part it stuck. (I’m not going to count programs that were really written specifically for Amstrad CPCs or MSX or whatever, because those are obviously *not* portable CP/M programs.)

The Z80 is binary backwards compatible with the 8080. Great. But it’s also not really better than an 8080 unless you take advantage of its enhancements, and then, BAM, you’re not actually writing according-to-Hoyle CP/M programs anymore, are you? The 8088 is *not* binary compatible… but Intel went to significant lengths to make it assembly *source* compatible. If you machine-translate 8080 source to x86 the results aren’t as good as if you sat down and rewrote it from scratch, sure, but for a quick and dirty start of a port *it worked mostly fine*. So… this seems obvious, doesn’t it? If enhancing for the Z80 breaks backwards compatibility between new programs for your computer and the legacy platform it’s supposedly compatible with, why not just rip the bandaid off and go to an even more capable CPU that makes it easy enough to bring stuff forward that a lot of software just needs to be reassembled? Because of the nature of CP/M, moving to a new computer always needed patching for the video terminal and so on anyway…
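To illustrate the source-compatibility point: Intel published a fixed 8080-to-8086 register mapping (A to AL, HL to BX, and so on) that translators like CONV86 relied on. Here is a toy Python sketch of that mapping; the helper function itself is purely illustrative:

```python
# Intel's documented 8080 -> 8086 register mapping, as used by the
# MOV-for-MOV source translators. The translate helper is a toy.
REG_MAP = {
    "A": "AL", "B": "CH", "C": "CL",
    "D": "DH", "E": "DL", "H": "BH", "L": "BL",
    "M": "[BX]",   # 8080 "M" means memory at HL; HL maps onto BX
}

def translate_mov(src_line: str) -> str:
    """Translate an 8080 'MOV dst,src' line to 8086 syntax."""
    op, args = src_line.split(None, 1)
    assert op.upper() == "MOV"
    dst, src = (a.strip().upper() for a in args.split(","))
    return f"MOV {REG_MAP[dst]},{REG_MAP[src]}"

print(translate_mov("MOV A,M"))   # MOV AL,[BX]
print(translate_mov("MOV B,C"))   # MOV CH,CL
```

Real translated code still needed hand-fixing around flags and 16-bit ops, which is exactly the "not as good as a rewrite, but fine for a quick port" situation described above.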

I dunno, it’s almost like Intel knew what they were doing or something.

You may be overthinking my original premise. It's not a Zilog/Intel thing in my mind - it's more a PC vs CP/M thing, and I would probably lump the Z80, 8080 and 8085 together in a single high-level architecture... And yes, I'm probably one of the only writers of such Z80-only CP/M OS software, since I use the index registers a lot. Most of my early Z80 code was probably nearly entirely 8080 compatible, but when I wrote my first emulator as part of the Loki project, it reminded me about all of the other instructions, and I spent enough time with them to want to use them. So my current code is Z80 CP/M compatible and not really CP/M conforming.

The discussion has brought up some interesting thoughts.

There is a thought-experiment what-if article about this at http://www.desertpenguin.org/blog/what-if-the-zilog-z180-z280.html that I think is a good read, even if it does gloss over the 68000 and 6502 and all the systems based on them. The 68000 was and is a great chip, and really should have been the basis for the IBM PC, but it was not to be. The 8088 was just too similar to the 8085 and its associated peripheral chips... but the reasons IBM went 8088 are better documented elsewhere, by experts with more knowledge of those circumstances than I have.

The thought experiment is interesting, but I think that had DRI produced PC DOS, PCs would no longer be around today... Maybe they would have held on until OS/2 came out. Then who knows what direction they would have gone.

I doubt Microsoft would have gotten into the OS business... The deal with IBM made them and set them down that path.

This is the standard alternate universe line and, sure, a 68000 based PC would have been great, but the resulting machine would have been gobsmackingly more expensive, and the chip wasn’t even really available in quantity yet when IBM went shopping. If the base price of the PC had been $5,000 instead of $1,595 and it came out a year later I’m definitely thinking we’d all be using something else today. Very possibly Macintoshes, or machines descended from Atari ST-style knockoffs.

What would have possibly made a difference, though its time was yet to come, was overclocking.

We didn't really see widespread overclocking before the 486 came out - not on any real scale that I can remember. People clocked their CPUs at whatever they were rated at.
But from 1985 onwards, 30 MHz could have been quite reliably achieved with the CMOS Z80, as has been demonstrated. That would have kickstarted the need for speed, and if you add a pipelined architecture, maybe something like the eZ80 would have come out around 1988.

It's interesting that this approach has only been proven in recent years. But at 30 MHz a Z80 would outperform a typical 12 MHz 286, and probably even a 16 MHz one. All for the cost of a cheap home computer.

Also, a good MMU strategy can easily do the same job as segmenting. All you would need is a jump table that sets the MMU to switch in the correct pages and makes the jump, plus a similar return strategy. This could have been implemented in an assembler as "LJUMP and LRET" with a known routine in stable memory space to make the jump. That's what I did to move my video drivers entirely out of the TPA: I preserve the AF pair, set the MMU update byte to reconfigure the main memory space, and then make the jump. I return the same way. In CP/M I have to preserve the stack as well, but that could be omitted if a shared stack segment was used. Even so, a long jump in such a table is only 8 bytes long, and the return only 5-7 bytes since it also has to restore the stack (an interrupt return is a bit longer). I store the common return at 0050 and 0055, and then I can install drivers at any RST (I use 0040 and 0048 since CP/M doesn't define this space).
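The scheme described above can be modelled in a few lines of Python. The PID register and process numbering come from the post; everything else here is a hypothetical sketch, not the actual Z80 code:

```python
# Toy model of the "LJUMP/LRET" scheme: a jump table entry records which
# process (bank set) a routine lives in, so callers never see the banking.
routines = {}          # name -> (pid, handler)
current_pid = 0        # models the MMU's current process selection
saved_pids = []        # models the preserved caller state

def install(name, pid, handler):
    routines[name] = (pid, handler)

def ljump(name):
    """Switch to the routine's bank set, run it, then 'LRET' back."""
    global current_pid
    pid, handler = routines[name]
    saved_pids.append(current_pid)   # like pushing AF / noting the old PID
    current_pid = pid                # like OUT (PID),A
    result = handler()
    current_pid = saved_pids.pop()   # like the common return hook at 0050
    return result

install("video_putc", 8, lambda: f"ran in process {current_pid}")
print(ljump("video_putc"))           # ran in process 8
print(current_pid)                   # 0 - caller's mapping restored
```

The point of the table is exactly as stated: the caller names a routine, not a bank, so code can be relocated between banks without patching callers.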

The Z80 would have been cheap and screamed along, enabling a new wave of software. Memory of the era would have needed quite a few wait states, but at that kind of speed even LDIR should eliminate the need for DMA, and it would have outperformed the Z80 DMA chip, which was only around 4 MHz then, IIRC.
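To put a number on the LDIR claim: LDIR costs 21 T-states per byte moved (16 on the final byte), so sustained throughput scales directly with clock. A quick back-of-envelope sketch, ignoring wait states:

```python
# Back-of-envelope LDIR throughput: 21 T-states per byte transferred
# (16 on the last iteration), so sustained rate is roughly clock / 21.
# Wait states, which the post notes would be unavoidable, are ignored.
def ldir_bytes_per_sec(clock_hz: float) -> float:
    return clock_hz / 21

for mhz in (4, 8, 30):
    rate_kb = ldir_bytes_per_sec(mhz * 1e6) / 1e3
    print(f"{mhz:>2} MHz Z80 LDIR: ~{rate_kb:.0f} KB/s")
```

At 4 MHz that is roughly 190 KB/s; at a hypothetical 30 MHz it is about 1.4 MB/s, which is the basis for the "LDIR could replace the 4 MHz DMA chip" argument.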

Though overclocking was talked about back then, it was still a long way from being accepted in 1985, IIRC.

As a better description, this is the code I use for installing a "Long Jump" dynamically. It's not as clean as segments, but a compiler or assembler would have no problems with it:

INTHOOK:                ; Code to hook the interrupt, install the device driver,
                        ; rebuild the handler and write it all to M:
        LD   DE,VIDEO
        LD   HL,INT40
        LD   BC,$08
        LDIR            ; Install RST40 hook.
        LD   DE,$0050
        LD   HL,RET50
        LD   BC,$05
        LDIR            ; Install RET50 return hook.
        LD   DE,$0055
        LD   HL,RET55
        LD   BC,$07
        LDIR            ; Install RET55 return hook.
        RET

INT40:                  ; Relocatable code copied to $0040 (VIDEO). 8 bytes.
        PUSH AF         ; Protect the A register.
        LD   A,$08      ; We're going to run video as process 8 (reserved).
        OUT  (PID),A    ; Switch to the video process.
        JP   $1000      ; Jump to the next segment at the 4K boundary
                        ; where the real handler lies.

; Map - $0000 to $0FFF - Handler, install routines, initialise routines.
;       $1000 to $1FFF - Routines.
;       $2000 to $2FFF - More routines.
;       $3000 to $3FFF - Video map (character based).

RET50:                  ; RET50 is 5 bytes exactly - the "return to previous
                        ; call" installed at $0050.
        OUT  (PID),A    ; Switch to the calling process.
                        ; (A contains the process - don't set it here.)
        POP  AF         ; Recover A (and maybe other registers).
        RET             ; And return to where we were called from.
; Note - return programs from 50 to 5B...
        NOP             ; So I can see a single 0.
; DO NOT CHANGE THE ABOVE....... below follows on.

RET55:                  ; RET55 is exactly 7 bytes.
        OUT  (PID),A    ; Switch to the calling process.
        POP  AF
        RETI            ; And the RETI version if it's called by an interrupt.
        NOP
        NOP
 
You are ignoring one aspect--production engineering. Intel had numerous second sources for its CPU and peripheral support chips, mostly due to vendors inheriting the overly-generous terms of the 8080 and 8085 licensing agreements. So IBM would never be caught up short on supplies. Indeed, many of the first 5150s rolling off the line had AMD or NEC 8088s in them. Intel could supply just about everything but the 6845 display controller--even the (back then) hard to get 16Kb DRAMs.

Technically, the Z8000 is a better chip, but Zilog was being mismanaged by Exxon, who knew nothing about the electronics biz and there was a lot of mistrust in their ability to carry anything through. I think that AMD and Siemens were the only interested second-sources; I don't know the state of their production facilities for the Z8000 at the time.

The 68K was new, with no second sources and not a lot of peripheral support chips. Later systems, of course, made good use of it (Amiga, Atari ST, Apple), but that was later.
 
But from 1985 onwards, 30 MHz could have been quite reliably achieved with the CMOS Z80, as has been demonstrated.

Find a Z80 CPU with a 1985 date stamp that’ll handle that (along with suitable memory and support chips that were cheap enough to actually do at a “home computer” price) before getting too invested in this idea. (I mean, seriously, why in the world would Zilog have binned the first CMOS Z80s at only 2.5 and 4 MHz if they could run at 30 MHz?)

IC manufacturing tolerances have improved a LOT in 40 years. What you can do today on a toy SBC with SRAM was *not* reasonable back then, certainly not for a production product. There were CMOS 8088s too; why do you assume that one built on the same equipment as a contemporary CMOS Z80 wouldn’t be able to be clocked similarly? There are late CMOS 8088-compatible descendants of the V40 that can also withstand clocks around 30 MHz, but they didn’t come out until the 1990s.

And I guess I need to ask again: why in the world does it make more sense to fudge around with elaborate external MMUs when you can just buy a CPU with a bigger address bus? Unless you have some emotional attachment to the old CPU or a *really compelling* need to maintain direct binary compatibility with a legacy system (which IBM did not have), it’s really difficult to see the logic here.
 
And I guess I need to ask again: why in the world does it make more sense to fudge around with elaborate external MMUs when you can just buy a CPU with a bigger address bus?

Price... And, as you pointed out, availability and reliable function. A Z280 was the obvious choice, but they were not an option back then. As for the overclocking, I'm working off statements made by another member of this forum who tested them and said the CMOS variants could handle it. It's not something I've validated, and he didn't put timelines around his testing, but if overclocking had been widespread back then, I imagine we would have seen them more in the 8 to 12 MHz range... And of course some of the signals get pretty short, so wait states for memory are a given. But it didn't happen on any scale that I'm aware of.

If some could be overclocked and it was "yield" based, there were manufacturers who made a living out of doing things like that in the 80s. Especially Sinclair.

But, whether I'm right or wrong, it's simply something that wouldn't have happened back then either way. I do intend to find some older CMOS Z80s at some time and give it a crack :) More out of curiosity than anything else. I want to know too. When I do get around to making up some hardware, I'm aiming initially at 14 MHz.

I originally wanted to target the Z280 for my design, but it's been pointed out to me a few times that it was only being talked about in 1985, not actually used. And my other project is limited to technology available in 1985 for delivery in 1986.

Also, you can do a *massive* amount more with an external MMU than an internal one, simply because the design criteria are different. I'm targeting a 2K fast SRAM as the MMU, so I can store up to 128 sets of MMU data with 4K banks. Hence I don't have to know which bank is in use, because it's dynamic, and a bank can be selected into the MMU from available banks at load time; only the PID references the banks, like a handle. A single OUT will switch the memory from 1000 up to EFFF in a single operation, for up to 56K of code, and the BDOS and zero page are still available as per normal CP/M. Add in the "jump" table and now there's a way to dynamically load in subroutines and call them without knowing which banks they are in. That's necessary for CP/M software to be able to break the 64K barrier for code without being directly aware of banks or the bank paging mode.

It's not as elegant as the 8086's segment system by a long shot, but it meets the minimum requirements to extend Z80 code execution outside of the TPA without being aware of the banking system. It also allows for banks and paging systems of different sizes and would still work without modification, since an external MMU also acts as an obfuscation layer. It would top out at around 256Mb of memory, but it works with all sizes in between and allows up to 240 executable blocks of 56K - 13.4 Mb of executable space... And it can be emulated in a processor with its own MMU, so it would support simpler architectures when the Z280 and onwards did come out.
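A small Python model of the translation step being described. The 128-contexts-by-16-pages layout follows the post (128 × 16 = 2048 entries, i.e. the 2K SRAM); the 8-bit bank numbers are an assumption for the sketch:

```python
# Sketch of the external-MMU idea: a 2K lookup RAM holds 128 contexts
# (PIDs) x 16 entries, one per 4K page of the 64K logical space. Each
# entry names a physical 4K bank; 8-bit bank numbers are assumed here.
PAGE_SHIFT = 12                       # 4K pages
mmu = [[0] * 16 for _ in range(128)]  # 128 * 16 = 2048 entries = 2K SRAM

def map_page(pid, logical_page, physical_bank):
    mmu[pid][logical_page] = physical_bank

def translate(pid, logical_addr):
    page = logical_addr >> PAGE_SHIFT
    offset = logical_addr & 0x0FFF
    return (mmu[pid][page] << PAGE_SHIFT) | offset

map_page(8, 0x1, 0x42)                 # process 8, page $1000 -> bank $42
print(hex(translate(8, 0x1234)))       # 0x42234
```

Selecting a new PID (the single OUT in the post) just changes which row of the table the hardware reads, which is why all 16 pages appear to switch in one operation.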
 
The Z-80 was $2.50 while the 8088 was about $20 in 1985. I think your MMU design will cost more than the 8088. 8-bit peripherals will cost same with either CPU. Unlikely that a more expensive yet slower option would be the eventual winner.
 
I get that you’re really invested in trying to post-hoc prove that your concept for some kind of super-Spectrum that ridiculously outspecs the computers we got in the real world at around the price point you’re imagining (MSX, Amstrad CPC) isn’t completely ahistorical, but… sorry, no. Amstrad was at least as skilled at squeezing blood out of a stone as Sinclair, if not more so, and it is just not realistic to pretend that by waving a magic wand they could have gotten four times the CPU speed and SRAM video in 1985 for the same money, if only they hadn’t all been too stupid to see it.
 
… FWIW, checking a Jameco ad in the back of a mid-1985 issue of Byte, in the timeframe we’re talking about mail-order retailers would sell you a 6 MHz Z80 for $9, or an 8088 for $15. $2.50 was a *2.5 MHz* Z80. Obviously these retail prices only let us guess at the quantity wholesale price to a manufacturer, but… does it even matter when they’re this close? Adding even a brain-dead pager like a pair of 74LS670s makes it a statistical tie once you’ve added enough RAM to need it. (256K of 200ns DRAM was around $80. So, seriously, we’re talking a couple percent difference in the price of a machine with as much RAM as a base PC compatible. Clearly the economic case for the Z80 sucks once you pass 64K.)

Let’s chase this to the logical conclusion. The same ad lists 8 MHz 68000s for $40. Sure, that’s four times as much as the Z80, but 512K of RAM is $160, so put the RAM+CPU together and the 68000 is only about 15% more expensive. And that’s ignoring all the rest of the bits you need to put the computer together, which likewise are going to cost about the same regardless of which you used. Figure the total cost for everything you need to finish the machine proper is another $150 (I’m counting at least one floppy disk drive here), tack a decent profit margin on top of that, and… bang, there you go: you have an Atari ST that you can sell as a hybrid 16/32-bit supermicro for around $750 with a mono monitor… or an 8-bit machine with a CPU from 1976 you’ll have to ask $700 for. Which do you think will be the easier sell?
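Running the post's own numbers (prices from the 1985 Jameco ad cited above; this is just that arithmetic, nothing new, and with these figures it lands nearer 18% than 15%):

```python
# Reproducing the post's CPU+RAM comparison using the prices it quotes.
z80_system  = 9 + 160          # 6 MHz Z80 ($9) + 512K of DRAM ($160)
m68k_system = 40 + 160         # 8 MHz 68000 ($40) + 512K of DRAM ($160)
premium = (m68k_system - z80_system) / z80_system
print(f"68000 premium on CPU+RAM: {premium:.0%}")
```

Add the roughly identical cost of the rest of the machine on top of both sides and the percentage gap shrinks further, which is the post's point.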
 
The Z-80 was $2.50 while the 8088 was about $20 in 1985. I think your MMU design will cost more than the 8088. 8-bit peripherals will cost same with either CPU. Unlikely that a more expensive yet slower option would be the eventual winner.
It wouldn't have been the winner... but it's a way to extend the operation and maintain the customer base, which would have been desirable back in the 80s. An MMU is a single small memory chip and a couple of 74LS374s between the CPU address bus and the memory bus, plus some decoding logic so it only shows up on the bus during memory access and not I/O access. It should be possible to fit it in within the cost of an 8088, and it would most likely be cheaper than a commercial external MMU chip, which tended to be on the expensive side. Static RAMs in smaller sizes were around 45ns back then - enough to build an MMU.

And yes, it is as @Eudimorphodon said: many manufacturers and customer bases were reluctant to give up... I'm just curious as to why no one really mounted any serious challenge, though, and this thread did bring up some good ideas there.

I get that you’re really invested in trying to post-hoc prove that your concept for some kind of super-Spectrum that ridiculously outspecs the computers we got in the real world at around the price point you’re imagining (MSX, Amstrad CPC) isn’t completely ahistorical, but… sorry, no. Amstrad was at least as skilled at squeezing blood out of a stone as Sinclair, if not more so, and it is just not realistic to pretend that by waving a magic wand they could have gotten four times the CPU speed and SRAM video in 1985 for the same money, if only they hadn’t all been too stupid to see it.

You may be right and I certainly wouldn't bet against you. I'll find out how close possible was at the end of the path.

In any case, I can't find details of any reasonably fast Z80 machines coming out during the era. Z80H-based machines did start appearing from 1985 onwards, but I can't find any evidence that anyone ever made an overclocked one... And no, I don't seriously believe they would have overclocked any CPU of the era, for the reasons aforementioned. :)

But of interest, the biggest factor discovered during this thread seems to be Microsoft. Without Microsoft nothing was going to happen as it did - and it seems very unlikely another company would have done it in their place had they not.

Which does leave me curious how much Microsoft also influenced early CP/M development, software-wise. Aside from MBASIC, it's pretty difficult to find out what they did make for CP/M, but their BASIC was hugely influential.
 
Let’s chase this to the logical conclusion. The same ad lists 8mhz 68000s for $40. Sure, that’s four times as much as the Z80, but 512K of RAM is $160, so put the RAM+CPU together and the 68000 is only 15% more expensive.

This is why, aside from superior graphics/sound/peripherals etc, the ST and Amiga did pretty well. I'm pretty confident IBM made the best possible choice at the time. Once Seagate added their 2c, there was no turning back.

But keep in mind the original PCs had just 16KB of memory, expandable to 64KB. The video card was VERY slow. It cost $1,500 without disk drives or a monitor, and they still started shipping around 40K units per month in 1982. There is nothing in that to say "Hey, we will get 640K one day..." because that's tiny... The average buyer wouldn't know what to do with 512K, or even what it was. It used to boot into BASIC if there was no disk drive present.

And while CP/M machines started dying out a year later, the PC just kept on accelerating, and by anyone's standards it wasn't the specs. It was around 1986 before high end ATs started to emerge and 1988 before they were moving into the home market.

The conclusion? IBM started it. Microsoft finished it.

I need to find some early reviews of the PC and see what the reviewers were saying about it. :)
 
Most of my early z80 code was probably nearly entirely 8080 compatible, but when I wrote my first emulator as a part of the Loki project, it really reminded me about all of the other instructions and I spent enough time with them to want to use them. So my current code is z80 CP/M compatible and not really CP/M conforming.
It might be worth pointing out that the new instructions and index registers added by the Z80 are substantially slower to use. Often, you are better off using them sparingly to get faster code. Given this mindset, keeping 8080 compatibility for a commercial product isn't too expensive.
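For concreteness, here are the standard Zilog T-state figures behind that claim, tabulated in a quick sketch. The instruction timings are the documented Z80 ones; the comparison code itself is just illustrative:

```python
# Representative Z80 T-state counts (standard Zilog timings): the
# IX/IY-prefixed forms pay a heavy premium over their 8080-era twins.
t_states = {
    "LD A,(HL)":    7,    # 8080-compatible form
    "LD A,(IX+d)":  19,   # Z80 prefixed equivalent
    "ADD A,(HL)":   7,
    "ADD A,(IX+d)": 19,
    "INC (HL)":     11,
    "INC (IX+d)":   23,
}
pairs = [("LD A,(HL)", "LD A,(IX+d)"),
         ("ADD A,(HL)", "ADD A,(IX+d)"),
         ("INC (HL)", "INC (IX+d)")]
for plain, idx in pairs:
    extra = t_states[idx] - t_states[plain]
    print(f"{idx:13s} costs {extra} extra T-states over {plain}")
```

Roughly 12 extra T-states per indexed access is why using IX/IY sparingly, as suggested above, keeps commercial code fast while staying 8080-clean.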

The 8088 is awkward to use with memory blocks larger than 64 KB, but the Z80 (or any other 8-bit system) is awkward to use with memory larger than 64 KB.

The original PC might only have shipped with very little memory, but it had room to grow. 8-bit systems did not, they were already at their limit. The future lay elsewhere.

Microsoft had most of their tools available for CP/M: Linker, Fortran, Cobol, Basic...
 
There are some pretty major boo-boos in that article.

There's a reason I called it a thought experiment. Z180 is not 16-bit, but in the alternate reality of this thought experiment, Z180 wasn't necessarily the Z180/HD64180 that we actually got, but the Z800 that should have been built instead of Z8000. Z8000, especially Z8002, got used in a lot of applications; I have a couple of multibus Z8002 SBCs here from an old machine controller.
(Arguably the best CPU accelerator ever made for the TRS-80 Model 4, the XLR8er, used the Z180/Hitachi 64180, and some people wrote some pretty interesting proof of concepts with it, like using the DMA controller to do extremely fast graphics updates on machines fitted with the optional graphics board.)

No disagreement there. Great to see Ian Mavric repro this as 4cellerator.
… but the Z180 *was not available in 1980*.

No, but Z8000 was. Had Zilog put out the super-Z80 customers wanted instead of the Z8000, this is the alternate reality explored here. But this lesson had to be learned over and over by companies, including Intel (Itanium, anyone? Or iAPX432? Or i860?)

Hrmph, just getting Z80 with a 16-bit ALU and the later design efficiencies of eZ80, where most instructions execute in a single cycle, would have been great.

But, as you say, Exxon was large and in charge.
This is the standard alternate universe line and, sure, a 68000 based PC would have been great, but the resulting machine would have been gobsmackingly more expensive, and the chip wasn’t even really available in quantity yet when IBM went shopping. If the base price of the PC had been $5,000 instead of $1,595 and it came out a year later I’m definitely thinking we’d all be using something else today. Very possibly Macintoshes, or machines descended from Atari ST-style knockoffs.
Or the followons to the Tandy 6000.
 
It might be worth pointing out that the new instructions and index registers added by the Z80 are substantially slower to use.
This is only true for prefixed instructions. The single most useful new instructions, in my opinion, are the relative jumps, which can dramatically speed up code. The most useful of those, to me at least, is the DJNZ instruction.
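Putting numbers on that: per the standard Z80 timings, DJNZ only barely beats the 8080-style pair on time, but it halves the byte count, which is where the relative jumps really pay off. A quick sketch (timings from the Z80 documentation; the arithmetic is illustrative):

```python
# Loop-control overhead per iteration: Z80 DJNZ vs the 8080-style
# DEC B / JP NZ pair, in T-states and in bytes.
djnz_taken   = 13        # DJNZ with the branch taken
dec_jp_taken = 4 + 10    # DEC B (4 T) + JP NZ (10 T)

djnz_bytes   = 2         # DJNZ d
dec_jp_bytes = 1 + 3     # DEC B + JP NZ nn

print("T-states saved per iteration:", dec_jp_taken - djnz_taken)
print("bytes saved per loop:", dec_jp_bytes - djnz_bytes)
```

So the speed-up from relative jumps in practice comes as much from tighter, cache-less instruction fetch and smaller code as from the raw cycle counts.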
 
This is only true for prefixed instructions. The single most useful new instructions, in my opinion, are the relative jumps, which can dramatically speed up code. The most useful of those, to me at least, is the DJNZ instruction.
The index registers *are* slower, but what you lose on the merry-go-round you gain on the ferris wheel. You don't end up juggling registers and the extra instructions that entails, because you have two more, and since CP/M is pretty heavily indexed for things like the DPB, DPH, etc., it makes the code VERY easy for someone to read as well.

I.e., you just define the offset and then you do things like LD A,(IX+SECTOR), which means your code is very easy to read. Operating systems don't always need the speed, but they do need to be simple and hierarchical.
Also, I use INDR as my "block" transfer for memory when I'm treating it like disk, and it's pretty clean and effective.
And I use the 16-bit I/O function that causes so much controversy when I mention it (OK, 8 bits and an 8-bit index...)... I am planning on hooking up A8 to A14 to a CF drive, though, in direct mode.

All nice z80 stuff. Except the extra register pairs. I decided to leave them alone entirely as far as the OS is concerned.
 
The conclusion? IBM started it. Microsoft finished it.
IBM gave users the confidence that the investment they made buying their IBM-PC was not going to be lost by the OEM going bankrupt any day soon, leaving behind an orphaned platform.

Microsoft was an extremely agile company which was just ready to ride any wave they saw coming (be it BASIC for 8 bit computers, Excel for Mac, Xenix, or DOS for the PC) and they were ready to buy and resell without remorse, in contrast to DRI which was fixed on their ways with their mono-product they had developed from scratch.

But that would have not sufficed, up to this point the IBM-PC was just a successful business machine while the home market was all 8-bit micros and 16-bit Amigas and the like.

But then Compaq came and cloned the IBM-PC, thus making it the universal platform, i.e., the platform for everybody.

So my conclusion is: IBM started it, Microsoft rode it, and Compaq finished it. And, in all of that, the CPU used was irrelevant to the resulting outcome.
 
I'll give it to MS--they've kept a remarkable level of compatibility throughout the lifetime of the x86 platform. Not so much for Apple--at best, you'll have to resort to emulation.
 
There are some pretty major boo-boos in that article. First off, he seems a bit muddled about what the Z180 actually is, referring to it as “basically a 16 bit CPU with 20 bit memory addressing, just like the 8086”. No, it’s not. I’d call it a badly constructed sentence where he’s trying to say it has 16 bit addressing inside a 20 bit physical space, but later he explicitly refers to an alternate history IBM debuting a “16 bit OS” for it sometime after apparently releasing the machine originally with “8 bit” CP/M. No. Just… no.

I mean, sure, that said you could probably still run with the argument that the Z180 is at least as capable as the 8088 and you could have in theory made a pretty good ”IBM PC” out of it; it’s faster per clock, sometimes at least, than the Z80, and in addition to the MMU (which, downsides, is kind of limited and confusing to use; it is *not* anywhere near as good as the 8086’s segment registers) it has a built in DMA controller that can do memory-to-memory transfers *way* faster than either the Z80’s or 8088’s string move instructions. If it’d been available in 1980 it could have made for a pretty compelling CPU option not just for IBM, but for, say, Radio Shack and other existing Z80 users…
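For readers unfamiliar with why the Z180 MMU gets called limited and confusing, here is a rough Python model of its translation scheme. The CBAR/BBR/CBR behaviour follows the Z180 documentation; the example register values are made up:

```python
# Rough model of the Z180 MMU: CBAR splits the 64K logical space into
# Common Area 0 / Bank Area / Common Area 1 at 4K granularity, and
# BBR/CBR add a 4K-page offset to reach the 1 MB physical space.
def z180_translate(addr, cbar, bbr, cbr):
    page = addr >> 12                 # logical 4K page, 0..15
    ba_start = cbar & 0x0F            # Bank Area start page (CBAR low nibble)
    ca1_start = cbar >> 4             # Common Area 1 start page (high nibble)
    if page >= ca1_start:
        base = cbr                    # Common Area 1 relocation
    elif page >= ba_start:
        base = bbr                    # Bank Area relocation
    else:
        base = 0                      # Common Area 0: identity mapping
    return (addr + (base << 12)) & 0xFFFFF

# Bank area from $4000, common 1 from $C000, bank mapped near $20000:
print(hex(z180_translate(0x5000, 0xC4, 0x1C, 0x00)))  # 0x21000
```

Three registers, three implicit regions, and additive rather than table-driven relocation: workable, but nothing like the uniform segment arithmetic of the 8086.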

(Arguably the best CPU accelerator ever made for the TRS-80 Model 4, the XLR8er, used the Z180/Hitachi 64180, and some people wrote some pretty interesting proof of concepts with it, like using the DMA controller to do extremely fast graphics updates on machines fitted with the optional graphics board.)

… but the Z180 *was not available in 1980*. Hitachi didn’t come up with it until 1985, with Zilog licensing it back and spitting out their version shortly thereafter. Obviously that’s way, way too late for any of this alternate universe fantasy. If Zilog had actually gotten the *Z800* out the door in 1983, when they’d first started floating it (as noted, Radio Shack believed them enough to make provisions for it in the Model 4, I would guess they were far from the only disappointed party) then, I dunno… honestly I think it still would have been too late overall to stop the PC wave, but it might have made for an interesting last crop of home computers. A Z800/Z280 based Amstrad CPC, for instance, would basically be as capable as an Atari ST; in fact, in this universe maybe the Atari ST might have been Z800 based. Again, probably wouldn’t necessarily have saved them in the end, but it would have been… interesting.



This is the standard alternate universe line and, sure, a 68000 based PC would have been great, but the resulting machine would have been gobsmackingly more expensive, and the chip wasn’t even really available in quantity yet when IBM went shopping. If the base price of the PC had been $5,000 instead of $1,595 and it came out a year later I’m definitely thinking we’d all be using something else today. Very possibly Macintoshes, or machines descended from Atari ST-style knockoffs.
gobsmackingly 👍
 
More to the point, the Z8000 was introduced only a few months after the 8086 (1979). It could address 8 MB without an MMU; with the MMU, 16 MB.
Probably a better design than the 8086, but 1980 began the moribund road-to-Hell Exxon era. If I were a product engineer for IBM, I wouldn't have touched the Z8000 either. There were huge misgivings then about Zilog's future--and they were justified. I note that AMD and Siemens tried to make the Z8000 a going enterprise by co-founding AMC. From my interactions with that outfit, however, it was hard to take seriously (I had acquaintances who worked for AMC).
 
I want to say the 8086 has something like DJNZ using CX. Don't quote me on that one; I'm an x86 ASM n00b.
 
I am planning on hooking up A8 to A14 to a CF drive though in direct mode.

I think I brought this up before, a long time ago, but I think you’re going to be disappointed trying to do what you’re planning here. TL;DR, the high address lines on a CF card are a lie. You won’t be able to use this to fake 128 byte blocks.
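For context on why 128-byte access matters here: CF cards transfer whole 512-byte sectors, so 128-byte CP/M records have to be blocked/deblocked in software, buffering a sector and serving quarters of it. A hypothetical sketch (the names and the stand-in `read_sector` are illustrative, not anyone's actual BIOS):

```python
# Blocking/deblocking sketch: serve 128-byte CP/M records out of a
# cached 512-byte CF sector, only hitting the card on a buffer miss.
SECTOR = 512
RECORD = 128

buf = {"lba": None, "data": bytearray(SECTOR)}

def read_sector(lba):                 # stand-in for real CF sector I/O
    return bytearray([lba & 0xFF] * SECTOR)

def read_record(record_no):
    lba, quarter = divmod(record_no, SECTOR // RECORD)
    if buf["lba"] != lba:             # miss: fetch the containing sector
        buf["data"] = read_sector(lba)
        buf["lba"] = lba
    off = quarter * RECORD
    return bytes(buf["data"][off:off + RECORD])

rec = read_record(5)                  # record 5 -> LBA 1, quarter 1
print(len(rec))                       # 128
```

Writes need the same buffer plus a read-modify-write of the full sector, which is the part the high address lines cannot be tricked into avoiding.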
 
Did PC quality change at all between the machines before the IBM and after, or was the quality generally the same? It seems like back then things may have been expensive, but they were often built to last. I am completely surprised at how robustly the floppy drives in my IBM PC AT are built. They are solid, with lots of carefully machined metal. The quality of everything has gone down now.
 