
Why was there never a more "IBM-compatible" 80186?

Well, DOS was still the boss, at least with the feds, into the early '90s. Remember how WordPerfect 4/5 was considered the best word processor out there and even had its own magazine and training centers? The office IBM PC/XTs, along with the Wyse 286s, soon gave way to the Gateway 386s, and everyone liked the speed. BTW, I was/am a sloppy typist and always liked WordPerfect's "reveal codes" feature in the DOS versions.
 
Around the 386 time, I moved to WS2000. It wasn't the disaster that many folks thought. Its biggest failing was that it wasn't compatible at all with regular WS. It did come with a version of Star Exchange, but who wants to convert documents before editing them? I have WS4 for both DOS and CP/M-80. My last purchase of WS was WS 7 for Windows. I don't think I ever got around to trying it--still have the box.
Having used WS since its very early days (I have a copy of, I think, WS 0.9), the keystroke thing is etched into my hands. On Linux, Joe is my text editor of choice--it runs just fine over a telnet connection and understands WS keystrokes.
 
It's clear they were thinking PC-first by the time of the 386SX, because it's clearly a product targeted at a specific niche in the PC hardware market--teasing "you can have 386-enhanced Windows or OS/2" to people who would otherwise be served well by an equally clocked 286.

It seems like we're suddenly mixing up technologies here.

While it's no doubt true that the dominance of the IBM PC and MS-DOS software forced Intel to realize that they needed a compatibility mechanism like VM86 mode in the 386 to viably position the product as the natural growth/upgrade path for the personal computer market as a whole, this actually has very little to do with what was being discussed, which is "why didn't Intel make something like the 80186 but PC compatible?"

VM86 mode is an ISA extension that exists to let the 386 freely mix, at a "task level", legacy real-mode 8086 code with native protected-mode software. IE, it provides an easy hardware-enforced mechanism for an OS designer to go hog-wild building the next generation of OS while still letting users drag their old DOS software along for the ride. The 286 was roasted repeatedly and ruthlessly for not allowing this. (Recall the famous Bill Gates quote about the 286 being a "brain-dead chip"? It was the lack of backwards compatibility once switched into protected mode that he was talking about.) Some companies actually *tried* to work around the 286's issues by exploiting "Unreal Mode", IE, directly messing around with the registers of the CPU in undocumented ways, but these techniques were slow, rickety, and broke on some steppings of the 286 chip. Intel took it on the chin really hard for this limitation, and explicitly designed the 386 based on the feedback.

To be clear, maybe the 286's backwards-compatibility limitations looked perfectly reasonable in 1982, for vaguely similar reasons as the 186's hardware setup seemed "fine". It's not as if the 286 is completely incapable of running real-mode code in protected mode: you can write "well-behaved" code that runs in either real or protected mode, just like you can write software that runs perfectly fine on either an IBM PC or a Tandy 2000; if you do everything through documented APIs, then user code doesn't need to care about the underlying hardware or OS. Maybe some of the blame should land on IBM for cursing the IBM PC with such lousy BIOS APIs for accessing hardware like the screen and serial ports that it pretty much forced software writers to go straight to the jugular, and thus encouraged the development of a software base that almost uniformly broke all the rules of "good behavior". But on the flip side, well, the 80286 didn't see the light of day until people had been building the PC/MS-DOS software base for multiple years; imposing new rules after the game has already started almost never works out.
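That "well-behaved vs. badly-behaved" distinction can be sketched in a few lines of arithmetic. This is a toy simulation, not real CPU code--the descriptor table contents and selector values are made up for illustration--but it shows why 8086 code that treats segment values as numbers breaks under 286 protected mode, while code that treats them as opaque handles keeps working:

```python
# Toy sketch (hypothetical values): why segment arithmetic that works in
# 8086 real mode falls apart under 286 protected mode.

def real_mode_linear(seg, off):
    # 8086 real mode: linear address = segment * 16 + offset
    return (seg << 4) + off

# A toy 286-style descriptor table: descriptor index -> segment base address.
descriptor_table = {1: 0x20000, 2: 0x80000}

def protected_mode_linear(selector, off):
    # 286 protected mode: the "segment" value is a selector; its top 13 bits
    # index a descriptor table. Its numeric value has no address meaning.
    index = selector >> 3
    return descriptor_table[index] + off

# "Well-behaved" code uses whatever segment value the OS handed it, untouched.
seg_from_os = 0x1000
assert real_mode_linear(seg_from_os, 0x10) == 0x10010

# "Badly-behaved" code assumes seg + 0x1000 advances the address by 64 KB.
# True in real mode...
assert real_mode_linear(seg_from_os + 0x1000, 0) - real_mode_linear(seg_from_os, 0) == 0x10000

# ...but in protected mode, adding to a selector just picks a different
# (or nonexistent) descriptor; the arithmetic relationship is gone.
sel = 1 << 3                       # selector for descriptor index 1
assert protected_mode_linear(sel, 0) == 0x20000
assert protected_mode_linear(sel + (1 << 3), 0) == 0x80000  # not 0x30000!
```

Code in the first style survives the mode switch; code in the second style (which, per the above, described most of the PC software base) does not.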

Anyway, the point I'm laboring to get to is that while, sure, the VM86 extensions were an important concession to the realities of the PC software market, on a "system architecture" level the problem they're solving is nothing like the "making a CPU more like a SoC" thing. The 386 CPU is still just a CPU; it doesn't incorporate any of the chipset features that were being talked about here. Intel did, by this time, start selling companion support chips (IE, "chipset" components) that largely complied with the ad-hoc PC standards in terms of hardware addressing, but the CPU itself was still just a CPU. Off the top of my head, I think you have to get all the way into the Pentium era, with the P54C's inclusion of an onboard APIC, to really start arguing the CPU was swallowing up chipset functions, and even then it's still nowhere close to being a "SoC", or even what the 80186 is.

Bluntly, I'd say again that the reason Intel never bothered making an XT-class semi-SoC is simply that by the time it became obvious that PC compatibility was such a make-or-break deal, Intel had zero interest in perpetuating the market life of 8086-class machines. Second-sourced 8088/86s were everywhere and other companies were already doing integrated chipsets; where's the money to be made for Intel here? They'd much rather come up with reasons to upsell you on a better, more expensive (and more profitable) computer, especially if they're the only one who can sell you the chips for it. (Which is exactly what they tried to pull off with the 80386.)
 
I don't know that in 1986 or 1987 too many people wanted a really really fast DOS machine, they were mostly eagerly looking forward to something better than DOS. Which then took longer than expected to materialize, and in a way that was unexpectedly both more compromised and more expensive, which made DOS seem by comparison, well, good enough for now. Windows 3's great trick really was to allow its use (however limited) on a 1 MB or even 640k machine, and to not come out until memory prices had started to really fall.

Windows 3 was the best reason to buy a 386sx over a 286, for sure.
 
Never forget that every rev of the 286 had errata that went unaddressed.

Most famous is the real-mode-addressable 64K region at the start of extended memory.
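The arithmetic behind that region is easy to sketch. In real mode the linear address is segment * 16 + offset, which can exceed 1 MB; on an 8086 the result wraps at 20 bits, but on a 286 (with the A20 gate enabled) it doesn't--giving real-mode access to almost 64 KB of extended memory:

```python
# Sketch of the real-mode address arithmetic behind the 64K region
# above 1 MB (the "high memory area").

def linear(seg, off, a20_enabled):
    addr = (seg << 4) + off        # real mode: segment * 16 + offset
    if not a20_enabled:
        addr &= 0xFFFFF            # 8086/8088: only 20 address lines, bit 20 wraps
    return addr

# FFFF:0010 is the first byte past 1 MB...
assert linear(0xFFFF, 0x0010, a20_enabled=True) == 0x100000
# ...and FFFF:FFFF is the last reachable byte: 64 KB minus 16 bytes above 1 MB.
assert linear(0xFFFF, 0xFFFF, a20_enabled=True) == 0x10FFEF
assert linear(0xFFFF, 0xFFFF, a20_enabled=True) - 0x100000 + 1 == 65520

# On an 8086 (or with the A20 gate forced off for compatibility), the same
# segment:offset silently wraps back to the bottom of memory.
assert linear(0xFFFF, 0x0010, a20_enabled=False) == 0x00000
```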

Literature from 1982 often mentioned the 286 side by side with the 68000 as being 32-bit. This misconception came from the fact that the 286 was originally to have a 32-bit virtual address, which became a 30-bit one as the design firmed up.
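For anyone wondering where a 30-bit figure comes from on a 16-bit chip, it falls out of the selector format: a 16-bit selector has 13 index bits, one table-indicator bit (GDT vs LDT), and two privilege bits, and each selected segment can be up to 64 KB. A quick worked example:

```python
# Where the 286's 30-bit (1 GB) virtual address space figure comes from.

SELECTOR_INDEX_BITS = 13           # 13-bit index -> 8192 descriptors per table
TABLES = 2                         # the table-indicator bit picks GDT or LDT
SEGMENT_LIMIT = 64 * 1024          # 16-bit offsets -> 64 KB max per segment

virtual_space = TABLES * (2 ** SELECTOR_INDEX_BITS) * SEGMENT_LIMIT
assert virtual_space == 2 ** 30    # 1 GB of virtual address space per task

# Compare with the 286's physical reach: 24 address pins.
assert 2 ** 24 == 16 * 1024 * 1024  # 16 MB physical
```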

The 286 was intended to be more of a "386 junior" as a stopgap, because it was known early on that the 32-bit 386 couldn't be ready in time, but a 16-bit chip with 386-like features could be.
Intel did not put the proper resources into making the 286 a better solution because it was started after the 386 design effort and was meant only to cover until we got to the 386. The 286 is a window into what the 386 would have been like if it had been finished earlier, before Intel realized its mistake in scope.

An analysis of various rev 286 dies shows some unused areas that were likely abandoned functionality, disabled but not fully removed.
Had the 286 been intended to release in 1984, with a proper design cycle, the 386 might have been less of a must-have and the 286 would likely have been less clunky to use.
Intel of course wanted the 286 and lower to go away as soon as the first 386-12s rolled off the line. They wanted to corner the market IBM accidentally created.

But they gladly continued to make mass quantities of 80186s for a long time; the 80186 is as close as you're going to get to an x86 SoC in 1982. It just worked for many applications: industrial, modems, terminals, and special-purpose systems of all types. Like a 6502, it helped greatly reduce the cost of devices that needed a little x86 but not a whole "PC".
 
In the late '80s Microsoft was doing its best to get you to jump into MS Office. They succeeded in toppling WordPerfect, as WP was late to the party while Windows was clearly well established. I remember there were a lot of clerk/typists who refused to give in to Windows and insisted on DOS. By the time the Pentium came around, just about everyone in my bailiwick was on Windows.
 

The "tragedy" of the 80286 is that if you judge it completely in a vacuum, IE, you imagine it living in a purpose-designed protected-mode-only 16-bit computer, it's not a bad chip by at least the standards of the late 1970s. Its IPC was decent, it could access plenty of memory, it had a real (if segmentation-based rather than paging) MMU... if you imagine the 286 in the context of, say, the PDP-11, it looks pretty darn good; both its physical and virtual address spaces are huge by comparison. For running a mid-1970s version of UNIX it could be a real winner.

But, of course, the 286 wasn't born in 1974. If Intel had at least given it something like VM86 mode so it could have provided multitasking of existing software, or even an official "unreal" mode switch so it could just do bigger spreadsheets without the mickey-mouse contortions that 286 DOS extenders have to deal with, maybe they would have at least had something to justify the "stopgap" effort. But, sadly, there's a reason why most 286s spent most of their lives effectively being Turbo XTs. It really seems like Intel didn't even know who their target audience was.
 
It was the spreadsheet people that really did the driving back in those days. LIM memory, for example. Fancy graphics weren't terribly important and neither was multi-tasking/multi-user. Just godawful big spreadsheets. If you were interested in fancy publishing, you got a Mac and ran Aldus Pagemaker.
 
Anyway, the point I'm laboring to get to is while, sure, the VM86 extensions were an important concession to the realities of the PC software market, on a "system architecture" level the problem they're solving is nothing like the "making a CPU more like a SoC" thing. The 386 CPU is still just a CPU, it doesn't incorporate any of the chipset features that were being talked about here.

I guess from my super broad-brush perspective, both concepts represent an admission that x86 CPUs are basically married to the IBM PC ecosystem. I guess I'm imagining there was some day at Intel HQ where they had to go over everyone's "future CPU and support chip" wishlists and start kicking out things that couldn't be made to jibe with broad PC compatibility.

Second-sourced 8088/86s were everywhere and other companies were already doing integrated chipsets, where's the money to be made for Intel here? They'd much rather come up with reasons to upsell you on a better, more expensive (and more profitable) computer, especially if they're the only one who can sell you the chips for it. (Which is exactly what they tried to pull off with the 80386.)
Given that Intel themselves were not interested in the market, wouldn't that leave the opening for those second-source manufacturers to blaze their own trail, especially if they don't have a license to the next generation CPU? Or were second-source arrangements "yeah, you can fab an 8086, but not do any custom engineering?"

I suppose the C&T and Vadem chips discussed here count, but they're obviously pretty scarce (I don't believe you, random Aliexpress shop claiming to sell F8680s), and the Vadem one apparently used a V30 core. Maybe the idea of a "laptop on a chip" was more compelling because there was more need to do your own design work, and higher margins, than making the cheapest-possible board for white-box XT clones that were cheaper every day.
 
It's worth noting that Intel oversold their licensing, which enabled competitors to come up with "improved" versions, not just equivalent copies of chips. Said licensing was the subject of a landmark lawsuit, NEC v. Intel.
The facts of the situation were not in dispute: Intel and NEC entered into a cross-licensing agreement in 1976, and NEC began production of 8088 and 8086 chips in 1979, at about the same time as Intel. At the same time, NEC started working on its V-series MPUs, using some of the x86 hardware design, which was allowed under the terms of the license. But then NEC disassembled and studied the x86 microcode and used it to develop the V-series microcode. The giveaway was that the NEC microcode operated with the same quirks and in the same order as the Intel microcode.
Intel alleged copyright infringement. Read about it here. The bottom line is that Intel lost its case on several points, one of which was that it didn't require licensees to include an Intel copyright marking on their chips. Another was that NEC used a "clean room" procedure (similar to that used by Phoenix for the PC BIOS) to craft its final product.

It's a fascinating study and was a very big deal in 1990.
 
I am not sure where the Hornet fits into this, but it was effectively an 80186 paired with a display controller and PC-style PIC, PIT, and UART. The problem was that it needed a lot of pins (208), so it was a fairly complex chip to lay out a motherboard around. I can't find a listing for its total transistor count. I think SoCs had to be sold at a premium, which meant they only went into the smallest of portables. Any full-sized laptop or desktop could include a full-power CPU and normal support chips and get a faster system that costs less to build but can be sold for a higher price.
 
Cost to benefit.

The original 80186 was an example of integration saving money, in terms of the technology of the era.

If you go too far, integrating more than your current process node reliably and cheaply supports, it may add more cost than a bunch of ancient discrete throwaway chips.

Volume is also a very important consideration for cost; your example may have been aimed at a niche or high-margin market, like mil-spec or aerospace.

Next, 208 pins eventually became comparatively cheap, which is worth noting.

The 8088 SoC I mentioned in my first post was cheap as dirt but also horribly obsolete once it hit the market.

An 80186 XT-compatible chip could have been made easily at low cost, but legal issues would have stopped third parties.
 
The 80C186EC (Intel) is very close, but I've not run into any PCs that used it. The -EB was used in some embedded stuff (e.g., the Cipher SCSI tape drive). I don't recall which variant the USR external modems used.
 