Protected Mode
As you all probably know, most games from 1993 onward require a DOS extender, mainly DOS4GW. That program makes the microprocessor enter protected mode and access memory beyond the 1MB limit.
Now, I tried to run this little program on a 286 and it says that it is only supported on the 386 and beyond.
The 286 also has a protected mode, albeit not the same as the 386's. My question is: would it be possible to write a program to enter the 286's protected mode that is compatible with DOS4GW? Would it be possible to run, for instance, Doom on a 286? I know that even if it worked, it would run at no more than 6 FPS! There are tons of games that don't run on 286s because of this DOS4GW! Any thoughts?
 
No--DOS4GW is what's called a DPMI server--it enables programs to run in 32-bit mode, while translating calls to 16-bit real-mode interfaces (e.g. a call to INT 13 for disk services). A 32-bit program running under DOS4GW uses the extended 32-bit register set and addressing of the 386+, which the 80286 does not have.
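To make that concrete, here's roughly what the translation looks like from the program's side. This is a minimal sketch using DJGPP's <dpmi.h> wrappers (DPMI function 0300h, "simulate real-mode interrupt") rather than DOS4GW's own Watcom interface, and the INT 13h function chosen is just an example:

/* Minimal sketch: a 32-bit protected-mode program asking the DPMI host
   to run a 16-bit real-mode BIOS service (INT 13h AH=08h, "get drive
   parameters").  Uses DJGPP's DPMI wrappers; a DOS4GW program does the
   same thing through Watcom's int386()/DPMI interface. */
#include <stdio.h>
#include <string.h>
#include <dpmi.h>

int main(void)
{
    __dpmi_regs r;
    memset(&r, 0, sizeof r);    /* zero SS:SP/flags as DPMI 0300h expects */

    r.h.ah = 0x08;              /* INT 13h function 08h */
    r.h.dl = 0x80;              /* first hard disk */

    /* The DPMI host drops to real (or V86) mode, issues INT 13h,
       then copies the real-mode register state back to us. */
    __dpmi_int(0x13, &r);

    if (r.x.flags & 1)          /* carry set = BIOS reported an error */
        printf("INT 13h failed, AH=%02Xh\n", r.h.ah);
    else
        printf("max head %u, max cylinder (low 8 bits) %u\n",
               r.h.dh, r.h.ch);
    return 0;
}

DOS4GW does this kind of mode switching behind the scenes for every DOS and BIOS call a 32-bit program makes.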

Theoretically, it might be possible to do an instruction-by-instruction emulation of an 80386 on an 80286, but the result would hardly be suitable (fast enough) for gaming (putting it mildly).
 
There were a number of 286-specific extenders made. Some were DPMI compliant. I don't know of any DOS games that used them, though a few business applications did. It is certainly possible to modify DOOM's source to run on a 286 extender, but it might take years to complete, and the result would probably be too slow to be playable.

Check the specific game and its extender. You might find some programs that were developed for the 286 version of the extender but sold with a 386 extender as extender prices dropped. Those might work if you replace the 386 extender with its 286 counterpart.
 
But again, 286 protected mode is very different from 32-bit protected mode. To put it a different way, in 16-bit PM, the maximum amount of memory you can address without changing descriptors/segments is 64KB. In 32-bit mode, you can directly address about 4GB. And there are several variations on 32-bit PM as well, while there is only one 286 PM. To be sure, a 386 can execute 286 PM code (a backward-compatibility hack, the same way that 8086 real mode is). The world probably would be a better place if Intel had simply dropped those two hacks--but we're stuck with them now.
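To put that in code terms, here's a hedged sketch of touching a buffer bigger than 64KB in each mode, in DOS-era C. The size is an arbitrary example; __386__ is Watcom's predefined macro for 32-bit targets, and the 16-bit half assumes a Borland/Watcom-style compiler with "huge" pointers:

/* Illustration only: addressing more than 64KB of data in 32-bit
   vs. 16-bit protected mode. */

#define BUF_SIZE 200000UL        /* ~200KB: three 64KB tiles and change */

#ifdef __386__
/* 32-bit flat model (e.g. Watcom C + DOS4GW): one segment covers it. */
void clear_buf(char *buf)
{
    unsigned long i;
    for (i = 0; i < BUF_SIZE; i++)
        buf[i] = 0;              /* plain pointer math, no segment games */
}
#else
/* 16-bit PM: an offset wraps at 64KB, so the selector must be stepped
   every 65,536 bytes -- here the compiler does it via "huge". */
void clear_buf(char huge *buf)
{
    unsigned long i;
    for (i = 0; i < BUF_SIZE; i++)
        buf[i] = 0;              /* each access may pay for renormalization */
}
#endif

The huge keyword hides the segment stepping, but it doesn't make it free: the generated code still pays for the renormalization arithmetic.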

But if I were writing a game to use DOS4GW and a 386, why on earth would I write 286 PM code?
 
The world probably would be a better place if Intel had simply dropped those two hacks--but we're stuck with them now

Intel made an attempt to drop real mode with the 80376 processor. It was not widely successful, though perhaps not because real mode was missing but because the product was targeted at embedded applications.

Anyway, nowadays real mode and support for all its quirks is a very small piece of the processor, probably microcoded, and it doesn't need much hardware support anyway (the processor still has to support the 8-bit/16-bit registers in 32-bit and 64-bit modes, so basically only the segmented addressing mode is left, and that can be implemented fairly easily using the facilities needed for VM8086). Also, real mode was used by OSes as late as the early 2000s (Windows ME), and even newer OSes will rely on legacy BIOS functions during boot until UEFI is widely deployed.
 
But if I were writing a game to use DOS4GW and a 386, why on earth would I write 286 PM code?

The primary reason for developing with a 286 extender instead of a 386 extender was price. The 286 extenders were about $495 in 1990; 386 extenders were several times that much. I do remember some programs that started development with the free limited version of a 286 extender, proved to need more memory than the limited 286 extender offered, and shipped with a 386 extender because it was cheaper than the full 286 extender from the same vendor. I never bothered to check the emitted code, so I can't say whether the large memory blocks being used were broken into 64K segments which the code automatically jumped between to provide the illusion of a large flat address space on a 286.

DOOM may have started after DOS4GW was released and a decent free 386 extender was available. It took several years for the extender industry to reach that point.
 
Phar Lap, right? I remember their freebie "lite" version. I don't recall what their extender licensing price was. But I suspect that if you were a major games vendor, the price of an extender didn't matter very much.

But in fact, if a developer wanted to run his application in 286 PM, there was nothing stopping him. Extenders were convenient, but some folks rolled their own rather than use a commodity solution. And if you're a graphics programmer, having your entire VGA memory mapped as a single linear hunk rather than having to fool with 64KB chunks does save a lot of bit twiddling. While it's true that the way Windows 3.x stored memory descriptors for large allocated areas meant that an application could calculate a descriptor address instead of asking Windows to map it, it still takes a bit of code to deal with the 64KB limitation.
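For the record, that Windows 3.x trick is the exported __AHINCR constant. A sketch of the idea in Win16 C (the function and variable names are mine, and the declaration idiom follows the usual Win16 convention, where the symbol's offset is the value -- check your SDK):

/* Selectors for a >64KB GlobalAlloc() block are allocated
   consecutively in Windows 3.x protected mode, so a program can step
   to the next 64KB tile by adding __AHINCR to the selector instead
   of asking Windows to map anything. */
#include <windows.h>
#include <dos.h>                    /* FP_SEG, MK_FP */

extern WORD __AHINCR;               /* absolute symbol exported by KERNEL */
#define AHINCR ((WORD)(UINT)&__AHINCR)

void fill_big(HGLOBAL h, DWORD size, BYTE value)
{
    BYTE FAR *base = (BYTE FAR *)GlobalLock(h);
    WORD      sel  = FP_SEG(base);  /* selector of the first 64KB tile */
    DWORD     done = 0;

    while (done < size) {
        BYTE FAR *p = (BYTE FAR *)MK_FP(sel, 0);
        DWORD chunk = size - done;
        DWORD i;
        if (chunk > 0x10000UL)
            chunk = 0x10000UL;      /* at most one full 64KB tile */
        for (i = 0; i < chunk; i++)     /* still 64KB at a time... */
            p[(WORD)i] = value;
        done += chunk;
        sel  += AHINCR;                 /* ...but no call into Windows */
    }
    GlobalUnlock(h);
}

Here h would come from something like GlobalAlloc(GMEM_MOVEABLE, size) for a block over 64KB, which is what makes Windows allocate the tile selectors consecutively in the first place.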
 
But if I were writing a game to use DOS4GW and a 386, why on earth would I write 286 PM code?

If your target is a 386, of course it doesn't make sense to program in 286 PM! But if I had some game programming skills, I would make some games for the 286 just to demonstrate that it wasn't a "brain dead chip," as Bill Gates said. I think the 286 is a great processor: faster than a 386SX and almost on par with the 386DX (talking at the same clock speed).
 
As I recall reading, once a 286 went into protected mode, there was no way back to real mode short of resetting the CPU...which would have certainly limited its usefulness.
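If I remember right, the workaround OS/2 and himem.sys used was to have the keyboard controller pulse the CPU's reset line after parking a resume address where the BIOS would look for it. Something like this sketch (standard AT port numbers; the shutdown-code value varied by BIOS, and real protected-mode code would need a selector for low memory rather than MK_FP, so treat it as illustrative, not drop-in):

/* Sketch of the 286 "back to real mode" hack: leave a note for the
   BIOS in the CMOS shutdown-status byte, park a resume address at
   0040:0067, then have the keyboard controller reset the CPU.  The
   BIOS sees the shutdown code, skips POST, and far-jumps back to us
   in real mode. */
#include <conio.h>                  /* outp() */
#include <dos.h>                    /* MK_FP() */

void reset_to_real_mode(unsigned resume_seg, unsigned resume_off)
{
    unsigned far *vec = (unsigned far *)MK_FP(0x0040, 0x0067);

    vec[0] = resume_off;            /* BIOS resume vector at 0040:0067 */
    vec[1] = resume_seg;

    outp(0x70, 0x8F);               /* select CMOS reg 0Fh (shutdown status);
                                       the high bit keeps NMI disabled   */
    outp(0x71, 0x05);               /* 05h: on reset, JMP FAR via 40:67  */

    outp(0x64, 0xFE);               /* keyboard controller: pulse reset line */
    for (;;)
        ;                           /* spin until the reset takes effect */
}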

Wesley
 
BillG and I are on the same page on that one. The screwy architecture really held a lot of development back. When the 386 was released, it was like a breath of fresh air. Consider that the Moto 68K series, with 32-bit registers and a large address space, existed long before the 386. Doing a port of Unix to the 68K was comparatively straightforward; on the 80286 it was like driving ten-penny nails with a hammer made of Silly Putty.

And, since at the time running legacy DOS applications was about the only game in town, it didn't do too well at those in PM, either.
 
Faster than a 386SX and almost on par with the 386DX (talking at the same clock speed)

This is not correct, according to benchmarks on the actual hardware. The 286-16 and 386sx-16 were very close, but the 386sx was still faster.

If you have a 286 with VGA and a Gravis Ultrasound, there is a great demo called Legend by Impact Studios that shows off just what the little 286 can do. It's an impressive demo for the chip. Another impressive demo that can run on a 286 with VGA and a Sound Blaster or a Covox is Crystal Dream by Triton.
 
I blame the whole fiasco as much on Cannavino not knowing what was happening at Microsoft. To his credit, he was right about one thing--Windows was a security nightmare. Both IBM and Microsoft really screwed over their developer community--IBM by throwing them under the wheels of Microsoft, and Microsoft by doing its best to convince the OS/2 developer community that Windows was really the new OS/2.

It sucked. I really liked developing for OS/2 and was hauled into the Windows scene kicking and screaming. IBM's support and documentation for developers were superb--something Microsoft did not offer.
 
I blame the whole fiasco as much on Cannavino not knowing what was happening at Microsoft.
To be honest, MS tried to hide it too. I mentioned in the blog post that the email in, for example, PX00307 did not have Cutler or Letwin as CCs.
To his credit, he was right about one thing--Windows was a security nightmare.
Well, OS/2 2.x and Windows 9x fundamentally had a similar level of security.
 
Probably that's because many 386SX systems had some cache memory, while 286 systems didn't. See:
Intersil 80C286 Performance Advantages Over the 80386SX

Some instructions ran slower on the 386sx, others faster. If the compiler aimed for the fastest possible 286 code, that code would perform badly on a 386sx. More typical real-world code ran slightly faster on a 386sx at the same clock speed as a 286. The 386sx remained considerably more expensive, so the 286 retained a big price-to-performance advantage.
 
Re: 286 vs 386sx performance, I vaguely remember gamer-types back in the day claiming that if you wanted the maximum possible speed for programs that didn't need EMS, you should avoid running a memory manager, because simply having it loaded "slowed things down". I never had a machine where I might have noticed the difference (went straight from a 286 to a 486). Did running in vm8086 mode with paging enabled vs. "real Real Mode" *really* have a significant impact on performance under DOS?

(Or to put it another way: assuming the "standard" speed for 286 clones was around 12 MHz when 16 MHz 386SXs were introduced, was it even remotely possible that running CEMM or QEMM on the 386sx could cause enough overhead to make it slower than the 286? Personally I'd guess "no", but it's not like there's much of a performance delta there to start with.)
 
There was also a very basic difference between the 386 (not just the SX) architecture and the 286 (and earlier). The 80386, unlike the 80286, has a very simple high-speed "core"; x86 instructions are decoded and implemented microprogram-style. The result is that very simple instructions execute very quickly, with some unexpected results. Take the LOOP instruction. This instruction decrements the CX register and branches or does not branch depending on the result (the LOOPE/LOOPNE variants also test the zero flag). On the 80286, this is indeed the fastest way to code a decrement-and-jump loop. On the 80386, using a separate decrement followed by a jump is faster. There are other examples of this.
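To illustrate, here are the two idioms side by side in Borland-style inline asm (the iteration count is arbitrary). Time them and the first wins on a 286, the second on a 386:

/* The two loop idioms compared above.  On a 286 the single LOOP
   instruction is the fast form; on a 386 the discrete pair wins. */
void spin_with_loop(void)
{
    asm mov cx, 10000
spin1:
    asm loop spin1          /* DEC CX + branch-if-nonzero in one opcode */
}

void spin_with_dec_jnz(void)
{
    asm mov cx, 10000
spin2:
    asm dec cx
    asm jnz spin2           /* two opcodes, yet faster on a 386 */
}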

So, what you have is a program that will run more slowly on the faster CPU. Writing "universal" code that would execute on anything was pretty easy on the 80286, since all that had to be accommodated was the 8086 differences. On the 80386, it's a whole different ballgame, complicated by the introduction of 32-bit registers and the extra segment registers (FS, GS). It became very advantageous for computation-intensive programs to determine the CPU type and use CPU-appropriate code to handle tasks.
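The detection itself was cheap. The classic trick (this follows Intel's published detection code in spirit; Borland-style inline asm using the _AX pseudo-register) plays with the top bits of FLAGS, which are stuck high on an 8086 and stuck low on a 286 in real mode:

/* Classic FLAGS-based CPU sniff.  Bits 12-15 of FLAGS read as 1s on
   an 8086/88, read as 0s on a 286 in real mode, and are writable on
   a 386 or later. */
int cpu_family(void)
{
    unsigned flags;
    unsigned set_mask = 0xF000;

    asm pushf                   /* save caller's flags */
    asm xor  ax, ax
    asm push ax
    asm popf                    /* try to clear FLAGS bits 12-15 */
    asm pushf
    asm pop  ax                 /* read them back */
    asm popf                    /* restore caller's flags */
    flags = _AX;
    if ((flags & 0xF000) == 0xF000)
        return 86;              /* bits stuck high: 8086/8088 */

    asm pushf
    asm mov  ax, set_mask
    asm push ax
    asm popf                    /* try to set FLAGS bits 12-15 */
    asm pushf
    asm pop  ax
    asm popf
    flags = _AX;
    if ((flags & 0xF000) == 0)
        return 286;             /* bits stuck low: 80286 */

    return 386;                 /* 386 or later */
}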

So we're really comparing apples and oranges here.

The 80376 was intended as an embedded CPU and did not have paging hardware, which would have limited its usefulness for general-purpose applications.
 
There was also a very basic difference between the 386 (not just the SX) architecture and the 286 (and earlier). [...] On the 80386, using a separate decrement followed by a jump is faster.

I'm always impressed by all your posts! You know everything about computers!
 