
Are there any 286s that de-turbo to 4.77 MHz?

Moogle! — Experienced Member · Joined Dec 14, 2008 · Messages: 120 · Location: United States of America
Preferably an 8/10/12 MHz machine. I actually have a 286-powered XT board (Hedaka 919) that does this, but I was wondering if there are any proper, fully 16-bit boards that de-turbo to 4.77 MHz?
 
I doubt it. All the full 16-bit systems were designed to emulate the 5170, so most slowed to either 6 MHz or 8 MHz.

Okay, I think there were S100 systems that ran a 286 quite slowly, but those weren't exactly going to have an AT-compatible turbo mode.
 
What would be the point of slowing an 80286 to 4.77 MHz? It can be done, but the timing still won't be the same as an 8088 XT running at 4.77 MHz--not by a long shot.

I moved from an 8 MHz 8088 clone to an 80286 running at 6 MHz. It was much faster than the XT clone.
 
When we were working with pre-release steppings of the 80286, we ran those at about 4 MHz initially. Unlike the S100 variety, those were CLCC.
 
What would be the point of slowing an 80286 to 4.77 MHz? It can be done, but the timing still won't be the same as an 8088 XT running at 4.77 MHz--not by a long shot.

I moved from an 8 MHz 8088 clone to an 80286 running at 6 MHz. It was much faster than the XT clone.

It may be running faster internally, whatever that may mean exactly (register operations?). But as far as a program is concerned, since the difference in clock speeds screwed some programs up, wouldn't matching the clock speed alleviate some problems? I'm just making a wild guess, and no, I never heard of an AT clone with a 4.77 MHz clock option. Most software vendors delivered patches to compensate for *any* issues that would result from running their stuff on an AT, as it became the future.

Would memory and I/O operations automatically be faster on a 4.77 MHz AT? I guess that may depend on the specific speed of the ICs. I don't know if the AT's peripheral chips, many identical to those in a PC or XT, needed a higher rating as the clock speed was 6, then later 8 MHz. But a simple crystal swap to bring an AT to 8 MHz didn't necessitate changing any chips.
 
The 80286 instruction timings were very different from those of the 8088. Add to that the 16-bit bus and you had a good recipe for performance.

Clock speed isn't everything. I recall that one of the development team for the 6809 said something to the effect of "If we knew that people rated processors by clock speed, we would have added a waveguide".
 
I've seen the claim in writing that the generational leap between the 8088/8086 and the 80286 (and the 186, which is basically a real-mode-only 286) was the largest per-clock performance jump in the history of x86 CPUs, and it's probably true. The old "Landmark" benchmark used to scale its results so it would say your machine was as fast as an IBM PC/AT running at "X" MHz; it's a pretty useless and inaccurate benchmark, but it is *probably* about on the nose when it claims that a PC/XT is roughly equivalent to a 2 MHz-ish AT.
 
We used the 186 and 286 on the same board. The 186 could run MS-DOS or serve as the I/O processor for the 286. The 186 and 286 were developed in parallel; so some features are bound to be shared.
 
Using the data in the TOPBENCH benchmark, a 6 MHz 80286 AT performs roughly 2.75x faster than a 4.77 MHz 8088 (both with an EGA card installed, in case one suspects video memory timing differences affecting the benchmark). That's not quite apples-to-apples, so a closer comparison would be: a 6 MHz AT is 1.83x faster than a 7.16 MHz 8088 clone (I used a Tandy 1000 EX).

So a nearly 2x jump is indeed impressive, IMO.
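Dividing out the clock ratio in those TOPBENCH figures gives a rough per-clock comparison. This is just a back-of-the-envelope sketch using the two ratios quoted above as its only inputs:

```python
# Rough per-clock (per-MHz) advantage of the 286 over the 8088,
# computed from the TOPBENCH ratios quoted in this thread.

def per_clock_speedup(overall_speedup, fast_mhz, slow_mhz):
    # overall_speedup compares work per second; dividing out the
    # clock ratio leaves the advantage per clock cycle.
    return overall_speedup * slow_mhz / fast_mhz

# 6 MHz 286 vs 4.77 MHz 8088: 2.75x overall
print(round(per_clock_speedup(2.75, 6.0, 4.77), 2))  # -> 2.19

# 6 MHz 286 vs 7.16 MHz 8088 clone: 1.83x overall
print(round(per_clock_speedup(1.83, 6.0, 7.16), 2))  # -> 2.18
```

Both comparisons land at about 2.2x per clock, which squares nicely with Landmark's claim that an XT behaves like a roughly 2 MHz AT (4.77 / 2.2 ≈ 2.2 MHz).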
 
What would be the point of slowing an 80286 to 4.77 MHz? It can be done, but the timing still won't be the same as an 8088 XT running at 4.77 MHz--not by a long shot.

I recall more than one old review of earlier 286/386 machines (mid-to-late 1980s) claiming that they could automatically clock down to 4.77 MHz when the floppy disk drive was accessed. Supposedly that was done to play nice with older disk-based copy-protection schemes.

I wonder how effective this really was, because the timing would still be wildly different. Maybe the actual slowdown went even lower? Or was it mainly bus cycles that mattered?
 
That was done on a number of "turbo" 8088 systems, but I don't recall seeing that with a 286 system. Given that 5170s and the like were pretty popular, I fail to see the point of it.
 
Given that 5170s and the like were pretty popular, I fail to see the point of it.

I pretty much second that. IBM made zero pretense that the 5170 was supposed to be in any way "cycle-exact" compatible with the 8088 models, and it would pretty much be mission impossible to achieve that anyway: not only is the 286 faster across the board than the 8088, but how *much* faster varies a lot based on which instructions/operations you're talking about. Simply clocking it down to the average 1.9-or-whatever MHz that would make it run some particular benchmark in the same time as a 5150 would just be a tablecloth loosely draped over a giant, ragged pile of individual differences.

That said, I guess I vaguely remember some instances where old reviews of a given AT compatible would claim it was "more compatible" than IBM's because it would happen to run some particular piece of software the reviewers were using as a touchstone that the 5170 had trouble with, but I would probably chalk that up to BIOS or other differences, not speed-matching.
 
I assume that those protection methods didn't need a *precise* 5150/5160 system speed, so a moderate slowdown was enough. Perhaps 4.77 MHz just happened to do the trick on a 286, and that rate was cheap to throw in (it's the ISA bus's 14.31818 MHz OSC signal divided by three). I could be wrong about the exact rate, of course, but I'll see if I can dig up one of those articles.

I did find one about a 20-MHz *386* system, which optionally goes down to *8* MHz for floppy access: https://archive.org/details/PC_Tech_Journal_vol06_n08/page/n97 (bottom left). A little later it mentions a 4.77 MHz setting too, but says that the slower speeds are faked with wait-states, without changing the clock rate at all. Weird.
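For what it's worth, the 4.77 MHz figure falls straight out of the PC's clock chain, which is why it was cheap to offer. A quick arithmetic check of the divider relationships (nothing machine-specific here):

```python
# The PC family derives its clocks from one 14.31818 MHz crystal:
#   OSC / 3 -> 4.77 MHz CPU clock (5150/5160)
#   OSC / 4 -> 3.58 MHz NTSC color-burst (the reason for the odd frequency)
OSC_MHZ = 14.31818

cpu_clk = OSC_MHZ / 3
colorburst = OSC_MHZ / 4
print(round(cpu_clk, 2))     # -> 4.77
print(round(colorburst, 2))  # -> 3.58
```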
 
Anecdotally:

The Leading Edge Model D2 (6/8/10 MHz) does require the 8 MHz clock for the onboard FDC, but it doesn't change the CPU's operating frequency when the FDC is active. Mine had a fault that caused the 8 MHz clock to be missing: the onboard FDC didn't work, and the machine would lock up if the 8 MHz CPU frequency was selected, but it didn't lock up if you tried to use the onboard FDC when either of the other CPU frequencies was selected.
 
One of the issues relating to CPU speed and the FDC used in the PC line (µPD765A/8272A) is that the chip doesn't respond immediately to commands or state changes (there are really only 2 I/O ports, a data and a command/status port). The datasheet quotes something like 50 µsec delay required between doing something and expecting a response. In the 5150 BIOS and later, the delay is handled by CPU loops, which are, quite naturally, calibrated to the expected CPU speed. Speed up the CPU too much and those timing loops go out the window. This is why a lot of floppy copy-protection schemes break down with souped-up PCs and XTs--the code performing the copy protection check apes the basic BIOS FDC code.

One way out of this was to slow the CPU to 4.77MHz during floppy accesses.

Later code calibrated itself to the CPU speed--or used the refresh status in port 61h--which amazingly hasn't changed over many years (toggles at about 50 µsec intervals, if memory serves).
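The timing-loop failure described above is just arithmetic: a loop calibrated as a fixed iteration count at 4.77 MHz finishes far too early on a faster clock. A hypothetical sketch (the ~17 cycles per iteration is an illustrative figure for a small 8088 delay loop, not a datasheet value):

```python
# Why fixed-iteration delay loops break when the CPU clock changes.
# CYCLES_PER_ITER is a hypothetical per-iteration cost for a tiny
# 8088 delay loop, chosen only for illustration.
CYCLES_PER_ITER = 17

def iterations_for(delay_us, clock_mhz):
    # cycles available = delay_us * clock_mhz; divide by per-loop cost
    return round(delay_us * clock_mhz / CYCLES_PER_ITER)

def actual_delay_us(iterations, clock_mhz):
    return iterations * CYCLES_PER_ITER / clock_mhz

# Calibrate for the FDC's ~50 us requirement at 4.77 MHz:
n = iterations_for(50, 4.77)
print(round(actual_delay_us(n, 4.77), 1))  # ~50 us, as intended
print(round(actual_delay_us(n, 12.0), 1))  # well under 50 us at 12 MHz
```

On a 12 MHz machine the same loop waits under 20 µs, far short of the FDC's requirement -- hence dropping back to 4.77 MHz for floppy access, or re-calibrating against the port 61h refresh toggle as described above.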
 