You know what seems to be extra rare? Pentium II and III Xeons.

8450... Octal Pentium III Xeon processors. More processors than the 6x6. I forgot they built that beast.

the 8450 is at the top of my "wish list", but I've got too many practical things I need first. Still, owning a system with 8 distinct processors is some serious nerd cred.
 
true, but the 8-way slot-2 xeon is a supported config.
the ppro 6x6 was not supposed to work afaik. as i recall, it was supposed to cap out at quad processors. the fact ALR dropped the mic on everyone and made a 6-way that worked with the overdrives... nothing short of awesome. I really should get my alr system up and running, but i'm waiting on getting some parts.

I recall having a Dell 6450 a long time ago. It's a fairly common machine, and the one linked is a bit crusty, but probably works.
 
ASCI Red ran 9,000 PPros in SMP, so limits are fungible. (Though even among my colleagues who worked on it I can't get a consensus as to whether it was one distinct system or a Beowulf cluster.)

486s were similarly not supposed to be supported in SMP at all, but dual and up to eight-way systems did exist. Information on them is vanishingly rare.
 
I have one or two dual Socket 7 boards that expected you to run multiple Pentium chips. It always made me wonder what bizarre hacks they were pulling off to make the multiprocessor 386/486/586 systems work, because those chips were never expected to be used in that configuration and lacked many of the instructions to make it efficient.
 
I assume the 386 and 486 SMP systems were custom-made hardware running a custom-made OS (Unix of some kind) and custom-coded software. It wasn't until Windows NT 3/4 that you could buy a dual Pentium setup and run commercial software on it.
 
ASCI Red ran 9,000 PPros in SMP, so limits are fungible. (Though even among my colleagues who worked on it I can't get a consensus as to whether it was one distinct system or a Beowulf cluster.)

It was more tightly coupled than a Beowulf cluster, but it was still a message passing architecture; each compute node had its own memory, there was no global memory pool, and each compute node only had two CPUs.


I have one or two dual Socket 7 boards that expected you to run multiple Pentium chips. It always made me wonder what bizarre hacks they were pulling off to make the multiprocessor 386/486/586 systems work, because those chips were never expected to be used in that configuration and lacked many of the instructions to make it efficient.

386 multiprocessor systems and (I would wager) most 486 multi-CPU systems were proprietary asymmetric architectures that relied on cooperative message passing or other hacks (a late, more consumer-focused example of this would be the ill-fated BeBox with its multiple PPC603s), but some late 486 SMP systems complied with the Intel Multiprocessor Specification, the first public version of which, 1.1, came out in 1994. MPS was originally implemented in the form of the 82489DX APIC, which worked with the 486 and early Pentium processors, while later "P54C" Pentium CPUs and all Pentium Pro CPUs have APICs integrated.

Anyway, all 90 MHz and faster Pentium Socket 5/7 CPUs technically have MPS-enabled onboard APICs, limited to a pair of CPUs. (My vague recollection is that the MPS standard as implemented on P6-family CPUs maxed out at 8, but it depended on the exact CPU type; i.e., consumer CPUs like the PII/PIII could only do 2, Pentium Pro and Xeons could do up to 8, but likewise it may have depended on the CPU, some topped out at 4?) But obviously P5 SMP boards were pretty rare; none of Intel's mainstream consumer chipsets supported it, so you had to use one of the older high-end chipsets that Intel mostly orphaned when the P6 came out, and they had compatibility issues with OSes other than NT and whatnot. Strictly speaking more modern MPS-compliant OSes like Linux *should* run on these, but I don't think anyone's tested it in over 20 years.
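For anyone curious how an MPS-aware OS actually discovers those APICs at boot: per the MultiProcessor Specification, the firmware leaves a 16-byte "floating pointer structure" in low memory, starting with the ASCII signature "_MP_", whose 16 bytes must sum to zero mod 256. Here's a minimal sketch of that scan in Python; the sample memory image is hand-built for illustration, not dumped from real hardware:

```python
import struct

def find_mp_floating_pointer(mem: bytes):
    """Scan a memory image for the MP Floating Pointer Structure.

    Per the Intel MultiProcessor Specification, the structure is 16
    bytes on a 16-byte boundary, begins with the signature "_MP_",
    and all 16 bytes must sum to zero modulo 256 (checksum).
    """
    for off in range(0, len(mem) - 15, 16):
        if mem[off:off + 4] != b"_MP_":
            continue
        block = mem[off:off + 16]
        if sum(block) % 256 != 0:
            continue  # signature matched but checksum failed
        sig, phys_addr, length, spec_rev, checksum = struct.unpack_from(
            "<4sIBBB", block)
        return {"offset": off,
                "config_table_addr": phys_addr,   # points at the MP config table
                "length_paragraphs": length,      # in 16-byte units
                "spec_rev": spec_rev}             # 1 = MPS 1.1, 4 = MPS 1.4
    return None

# Hand-built example blob: config table at 0xF0000, length 1, spec rev 1.4.
raw = bytearray(b"_MP_") + struct.pack("<IBB", 0x000F0000, 1, 4)
raw += bytes(1)                 # checksum placeholder
raw += bytes(5)                 # feature information bytes
raw[10] = (-sum(raw)) % 256     # fix up checksum so the bytes sum to 0

image = bytes(16) + bytes(raw)  # pad so the structure isn't at offset 0
print(find_mp_floating_pointer(image))
```

A real OS scans the EBDA, the last kilobyte of base memory, and the 0xF0000–0xFFFFF BIOS ROM area for this structure before falling back to default configurations.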

The fact that Intel put APICs in the P54Cs kind of implies Intel might have at least considered SMP as a consumer feature at some point, but I would guess they canned it because it was clear that NT just wasn't going to be mainstream anytime soon.
 
I assume the 386 and 486 SMP systems were custom-made hardware running a custom-made OS (Unix of some kind) and custom-coded software. It wasn't until Windows NT 3/4 that you could buy a dual Pentium setup and run commercial software on it.

The assumption part has been my problem. We can speculate till the cows come home, but thus far I've not been able to find proof. If someone out there ever gets their hands on one of these systems, it'd be a real treat to learn about it!

Strictly speaking more modern MPS compliant OSes like Linux *should* run on these, but I don't think anyone's tested it in over 20 years.

I know Linux only dropped support for the 386 some years ago specifically because maintaining its SMP library was a hassle. There are (or have been) Linux systems with thousands of distinct processors under a single kernel (so non-clustered).

It was more tightly coupled than a Beowulf cluster, but it was still a message passing architecture; each compute node had its own memory, there was no global memory pool, and each compute node only had two CPUs.

This is one of those things that I hear very hotly debated at work, e.g. "where it stops being a cluster and starts to be one giant SMP system". The folks who put it together couldn't even tell me for sure (although that may have been a combination of failing recollections over 20 years and not being quite sure what they can and can't talk about).
 
FWIW Unisys did a 12-CPU PPro, the Aquanta XR/6. Very little info out there. EDIT: There is a series of CPU Shack articles on the subject, which is more focused on the ALR 6x6: https://www.cpushack.com/2019/01/12/mini-mainframe-at-home-the-story-of-a-6-cpu-server-from-1997/

I had an HP dual Pentium at one point, but it's long gone.

As far as distinguishing between a cluster and an SMP system, NUMA single-image architectures stretch the definition here; I have a 30-CPU Altix 350 here (Itaniums at 1.5 GHz) with 54GB of RAM, but the non-uniform memory access architecture makes things interesting. Message-passing techniques are used, even though a single kernel image is running. Altix 3700BX2 systems could go to 512 CPUs and 6TB of RAM in a single system image running the IA-64 Linux kernel.

(further FWIW, NASA did a study on message-passing on clustered versus single-image systems.... https://www.nas.nasa.gov/assets/nas/pdf/techreports/2006/nas-06-012.pdf)

ALR certainly made a beast of a system and pushed the 450GX chipset beyond specification.
 
It was pointed out to me that a PC and gaming shop in North Vancouver at one point had an ALR 6x6 on display in the cafe. That would have been around 2009 or 2010, but they were not interested in selling it. I think the shop silently closed around 2013 or 2014, and nobody in my circle of friends, at least, ever saw where it and other artifacts from the shop ended up. I think if you dig really deep into the forums here you can see me asking about that system at the time, and there were quite a few people asking why bother, because 6x6's are very power hungry and were (not yet) all that interesting.
 
It was pointed out to me that a PC and gaming shop in North Vancouver at one point had an ALR 6x6 on display in the cafe. That would have been around 2009 or 2010, but they were not interested in selling it. I think the shop silently closed around 2013 or 2014, and nobody in my circle of friends, at least, ever saw where it and other artifacts from the shop ended up. I think if you dig really deep into the forums here you can see me asking about that system at the time, and there were quite a few people asking why bother, because 6x6's are very power hungry and were (not yet) all that interesting.
The only 6x6 I personally laid hands on was at a PC shop on Lexington Avenue in Asheville, NC, way back in the day, maybe 2000-2001. I needed a power supply on a Sunday afternoon for a radio station's automation computer, and this shop was the only one open. Found a good deal on a new Antec that day, and got that radio station back on the air. Never went back to that shop (at the time, Lexington in Asheville had a bit of a reputation), but never forgot the 6x6. (The shop was gone the next time I had opportunity to go on Lexington a few years later, and the street is much improved from those days).
 
This is one of those things that I hear very hotly debated at work, e.g. "where it stops being a cluster and starts to be one giant SMP system". The folks who put it together couldn't even tell me for sure (although that may have been a combination of failing recollections over 20 years and not being quite sure what they can and can't talk about).

Here's a PDF that goes into pretty great detail about the ASCI Red architecture. The compute nodes are fairly conventional dual Pentium II systems with their own relatively standard northbridge setup, PCI bus, RAM, and boot support. The secret sauce is the crazy-fast crossbar NIC, which allows transferring traffic at close to RAM speed between nodes. So... effectively it's *close* to a NUMA single-image computer in terms of capability, but as described in the PDF it relies on the compiler interfaces to explicitly coordinate the data passing between nodes; it's not a real shared-memory architecture.

A Beowulf cluster effectively does similar things, but because the bandwidth is much lower between nodes it's a little harder to pretend it's all one machine.
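The distinction being drawn above can be made concrete with a toy model (this is purely illustrative Python, not ASCI Red's actual programming interface): each node owns its memory privately, and the only way data moves is an explicit send that copies it across the interconnect.

```python
from queue import Queue

class ComputeNode:
    """Toy model of a message-passing compute node: private memory,
    communication only via an explicit send/receive channel. There is
    no shared memory pool, as on ASCI Red or a Beowulf cluster."""

    def __init__(self, name: str):
        self.name = name
        self.memory = {}       # private to this node
        self.inbox = Queue()   # stand-in for the interconnect NIC

    def send(self, other: "ComputeNode", key: str, value) -> None:
        other.inbox.put((key, value))   # data must be copied across

    def receive(self) -> None:
        key, value = self.inbox.get()
        self.memory[key] = value        # lands in the receiver's own RAM

a, b = ComputeNode("node0"), ComputeNode("node1")
a.memory["partial_sum"] = 42
a.send(b, "partial_sum", a.memory["partial_sum"])
b.receive()
print(b.memory["partial_sum"])   # a copy, not shared storage
```

The faster the "inbox" is relative to local RAM, the easier it is for the compiler and runtime to pretend the whole thing is one machine, which is exactly the spectrum from Beowulf through ASCI Red toward true single-image NUMA.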
 
I know Linux only dropped support for the 386 some years ago specifically because maintaining its SMP library was a hassle.

There wasn't explicitly SMP code *for* the 386, but my vague understanding is that maintaining compatibility with it was a hassle because the 486 and up added mechanisms to streamline getting into Protected Mode, plus enhancements to the MMU to help the CPU mark (for other CPUs) that a cached memory location needs to be invalidated. Here are the diffs removing it.
There are (or have been) Linux systems with thousands of distinct processors under a single kernel (so non-clustered).

My comment was simply that I don't think anyone's tested Linux on an SMP system using less than Pentium Pro CPUs for a very long time. SMP support was added with Linux 2.0 with two architectures, Sun Hypersparc and MP1.1/1.4 compliant 486 and higher Intel x86. (The OOOOLD Linux SMP-HOWTO has a quote from Alan Cox saying it had been tested with up to 4 CPUs, and notes that in *theory* MP supported up to 16, so if you had such a monster it *should* work on that.) For laughs here's a 2001 post on the linux kernel mailing list where someone responds to Alan Cox voicing doubts that anyone actually ran Linux on an MP-compliant 486 by saying he knows of two in the whole world, who "mostly do it to be awkward", which gives you a pretty good idea how rare hardware like that was.

Support for really huge NUMA architectures might have started appearing in Linux as early as 2.2, but I don't think it was really completely baked until 2.6? The AMD Opteron was one of the first "prosumer" x86-adjacent architectures that were cache-coherent NUMA. (Intel MP relies on all CPUs having equally shared access to common memory. Opteron had each socket controlling a bank of memory, so if a process running on one multi-core socket needed shared memory from the other socket it had to jump through hoops, which was a good motivation to integrate a smarter scheduler that optimized processes to use their more local memory.) I'm pretty sure the multi-socket Xeons are also NUMA today? Need to refresh my memory on that point.
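The local-versus-remote distinction a NUMA-aware scheduler has to care about can be shown with a toy cost model. The latency numbers below are invented round figures for illustration, not measurements from any Opteron or Xeon:

```python
# Toy ccNUMA cost model: each socket owns a bank of memory; touching the
# other socket's bank costs extra interconnect hops. Numbers are made up.
LOCAL_NS, REMOTE_NS = 100, 300

def access_cost(cpu_socket: int, page_socket: int) -> int:
    """Cost of one memory access from cpu_socket to a page on page_socket."""
    return LOCAL_NS if cpu_socket == page_socket else REMOTE_NS

def run_cost(accesses) -> int:
    """Total cost for a list of (cpu_socket, page_socket) accesses."""
    return sum(access_cost(c, p) for c, p in accesses)

# A process pinned to socket 0 touching 1000 pages:
naive      = [(0, p % 2) for p in range(1000)]  # pages striped over both banks
numa_aware = [(0, 0)] * 1000                    # allocator kept pages local

print(run_cost(naive), run_cost(numa_aware))
```

With half the pages remote, the naive placement costs twice as much in this model, which is the kind of gap that motivated the scheduler and allocator work that landed around the 2.6 era.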
 
Whee, brings back memories.... Ran Red Hat of some vintage (7.2, maybe?) on an HP Netserver LH Pro dual Pentium 166. It was old even then.

Also ran Debian on a DEC Alpha server 2100, four 275MHz Alphas and 1GB of RAM, not bad for a 1990's system.

Sequent made some really large multiprocessor systems. Here's the Wikipedia paragraph on the largest:

"Symmetry 2000/700

The S2000/700 ran either DYNIX/ptx or DYNIX 3. It featured up to 30 25 MHz Intel 80486 processors, each with a 512 KB cache. It also supported up to 384 MB of RAM, up to 85.4 GB of disk storage, and up to 256 direct-connected serial ports."
 
Not to change the subject, but the cost of p-pro overdrives is nucking futs. I still need to get two more for my system, and at epay prices, i'll need to win the lotto to get them.
 
According to this archived LinuxToday article from 2004, SGI's ProPack was running on Altix with up to 256 CPUs in NUMA SMP with Linux 2.4.21.

I had some contacts working at SGI during that period; the scuttlebutt was all about how they were going all-in on Linux and contributing a lot of work to the "scaling it up to work with really big things" bucket.

(There was also a lot of sour grapes about MIPS being left to die on the vine, since all that big scaling stuff was essentially being ripped off of IRIX/Origin to be bolted onto Linux. Kept hearing rumors about an amazing low-priced MIPS-powered dingus that was totally going to turn the price/performance ratios around but management was refusing to run with it, but never got to see this animal with my own eyes. At this point I probably can't get anyone in trouble by vaguely scratching my head and volunteering the code name was something like "Banana 2000"?)
 
I don't know what's up with the prices on ppro overdrives; were they just rare, or what?

Also can't remember for sure if the 6x6 will take them. I recall people trying in the early 2000s but cannot recall the results.
 