
Running PowerMac G5 Without Radiator Fans

I liked the G5s. They were a mechanical refresh of the plastic cases Apple had run up until then, and understanding that the liquid cooling and aggressive fan control was another one of Steve's potential résumé-generating demands, the option of either cooling system was neat. Very expensive, but neat. I used one at a former job until 2012.
It's just incredibly fragile. I can't speak for when they were new, but by the 2010s they had a horrible mortality rate. I can't blame the RoHS issues on Apple too much because that was an industry-wide problem (thanks, Europe). The O-ring and fungal issues in the LCUs were also more or less engineering mistakes (I highly doubt the O-ring size issue was intentional), and the capacitors, again, were something not exclusive to Apple at the time.
 
Liquid-cooled machines happened after I had left, and they WERE a kludge.
The G5 was the first Apple machine with aggressive PID cooling designs. A lot was learned.
There must have been something similar, but I'm not aware of anything like it in the Intel space; they just threw fans at these things running hotter than a lightbulb.

Today, power/thermal/clock management is all tightly integrated.
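For the curious, the closed-loop fan control mentioned above can be sketched as a toy PID loop. This is purely illustrative; the gains and the thermal model here are invented, not Apple's actual firmware values:

```python
# Toy PID fan-control loop, in the spirit of the G5's thermal management.
# All constants (gains, thermal model, RPM limits) are made up for
# illustration; Apple's real firmware parameters are not public.

def run_fan_pid(setpoint=60.0, start_temp=85.0, steps=600):
    kp, ki, kd = 200.0, 10.0, 400.0   # hypothetical PID gains
    heat_in = 2.0                     # deg C added per tick by the CPU
    cooling = 0.0005                  # deg C removed per tick per fan RPM
    temp, integral = start_temp, 0.0
    prev_err = temp - setpoint
    for _ in range(steps):
        err = temp - setpoint                                 # positive = too hot
        integral = max(-5000.0, min(5000.0, integral + err))  # anti-windup
        deriv = err - prev_err
        prev_err = err
        rpm = 1000.0 + kp * err + ki * integral + kd * deriv
        rpm = max(500.0, min(6000.0, rpm))                    # fan's physical limits
        temp += heat_in - cooling * rpm                       # crude thermal model
    return temp
```

The integral term is what lets the loop settle the fans at exactly the speed needed to hold the setpoint, instead of bouncing between "too hot" and "too loud" like simple threshold control does.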
 
The statement that "These were machines that were built out of desperation" is something someone with no knowledge of their design process and an Apple hater would say.

That's an interesting thing to say in light of:

Moto had left the market for embedded PPC, IBM would do the same...

I have zero inside knowledge of the situation, but I've read multiple articles that have described Apple as being an absolutely terrible customer for pretty much the entirety of the AIM Alliance's existence, which ultimately created a situation where nobody wanted to invest in creating a performance-competitive desktop/laptop-focused CPU when there was only one customer for it. I'm not going to go scrape up a full account of the drama to paste in here, and I'm sure there are counternarratives out there, but the long and short of it is that the PPC 970 is very commonly depicted as a rush job to glue AltiVec (which Apple's marketing had gone all in on, forcing the company to rely solely on Motorola as the supplier for all their high-end machines) onto IBM's server CPU, which IBM had never intended to be installed in affordable desktop-friendly cases.

Sure, I know the Pentium 4 was also a huge power hog and on paper might look just as bad, but Intel actually invested a lot in things like incredibly aggressive power management and packaging strategies that made it a lot easier to build "acceptable" machines out of it without the level of heroic, high-priced engineering the G5 needed. Maybe if the performance had really justified the investment the case for them would be easier to make?

For a while I had an Xserve G5 in my care, and from an engineering standpoint I have no problem at all saying it was an incredibly impressive machine; my comparison to a supercar was not overheated in the slightest. The *huge* bank of fans, the intricate monitoring system, etc., all very impressive. But... it didn't actually run any faster than a Supermicro made out of plastic and tinfoil that probably cost a quarter as much to stamp out. (Nor was it any quieter, really, but it gets a pass because it was meant to live in a server room.) Now it's perfectly possible Apple was still managing to get as much or more profit out of each sale because their base prices were so much higher, but... eh. You can still buy a tinfoil Supermicro.
 
but Intel actually invested a lot in things like incredibly aggressive power management and packaging strategies that made it a lot easier to build "acceptable" machines out of it without the level of heroic, high priced engineering the G5 needed. Maybe if the performance had really justified the investment the case for them would be easier to make?

Intel didn't invest almost anything in power management in the Pentium 4, that came extremely late in the life of the Netburst architecture. They were too caught up in trying to push Netburst to 10 GHz, and the wailing from literally everyone about the power consumption and heat output fell on deaf ears until Prescott and Cedar Mill, when clock modulation was added; even that really didn't change the power consumption much because of the grossly inefficient design. They also never added clock modulation or power-saving features to *any* Celeron in the Netburst era, nor to the Celeron M, leaving users with horrendous battery life, as low as 30 minutes.

My dad bought me an Inspiron B130 laptop new in 2006. It came with a Celeron M 1.5, and new out of the box the battery life just barely squeaked past 30 minutes. It wasn't until I installed a Pentium M 2.13 that my battery life went to around 2.5 hours, because it could ramp down its core clock. I also recently bought an old HP Compaq NX9600 laptop that has a Pentium 4 in it. It has no clock modulation support and it uses a horrendous amount of power. I unfortunately can't test the battery life because one of the cells vented from shorting out when I tried to charge it; curiously, the pack has no lithium battery protection. I only knew something was amiss when the laptop started giving off that characteristic green-apple candy smell, and I found one of the cells was thermonuclear hot. I ejected that battery and threw it out in the yard in case it wanted to explode; it never did, fortunately.

It wasn't until the Core 2 era and the abandonment of Netburst that Intel finally got clock ramping and proper power-saving features. They can thank their lucky stars that a small team somewhere in Israel kept working on the Pentium M core, which let them squeak by during the disaster of Netburst. Too bad they don't seem to have anyone like that around now to save them again.
 
I've now worked on Apple machines for a long time. I started on a 7100/80, later a G3, G4, G5, Intel Power Mac, and now an M2 chip. I still have all those machines. Only one G5 model I owned did not survive, and it was also liquid cooled: an Apple Power Macintosh G5/2.5 DP (PCI-X), A1047... It was the first Apple I ever owned that broke. What was the problem? The liquid cooling started to leak, and the lines were above the PSU, so the PSU caught fire from the coolant leaking into it. Glad this happened while I was working and not during the night.
 
Intel didn't invest almost anything in power management in the Pentium 4, that came extremely late in the life of the Netburst architecture.

I wasn’t talking about efficiency, I’m talking basic fault tolerance. Even the first models of the Pentium 4 could not only survive, but throttle enough to keep going *if you ripped the heatsink off while it was running*.

I’m by no means a fan of NetBurst. Yes, it was a wasteful design *on purpose* just to push up the maximum clock speed regardless of efficiency, but… the fact that Intel was able to build these firecracker-hot chips into packages that could tolerate the kind of treatment they did does in fact demonstrate a level of engineering competence that needs to be acknowledged. If you stuff a Pentium 4 into a cramped, poorly ventilated case it’ll run like garbage, but it probably won’t be the first thing that melts. It is *very easy* to inadvertently destroy a PPC970, and the engineering of the G5 machines is as complex as it is as a result.

… anyway, why are we talking about laptops? G5 laptops were never a thing. Pentium 4 laptops could at least exist. (And yes, they’re almost uniformly huge bricks that Steve Jobs would never *allow* to exist, but there were a few marginally not terrible ones, like the Thinkpad T30. That’s a 2002 machine and its power management skills are frankly miraculous.)
 
Fault tolerance has nothing to do with efficiency lol. Even the late Coppermine PIII chips were able to flood the CPU pipeline with NOPs to stave off destruction, but nothing before that could. Katmai and older chips WILL self-destruct if the CPU fan fails and/or there's no heatsink. There's nothing miraculous about adding support for that; AMD did it with the Athlon 64 in 2003. Motherboards from years before that had an option for thermal shutdown in the BIOS, but that was just it, an option, and it was almost always off by default. I always set it as low as possible and it saved many Durons/Athlons/PIIIs over the years.

That old article is erroneous in stating that clock speeds are lowered to save the CPU; they are not. All the CPU is doing is flooding the pipeline with NOPs, or "do nothing" instructions, to reduce the heat being generated. Pentium IIIs and Willamette Pentium 4s were multiplier locked and could not under any circumstances lower their own clock speed.
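To see why NOP flooding saves so much less power than real clock/voltage scaling, here's a toy calculation using the usual dynamic-power rule of thumb, P ≈ C·V²·f·activity. Every number below is invented for illustration, not a measured figure for any real chip:

```python
# Toy comparison: NOP flooding vs. real clock/voltage scaling (DVFS).
# Dynamic power scales roughly with C * V^2 * f * activity. NOP flooding
# only cuts the activity factor; DVFS cuts frequency AND voltage together.
# All constants here are made up for illustration.

def dynamic_power(cap, volts, freq_ghz, activity):
    """Rule-of-thumb dynamic power: C * V^2 * f * activity."""
    return cap * volts**2 * freq_ghz * activity

# Full-tilt hypothetical chip: 1.75 V at 2 GHz, pipeline fully busy.
FULL = dynamic_power(cap=10.0, volts=1.75, freq_ghz=2.0, activity=1.0)

# NOP-flooding throttle: same clock and voltage, roughly half the
# pipeline slots doing real work (NOPs still toggle some logic).
nop_flood = dynamic_power(10.0, 1.75, 2.0, activity=0.55)

# DVFS: halve frequency AND drop the core voltage, full activity.
dvfs = dynamic_power(10.0, 1.2, 1.0, activity=1.0)
```

Flooding the pipeline with NOPs leaves clock and voltage untouched, so the savings are roughly linear in the activity drop, while lowering voltage pays off quadratically; that's why SpeedStep/Cool'n'Quiet-style scaling saves far more power for a similar performance hit.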

The G5 also most definitely could have been implemented in a laptop; it was entirely Jobs's fault it never happened. The 970FX pulled 11W at 1 GHz and 48W at 2 GHz, which is almost HALF of the Willamette core at the same speed. He couldn't have had his modern wafer-thin laptop design, so thin that it compromises the structural integrity of the laptop, and the cooling. All MacBooks with Intel chips basically run at Tjmax (100°C) all the time and throttle to hell because Jobs didn't believe in vents.

The Thinkpad T30 used the Pentium 4-M, which was one of the few Pentium 4s that had actual clock-ramping support (the Mobile Pentium 4 being the other). It required a driver and OS support, and it was notoriously buggy. But such was early clock-ramping support; AMD had similar issues with their Cool'n'Quiet, which was notoriously difficult to get working properly and also required a driver and OS support. Neither was miraculous; they were kludges at best.
 
Is the implication here that there was a quad socket G5?

The quad used two 970MPs, which was a dual-core chip. This chip was also used in the last dual-core PowerMac G5s in a single socket. (All of these machines had PCI-E slots instead of the PCI-X slots of the earlier models.)
Fault tolerance has nothing to do with efficiency lol.

Which is what I said? Your nitpicking about NOP generators versus clock scaling misses the point entirely, which is that the G5 was missing consumer-friendly disaster-prevention features present in chips predating it by two to four years.

And, frankly, maybe it doesn't matter, because the G5 was only intended to be used by a *single* customer which exclusively built highly engineered sealed boxes with no user-serviceable parts inside, so IBM could leave it up to the integrator to protect the chip from catastrophic accidental damage with additional hardware. The point of this thread was asking if it was safe to run a G5 with parts of that safety net disabled.

The G5 also most definitely could have been implemented in a laptop, it was entirely Jobs's fault it never happened. The 970FX pulled 11W at 1 GHz and 48W at 2 GHz, which is almost HALF of the Willamette core at the same speed.

So... are you being intentionally misleading in what you're choosing to compare to what, or is it just happening accidentally? Willamette is the Pentium 4 circa 2000. The 2002 Northwood CPUs used in early Pentium 4-M laptops like the T30 have TDPs of 35 watts at 2.2 GHz, and they go as low as 20.8W TDP for the 1.40 GHz model. And, let it be noted, all Pentium 4-Ms have Enhanced SpeedStep; it is, alas, very true that Intel used to screw you *very hard* if you bought the Celeron version. You never buy the Celeron version. Ever.

Anyway, even pointing out that Intel had lower power draw than the 970FX two years earlier still misses the even *bigger* point: in the laptop market Apple wasn't competing with 2002 Northwood CPUs with 35-watt TDPs, it was up against Dothan CPUs with a 27-watt TDP at 2 GHz that, real world, could run on about 7.5 watts at ~1.5 GHz. A Dothan at 1.5 GHz will run rings around a PPC970 at 1 GHz at just about everything, and there remains another interesting question: what supporting chipset would this theoretical two-years-obsolete-out-of-the-gate G5 laptop even use? Apple never completed a northbridge suitable for a laptop; the U3 used in all the non-MP desktops was a toasty little firecracker that barely worked in the iMacs, let alone a laptop even as chonky as a late-2002 Pentium 4-M machine.

So... yeah. I get it, if Apple were Alienware and didn't mind releasing a completely laughable product they could have shoved the logic board from an iMac G5 into a case with a handle and called it a laptop, but as much as I hate to admit it, Steve Jobs called this one right.

FWIW, I have one of the very last-gen 1.67 GHz G4 PowerBooks, with the DDR2 RAM, higher-res screen, etc.; a pretty beautiful machine but an absolute doorstop. Nonetheless it's pretty entertaining to note that, according to EveryMac.com, its Geekbench 2 score of 843 is only about 20% slower than the fastest score they have for a 2 GHz iMac G5. What can we take away from this? It seems to me that if you do the math, a PPC 970FX laptop running its CPU at a wattage in the ballpark of that 2002-vintage brick of an IBM laptop is going to be running substantially slower than that G4's 843 Geekbench score. Or heck, let's be super optimistic: say that through some absolute miracle we get it up to 1.8 GHz, giving it a Geekbench score of 1039. Unfortunately we still need to deal with the fact that a Thinkpad T42 that's actually lighter, and arguably sexier, than the G4 laptop, let alone this hypothetical G5 monstrosity, scores 1172 with the near-base-configuration 1.7 GHz Dothan.
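Putting those numbers side by side (the 843 and 1172 figures are the EveryMac Geekbench 2 scores quoted above; the 1039 is the optimistic hypothetical G5 laptop, not a real machine):

```python
# Quick ratio check on the Geekbench 2 scores discussed above. The first
# and last figures are from EveryMac as quoted in the post; the middle
# one is the hypothetical best-case 1.8 GHz G5 laptop.
scores = {
    "PowerBook G4 1.67 GHz": 843,
    "hypothetical G5 laptop @ 1.8 GHz": 1039,
    "ThinkPad T42, Dothan 1.7 GHz": 1172,
}

baseline = scores["ThinkPad T42, Dothan 1.7 GHz"]
# Percent deficit of each machine relative to the T42.
deficit = {name: round(100 * (1 - s / baseline)) for name, s in scores.items()}
# -> G4 trails by ~28%, the best-case G5 laptop by ~11%
```

Even granting the miracle 1.8 GHz part, the imaginary G5 laptop still trails a near-base-configuration T42 by double digits.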

In short, Apple was completely screwed as long as the G5, as it actually existed, was the only tool in their toolbox, and no amount of nitpicking about the Pentium 4 changes that. Maybe P.A. Semi's PWRficient CPUs might have miraculously saved the day, but Apple would have been stuck with nothing to offer for at least another year and a half. I guess we'll never know if P.A. Semi's CPU was even worth waiting for, given that so far as I'm aware the only machine it ever shipped in was a piddling number of Amiga X1000s, and just *try* to get benchmarks that translate in any comprehensible way to the rest of the world out of an Amiga fan, especially one who spent thousands of dollars on a thing they really want to believe in.
 