
IBM 1401 - UK

Never seen one in the UK. You could try TNMOC (Bletchley) or Cambridge Computer museum - they have some systems from that era.
 
It's going to take a budget and some dedicated workers to keep a 1401 going these days. Few museums have the budget for that. The old machines had what we'd consider to be a terrible MTBF these days. A system that's run for 20 years without maintenance of any sort isn't particularly unusual today, but back in the day, it was unthinkable.
Consider the work put in on CHM's 1401 to get it able to spin a tape...

1410, 1440 and 7010 systems must be even less common.
 
Team (would say guys but no idea of gender so team),

thanks for all the replies. Chuck, I would agree in terms of the time and effort in getting these things working, and it's a shame that those who had worked on them can't pass on practical history to the next lot. Having watched more than a few videos from CuriousMarc, you can see it's a big job and a large group effort to even get the thing running for their tours. It was after watching his videos that the question at the start was directed at 1401s rather than just early IBMs. There's also the spare space needed to move around, test, and get it going. A set of "spares" from other machines to get one working is another consideration.

Dave, great website, I had not come across it before. I had heard about the 1130 at TNMOC but am pleased to see some progress has been made on getting it fully back up and running.
 
Think of what it would take to run some of the really big iron, e.g. a CDC 6600: MG sets with wiring, chilled water supply...
Mixed 400Hz 3-phase power, 208V, etc. And lots of cables.
And we're not discussing peripherals.

I recall that when the power went down (usually some construction outfit hitting a live cable) at SVLOPS, it generally took the CEs most of a day to bring a system back up--if they were lucky. Those old systems were made to never be turned off.
 
The Living Computer Museum had a CDC 6500 running for a long time. They spent a lot of time just getting the cooling systems to work.
 
What are we thinking the kWh usage a day would be for these things?
Well, if you run it 24 hours a day, it's the power draw in kW times 24 to get kWh per day. There is a Physical Planning Manual here:


which lists the power needed for various configs. Looks like from 2 kW up to 10 kW for a big system. So for a medium system, say 5 kW, at current UK electricity prices (roughly £0.30/kWh) that's around £1.50/hour, £36/day, or something like £1,000/month.
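
For anyone who wants to plug in their own numbers, here's a minimal sketch of that arithmetic. The 5 kW draw and the roughly £0.30/kWh tariff are just assumptions behind the figures above and will vary with the config and the supplier.

```python
# Back-of-the-envelope running cost. Both inputs are assumptions:
# 5 kW is a guessed "medium" 1401 config from the planning manual range,
# and £0.30/kWh is roughly the UK tariff implied by the £1.50/hour figure.
power_kw = 5.0
tariff_gbp_per_kwh = 0.30

cost_per_hour = power_kw * tariff_gbp_per_kwh    # ~£1.50
cost_per_day = cost_per_hour * 24                # ~£36
cost_per_month = cost_per_day * 30               # ~£1,080

print(f"£{cost_per_hour:.2f}/hour, £{cost_per_day:,.0f}/day, £{cost_per_month:,.0f}/month")
```

And that's just the mains draw for the CPU cabinets, before any peripherals or cooling.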
 
It's going to take a budget and some dedicated workers to keep a 1401 going these days. Few museums have the budget for that. The old machines had what we'd consider to be a terrible MTBF these days. A system that's run for 20 years without maintenance of any sort isn't particularly unusual today, but back in the day, it was unthinkable.
Consider the work put in on CHM's 1401 to get it able to spin a tape...

1410, 1440 and 7010 systems must be even less common.

I only just noticed this comment. On this occasion it's purely a coincidence that I'm mentioning the Honeywell 200 on a thread about the IBM 1401, but the chap who was probably the last working H200 field engineer in the world told me that the H200 machines he kept running up until the year 2000 needed almost no maintenance except for replacing cooling fans when their bearings wore out and cleaning the air filters. In fact the machines were only replaced then because their software wasn't year-2000 compliant.

Perhaps natural selection has an effect if one keeps enough machines running for long enough and uses the parts from all the scrapped machines to maintain the ones still running, as his company did, buying up every unwanted H200 on the market for years. Eventually the last machines in use are likely to contain the components that are the most durable of their kind. If Darwin and Babbage had got together they could have come up with a theory of survival of the fittest computers, although most Victorian constructions were so over-engineered that they seem to last virtually forever anyway.

In contrast yesterday we had a service engineer come to check over a microwave cooker we bought just three weeks ago because it wasn't delivering the specified power. He was mystified by the fault given that there are so few components making up the magnetron circuit in these devices, but on reflection the reduction of power and erratic power levels remind me of the behaviour of the old thermionic valves (er, tubes) when the oxide coatings on their cathodes started failing. Maybe it was caused by a faulty magnetron cathode then. We're getting a refund anyway.

Early computers such as Colossus, using many thermionic devices (whatever you choose to call them), were likely to fail often because of the combined effect of all those individual MTBFs. Tommy Flowers, the designer of Colossus, said that failures could be substantially reduced by never turning the machine off, but I don't think that's a practical solution with the magnetron in a microwave cooker.
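
As a rough illustration of that point, here is a minimal sketch under the usual assumption of independent failures in series (any one valve failing stops the machine). The valve count and per-valve MTBF below are illustrative guesses, not Colossus measurements.

```python
# With N devices in series and independent exponential failures,
# the system MTBF is roughly the per-device MTBF divided by N.
# Both figures below are illustrative assumptions.
valve_count = 2400            # roughly the figure usually quoted for Colossus Mk 2
valve_mtbf_hours = 100_000    # assumed per-valve MTBF when run gently and left on

system_mtbf_hours = valve_mtbf_hours / valve_count
print(f"System MTBF ~ {system_mtbf_hours:.0f} hours between failures")
```

Even with generous per-valve figures, a machine with thousands of valves ends up failing every day or two, which is why avoiding thermal cycling by leaving the heaters on mattered so much.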
 
My old boss of many years ago worked with SAGE during his Army years. He said the vacuum-tube implementation had a couple of advantages over solid-state logic. One was fault-finding by looking for heaters that weren't glowing; the other was a place to keep your bag lunch warm...
 
Early computers such as Colossus, using many thermionic devices (whatever you choose to call them), were likely to fail often because of the combined effect of all those individual MTBFs. Tommy Flowers, the designer of Colossus, said that failures could be substantially reduced by never turning the machine off, ...
In addition to avoiding thermal shock to prolong MTBF, I believe that ENIAC pioneered the idea of lowering the heater voltage.
 
The 1401, 1620, etc. were small-scale systems. My experience with the large-scale systems (mostly CDC) was that if the power went out, restarting the system and peripherals could take much of a day. Small blips in the power, of course, were ironed out by the inertia of the large MG sets powering them. But then, these systems were designed to squeeze every last FLOP out of the hardware, which was often quite complicated.

MTTR for later large systems could be huge--the ETA 10 required that the liquid nitrogen be drained and the CPU brought up to room temperature slowly before repairs. Minimum MTTR was something on the order of 8 hours.

Things are much different today--an inexpensive widget the size of a Raspberry Pi can run continuously for years, if cooling is observed. Even on desktop and laptop PCs, memory and storage failures are rare. The main reason for discarding a system today is obsolescence, not non-functionality. If a system fails to operate, the first suspicion is a software problem.
 
Things are much different today--an inexpensive widget the size of a Raspberry Pi can run continuously for years, if cooling is observed. Even on desktop and laptop PCs, memory and storage failures are rare. The main reason for discarding a system today is obsolescence, not non-functionality. If a system fails to operate, the first suspicion is a software problem.
And second. Possibly third as well. Ye olde surreptitious Microsoft "update". I'm holding on to Win10 (no more BSODs these days) as long as possible because I'm still wedded to MS Access and Outlook. Otherwise I'd move to some Linux flavor. October 14, 2025 (end of Win10 support) is going to be a painful moment.
(But I've gone way off-topic!)
 