
What are the biggest operational vintage computers?

Roland Huisman

The message from Tom Hunter got me wondering: what are the biggest vintage computers still in operation in a museum? Or maybe still in use in their original application? Or fully complete, including cooling systems, so that they could run once fixed? By vintage I mean approximately 25 years and older.

Just wondering :)

Regards, Roland
 
Well, CHM, I believe, has a fully functional 1401; Paul Pierce may have something older, but I don't know whether it's operational. There was a German fellow who had an operational Univac Solid State 80, but I can't find him on the web anymore. There are a couple of Royal-McBee LGP-30s still alive.
 
I thought that the Bletchley Park Colossus was a replica of the original. Maybe you're thinking of the Harwell Dekatron at BP, which is the original and still working?
 
I thought that the Bletchley Park Colossus was a replica of the original. Maybe you're thinking of the Harwell Dekatron at BP, which is the original and still working?
It is a replica, more or less, but it works. A few of the real old-timers were rounded up to reconstruct the thing. Some of the original notes were on toilet paper; most of the work was done from memory. I have a book on this.
 
The message from Tom Hunter got me wondering: what are the biggest vintage computers still in operation in a museum? Or maybe still in use in their original application? Or fully complete, including cooling systems, so that they could run once fixed? By vintage I mean approximately 25 years and older.

Just wondering :)

Regards, Roland
Oh well, "biggest" is relative. Biggest in terms of speed, physical size, or number of units sold?

As to biggest in terms of speed, the CDC 6000 and CYBER series were "supercomputers" in their day (mid to late 60s and early 70s). Later the CRAY computers ruled in terms of raw speed (70s).

John Zabolitzky of cray-cyber.org in Munich/Germany still has a working CDC CYBER 180/960 (https://cray-cyber.org/systems/cdc-cyber-180-960/) and a CRAY Y-MP EL (https://cray-cyber.org/systems/cray-y-mp-el/). I haven't talked with John for a while, but I believe his museum is still in existence but stalled due to Covid-19.

The original CDC 6000 and the nearly identical CYBER architecture were designed by Seymour Cray and were architecturally amazing for their time (1964). There was a 60-bit CPU for computation; between 10 and 20 Peripheral Processors (PPs) for operating system functions and I/O, operating on 12-bit data and with full access to CPU memory; and up to 256K 60-bit words of CPU memory, which could be accessed in a 16-way overlapped fashion. The CPU processed instructions in parallel across multiple functional units, using a "scoreboard" to schedule the instructions and delaying them when necessary to wait for inputs from previous instructions. It had a 16-word cache - enough to run small loops entirely from the cache. It also implemented what would now be called a RISC-type instruction set in both the CPU and the PPs. These machines were powerful.

Add to this raw power an operator console driven by a dedicated PP with access to CPU memory, which allowed you to monitor and modify operating system data structures: jobs, scheduler and I/O queues, PP assignment, disk activity, tape activity, CPU memory contents, and so on. The US government labs dealing with nuclear weapons research loved the 6000 and CYBER series machines.
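To make the scoreboard idea a bit more concrete, here is a minimal toy sketch in Python. It is not real 6600 behaviour or timing: the functional units, register names, and latencies are made up purely to show the scheduling rule (an instruction waits if its inputs come from a still-running instruction, while independent work proceeds on other functional units).

```python
# Toy sketch of scoreboard-style scheduling; not real CDC 6600 timing.
from collections import namedtuple

Instr = namedtuple("Instr", "unit dest srcs latency")

# Made-up example program (hypothetical registers, units, latencies).
program = [
    Instr("multiply",  "X0", ("X1", "X2"), 10),  # X0 = X1 * X2
    Instr("add",       "X3", ("X0", "X4"),  4),  # needs X0, must wait
    Instr("increment", "X5", ("X6", "X7"),  3),  # independent, overlaps
]

pending = {}     # register -> cycle when its pending result is available
unit_free = {}   # functional unit -> cycle when it becomes free
cycle = 0        # next in-order issue slot

for ins in program:
    issue = max(cycle, unit_free.get(ins.unit, 0))                 # unit conflict
    start = max([issue] + [pending.get(r, 0) for r in ins.srcs])   # wait for inputs
    done = start + ins.latency
    pending[ins.dest] = done
    unit_free[ins.unit] = done
    print(f"{ins.unit:9s} -> {ins.dest}: issue {issue:2d}, "
          f"start {start:2d}, result {done:2d}")
    cycle = issue + 1

# The add stalls until the multiply delivers X0, while the independent
# increment starts right away on its own functional unit.
```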

Sorry - I think I got a bit carried away by my enthusiasm for the old CDC CYBER and 6000 series, but they were the highlight of the first 10 years of my career. :)

Nevertheless I will never own, operate, or maintain a CDC CYBER. Apart from availability (zero), the cost of power, air conditioning and maintenance is prohibitive (unless your name is Bill Gates).

Paul Allen's LCM acquired and restored a CDC 6500. Sadly the LCM has gone down the toilet since Paul died, his sister Jody took over, and Covid-19 hit. Not many people interested in old mainframes have the kind of resources Paul Allen had. With Paul and all the employees gone, the LCM is unlikely to start up again.

Tom Hunter
 
As a one-time CDC employee, I have to say that while the 6000/7000/Cyber series were "largish", really large would describe the CDC STAR-100, a vector processor: 256 64-bit general registers, 512-bit data pipes, pipelined functional units, bit-addressable with virtual memory. I spent a couple of years developing for that system, then, about a decade later, for the liquid-nitrogen-cooled ETA-10.
Attributing the 6400 and 6500 (dual CPU) to Seymour Cray isn't precisely correct. IIRC, they were largely the work of Jim Thornton. The 6600 was Seymour's baby. The 6400 and 6500 were the "economy" versions of the architecture without the fancy speedup features. I can remember hand-scheduling code for the 6600 (issue, read operands, result available, that sort of thing); the 6400/6500 was strictly serial execution.

Seymour was a bit strange. One of my co-workers at CDC recalls spending a January in a car parked outside of Seymour's lab in Chippewa Falls (WI--Cray was afraid that the Twin Cities would be a target for a nuclear strike) passing code snippets to Seymour's daughter at the door. Cray didn't want anyone inside his lab, apparently.
 
Seymour was a bit strange. One of my co-workers at CDC recalls spending a January in a car parked outside of Seymour's lab in Chippewa Falls (WI--Cray was afraid that the Twin Cities would be a target for a nuclear strike) passing code snippets to Seymour's daughter at the door. Cray didn't want anyone inside his lab, apparently.
If you are a genius, then you are allowed to be a bit strange. :)

Seymour Cray hated the corporate suits who interfered with his work. This was the only reason for moving the lab to Chippewa Falls with the full support of Bill Norris. There were a lot of fairy tales about Seymour, only some were factual. :)

He designed and built the 3 fastest computers of their time: the 6600 (1964), the 7600 (1967) and the Cray-1 (1975). Nothing else came even close in terms of speed.
 
The STAR-100 (1969) could easily beat a 7600 on real-world math-intensive problems--and went head-to-head in benchmarks with the Cray-1. The problem was that scalars were performed as vectors of length 1 in the hardware, so there was a lot of latency. That was fixed in, I believe, the Cyber 203. In any case, the STAR was much more expensive than a 7600 by the time you were done with the I/O peripherals. There weren't many 100s installed: Lawrence Livermore, NASA Langley--and of course Arden Hills--come to mind.
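A tiny back-of-the-envelope model shows why "scalars as vectors of length 1" hurts so much. Assume a vector instruction pays a fixed pipeline-startup cost and then delivers one result per cycle; the startup number below is purely illustrative, not a measured STAR-100 figure.

```python
# Toy model of vector startup cost vs. vector length (illustrative numbers only).
STARTUP = 100   # hypothetical startup cost per vector instruction, in cycles

def cycles(n):
    """Cycles to produce n results with a single vector instruction."""
    return STARTUP + n

for n in (1, 10, 1_000, 65_000):
    print(f"vector length {n:6d}: {cycles(n):7d} cycles "
          f"({cycles(n) / n:7.2f} cycles per result)")

# Length 1 pays the whole startup for a single result, which is why scalar
# code crawled; long vectors amortize it down to roughly 1 cycle per result.
```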
 
The message from Tom Hunter got me wondering: what are the biggest vintage computers still in operation in a museum? Or maybe still in use in their original application? Or fully complete, including cooling systems, so that they could run once fixed? By vintage I mean approximately 25 years and older.
Roland's question was about the biggest machines still in operation, or in good enough condition that they could be put into operation. Neither the STAR-100 nor the ETA-10 qualifies, as none exist anymore.
Sadly most old mainframes have fallen victim to the gold-extraction mob.

Operating any of the large old mainframe systems is a serious challenge unless you are a tech billionaire. The physical size, weight, power and cooling requirements of just the mainframe are a huge hurdle. Add to that the power and cooling requirements for the peripherals and their controllers, the false floor, the motor-generator, etc.

Even if you have the system fully working today, it will require ongoing maintenance, repairs and spare parts.

Realistically it is near impossible to keep the big iron running long term unless you have very deep pockets like Paul Allen had, so that you can call up a manufacturer and ask them to re-make something that hasn't been produced for half a century, no matter what it costs.

The sensible alternative is to run an emulator of the old system. For the IBM System/370 that is "Hercules", and for the CDC 6000 and CDC CYBER series that is "Desktop CYBER".
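For anyone who hasn't seen one, Hercules is driven by a plain-text configuration file; below is a rough sketch of what an S/370 configuration can look like. The device numbers and file names are placeholders I made up for illustration, so check the Hercules documentation for a real, working setup.

```
# Rough sketch of a Hercules configuration file for an S/370 guest.
# Device numbers and file names are illustrative placeholders only.
ARCHMODE  S/370
CPUMODEL  3033
MAINSIZE  16
NUMCPU    1
CNSLPORT  3270

# devnum   type   argument
000C       3505   jcl/sysgen.jcl
000E       1403   prt/prt00e.txt
0010       3270
0148       3350   dasd/mvsres.148
```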
 
I'm not aware of any emulators for the STAR/ETA systems, are you?

This does bear down on the issue of what constitutes a large machine. For example, the IBM Z-systems are quite large and infinitely more computationally powerful than any of the old iron--and they're very much in use. The Cray Trinity at LANL is modern and large, as is LLNL's Sierra. Of course, none of these are uniprocessor systems; those pretty much died out in the 1980s.

For "oldest, still operating and not a replica", my vote goes to the Harwell machine.
 
There were very few STAR-100 or ETA-10 computers shipped, which means there is not a large audience who would appreciate an emulator. Also, unlike the /370 or CYBER, there is very little documentation and no software available. This all makes it difficult and not overly rewarding for anyone considering writing an emulator. So, as far as I know, there are no emulators for either.
 
I'm not all the way through the article (online mini-book?), but this information is an awesome look into the business and engineering challenges of a fledgling supercomputer company. Thank you for posting it!
You're welcome. The table described in Addendum 4 was indeed at ADL in Arden Hills when I visited during the STAR days. The comments about the various executives were spot-on, as I had worked with many of them in CDC SSD. When I was a CDC employee, CDC sported no less than 128 vice-presidents. It's not hard to see what went wrong in retrospect.
 