
Behold ATLAS, the fastest computer of 50 years ago

My interest in computers was fuelled by reading the book Faster Than Thought by B. V. Bowden, which was published in 1953. It covers the relationship between Ferranti and Manchester University at that time but obviously not the Atlas. I acquired the book in 1961 and it still has pride of place on my bookshelf. The frontispiece is a portrait of Ada Augusta, an appropriate starting point for the history of English computing. In the period covered by the book high speed memory consisted of CRT stores, mercury delay lines and magnetic drums, so Atlas was an enormous leap forwards as the video states.
 
I think if you check your sources, the IBM 7030 "Stretch" was the fastest computer of 1962.

To quote Wikipedia:

"The first Atlas was officially commissioned on 7 December 1962, and was considered at that time to be equivalent to four IBM 7094s[1] and nearly as fast as the IBM 7030 Stretch, then the world's fastest supercomputer."

At any rate, neither distinction lasted long. The CDC 6600 arrived in 1964 and blew them both away.
 
I think it's open to debate. Certainly in straight-line execution Stretch was faster, but for matrix manipulation, where there was a lot of looping, Atlas was faster. Apparently Stretch pre-fetched instructions on the assumption that code would continue in a straight line, so when a branch occurred it lost the contents of the pre-fetch, whereas Atlas assumed any conditional branch would be taken, and so in looping code it was faster.

Later, MU5 had a heuristic pipeline which started filling the pipeline after a branch depending on what happened the last time that branch was executed.
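The three strategies described above can be sketched as a toy simulation. This is purely illustrative, not a model of the real machines' prefetch hardware: it just counts mispredictions for a typical loop branch under "assume not taken" (early Stretch style), "assume taken" (Atlas style), and "repeat last outcome" (MU5-style heuristic).

```python
def mispredictions(outcomes, strategy):
    """Count mispredicted branches for a sequence of actual outcomes
    (True = taken). strategy is 'not_taken', 'taken', or 'last'."""
    misses = 0
    last = False  # initial guess for the history-based predictor
    for taken in outcomes:
        if strategy == "not_taken":
            predicted = False       # assume fall-through (early Stretch style)
        elif strategy == "taken":
            predicted = True        # assume the branch is taken (Atlas style)
        else:
            predicted = last        # repeat last outcome (MU5-style heuristic)
        if predicted != taken:
            misses += 1
        last = taken
    return misses

# A 10-iteration loop branch: taken 9 times, then not taken on exit.
loop = [True] * 9 + [False]
for s in ("not_taken", "taken", "last"):
    print(s, mispredictions(loop, s))
# not_taken misses 9 times, taken misses once, last misses twice
```

For loop-heavy code the "assume taken" and history-based predictors come out far ahead, which matches the claim that Atlas did better than Stretch on matrix loops.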
 
Branch prediction in the Stretch changed during its production. The first two systems assumed untaken branches; subsequent units looked at the current set of condition codes and used them as hints as to the branch being taken or not.

The 6600 didn't have condition codes; branches tested register contents directly, and the branch-on-register-contents instructions used the same reservation and shortstop mechanisms as other instructions. It also helped that the 6600 is a 3-address architecture.

When you were hand-optimizing code, you put some instructions (if possible) between the calculation of the result that would determine the branch and the branch itself. So, pre-fetching the operands for the next iteration of a loop was a very natural thing to do before the loop branch. You were very proud of working out short loops that you could write to have an issue every cycle and keep entirely in-stack (instruction cache).
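The scheduling trick described above can be sketched in high-level terms: start fetching the next iteration's operand before the loop branch, so the fetch overlaps the branch rather than stalling after it. This is a minimal sketch of the idea only; on a modern interpreter it changes nothing about performance, and the function name is made up for illustration.

```python
def sum_scheduled(a):
    """Sum a list, structured the way a hand-scheduler would: the
    operand for each iteration is fetched during the *previous*
    iteration, before the loop branch."""
    total = 0
    n = len(a)
    if n == 0:
        return 0
    nxt = a[0]                  # prime the "pipeline" before entering the loop
    for i in range(n):
        cur = nxt               # use the operand fetched last time around
        if i + 1 < n:
            nxt = a[i + 1]      # fetch next operand ahead of the loop branch
        total += cur
    return total

print(sum_scheduled([1, 2, 3, 4, 5]))  # prints 15
```

On a machine like the 6600 this ordering let the load's latency hide behind the branch and the add, which is exactly the kind of issue-every-cycle loop being described.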

Those were the days...
 
Good point. The headline statement was made in Britain and relativity demands that time and space have to be considered together, so the speed of the IBM 7030 in Britain would have to take into account the data (or physical computer) transfer rates across the Atlantic in 1962. According to the film Forbidden Planet there was a much faster computer on Altair 4 but that is presumably outside the scope of the subject, being far far away, so why not the IBM 7030 as well? Excuse my indulgence in reductio ad absurdum but that's the direction of this debate I suspect.

I think that Chuck is right about computer speed being defined ultimately by the skill of the programmer, not just the hardware, though. I recollect a suggestion at Honeywell to programmers using the Honeywell 200, that they could give an output instruction to a peripheral device before putting any data in the output buffer because it would take a while for the peripheral device to wake up and realise that it was supposed to be doing something. Yes, those were the days ...
 