
Best obscure mainframe OS?

MattCarp

Experienced Member
Joined
Sep 5, 2003
Messages
279
Location
Atlanta, Georgia (USA)
I was born a decade or two too late to really experience mainframe computing. I briefly did some IBM 3090 and System/390 stuff, and Tandem if that counts.

However, I'm more curious about the competing mainframes from the Seven Dwarfs or the BUNCH.

Specifically, I'd like to understand more about:

Burroughs' MCP
Univac's EXEC (now OS 2200)
Control Data's NOS/VE
NCR's VRX
RCA Spectra's VMOS


I found that Unisys provides desktop (Windows) emulation environments for modern versions of MCP and OS 2200.

I've also found the Desktop Cyber emulator for Control Data.

I haven't seen anything out there on the NCR or RCA machines.

I've also left off the GE > Honeywell > Bull GCOS operating system for no good reason. It just doesn't interest me that much.


Does anyone have any interesting comments that contrast these different environments?


Or, should I just recognize that these things get the market share that they deserve, and just spend more time with DEC / Unix OSes? :)
 

Al Kossow

Documentation Wizard
Joined
Sep 1, 2006
Messages
2,816
Location
Silicon Valley
I think VMOS became Sperry OS/3 when they bought the product line. Charlie Gibbs just sent bitsavers a bunch of manuals for that.

SDS/XDS CP-V wasn't a bad interactive timesharing system: https://en.wikipedia.org/wiki/Universal_Time-Sharing_System

I never liked UNIVAC EXEC-8 very much; it was a card-oriented operating system that barely tolerated terminal use.

People who came up through the PDP-10 et al. interactive terminal world were spoiled.
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
39,421
Location
Pacific Northwest, USA
There are/were probably more OSes than you could shake a stick at.
Just for CDC upper-level mainframes:

NOS, SCOPE, KRONOS, MACE, COS
ROVER, ZODIAC, TCM
STAR-OS / RED

I've probably forgotten a few.
 

whartung

Veteran Member
Joined
Apr 23, 2020
Messages
585
Well, I went from using PETs in High School to working with NOS on the school Cyber 730 in college.

Talk about a different computing experience.

There was a bunch of arcane stuff in NOS that just never seemed to apply to me, and NOS itself was never "explained". We were basically told how to use the individual programs (editor, compilers), but as far as architecture, there was no mention of that. The teachers didn't know the system, so they couldn't explain it to you.

There was some concept of "direct storage" and you would "attach" things to your session. Dunno if that was a sharing thing, or what that was.

There were proc files (essentially batch/script files), and entire libraries you could "attach" to make available to your session. There was a utility core maintained by students that we all shared.

Much to the chagrin of one student who was targeted: there literally was "If BOB then" code in these files.

There was the DAYFILE, which logged all of your commands. (The shared proc file kindly shipped Bob's dayfiles to the admins every day.)
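Working purely from fallible memory, those shared proc files were written in the Cyber's control language, and the prank would have been a conditional buried in the common setup procedure. Every name below is invented and the statement spellings varied between NOS versions, so treat this as a pseudocode sketch rather than something that would run as-is:

```
.PROC,SETUP.                   shared setup procedure everyone called
IFE,$USER$.EQ.$BOB$,PRANK.     string compare against a (made-up) user symbol
DAYFILE,DFILE.                 dump the session dayfile to a local file
ROUTE,DFILE,DC=PR,UN=ADMIN.    ship it off to the admins' queue
ENDIF,PRANK.
ATTACH,UTIL/UN=LIBRARY,NA.     make the shared student utilities available
```

Since the proc ran at the start of everyone's session, Bob never saw it happen.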

One group wrote a COMPASS (the 60-bit assembler for the Cyber) simulator, and used it to run all sorts of system programs that were ostensibly "privileged". Found numerous security holes that way.

It was an interesting system; I'm so grateful that this was my second major computing experience. Later, the school adopted PCs. I felt those students missed out on a lot by not working on a multi-user shared system. There are a lot of keen fundamental concepts you pick up working in that kind of environment that you'd never get on a standalone PC.

In the end, I preferred the RSTS/E PDP; it was just a bit friendlier.

I do wish I was able to better understand at the time what I was working with though.
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
39,421
Location
Pacific Northwest, USA
It helps to know the genesis of some of the CDC stuff. I'll try to be brief without being obscure (Brevis esse laboro, obscurus fio)

COS (Chippewa Operating System) was the first 6000 series OS. As the name suggests, it was the product of Seymour Cray's Chippewa Falls, WI (therein lies a bunch of stories) lab.
Next came SCOPE (never did figure out what it stood for), a more full-featured standard product.
At about the same time, a chum by the name of Greg Mansfield, along with Dr. Dave Callender (the "bat" guy) created a simple operating system suitable for customer engineers; it was compact and much simpler than SCOPE. It was dubbed MACE (Mansfield's Answer for Customer Engineers).
Because of its simplicity, it was repackaged and made to be more-or-less SCOPE-compatible from a user standpoint and found to be suitable for time-sharing applications. It was thus dubbed KRONOS.
There arose a bit of an east-west rivalry--the West Coast people maintained and developed successive SCOPE versions, while the Twin Cities folks worked on KRONOS. Because PLATO was a time-shared application, naturally, it was hosted on KRONOS.
However, KRONOS really sucked at big resource-intensive jobs, while SCOPE excelled at them.
So KRONOS was relabeled NOS (Network Operating System) and SCOPE relabeled NOS/BE (Batch environment).
Eventually the two were merged and simply called NOS.
TCM (Time Critical Monitor) was a Special Systems bootleg project whose goal was response to an event taking no longer than 100 microseconds. To do this, a lot of code was moved from the peripheral processors to the central processor. It was long forgotten about until:
ZODIAC (probably so-called because the Zodiac killer dominated the news at the time) was done to satisfy a contract from the USAF Logistics Command (and peripherally, UBS--the Union Bank of Switzerland). It was built on the TCM base, but instead of user programs being serviced by PPUs, user jobs were designated as "modules" all under the umbrella of a lead module constituting something called a "chain". Such chains could reside in any machine of a multi-machine configuration, communicating through shared bulk core (ECS). The residency of a chain in any machine could be measured in milliseconds. The goal was near-instant response for a transaction-oriented system. UBS re-christened ZODIAC as TOOS, but it was the same basic system under the covers.
ROVER was strictly concrete-block copper shielded walls cold war spook stuff. Probably still highly classified. People working on it had top secret clearances with appropriate endorsements and were told never to disclose to outsiders what was going on under penalty. So I won't discuss it here.

For the STAR supercomputer (also known as CYBER 200/201/203/205) and eventually the ETA-10 liquid nitrogen-cooled super, there was RED, a simple operating system developed within CDC, and STAR-OS, mostly developed in cooperation with the folks at Lawrence Livermore--a very complex message-passing style OS, mostly written in a dialect of LRLTRAN.

One complication is that if you were a CDC customer, you could request the source to the operating system and utilities free of charge. So there were many local variations of the above.
 

MattCarp

Experienced Member
Joined
Sep 5, 2003
Messages
279
Location
Atlanta, Georgia (USA)
Chuck, if you were a computer, you'd have an exabyte of storage and processor clock that never misses!

I suppose in today's world, unless you're actually working with large scale systems, it's hard to appreciate the batch nature of the older systems and how you computed with them.

It seems like now, everything is about the web/web application frameworks, SAP, mobile, or some other small/near instant Python program. Back then, there was "A" computer, versus now you lose track of how much computing you actually have access to (RPi and clouds!)
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
39,421
Location
Pacific Northwest, USA
For me, the disquieting thing is that there has been a huge amount of information lost, probably forever. I've just covered what I recall from the CDC 6000 and STAR architectures. I know almost nothing about the 3000 series software or the stuff that ran on the 1604, much less any "government" projects. Like a lot of aerospace work, when the project is over, the funding stops and the work pretty much heads to the shredder and landfill. I recall retrieving firmware source for some Lockheed missile project that had long since ended--someone saved a bunch of 8" floppies from a FutureData development system. Sadly the collection was incomplete. This situation was not unusual.

Isn't it true that we're only as good as what we can remember and learn from?
 

Al Kossow

Documentation Wizard
Joined
Sep 1, 2006
Messages
2,816
Location
Silicon Valley
Isn't it true that we're only as good as what we can remember and learn from?

I think most people working in the field have no interest in researching the past, just like companies have no interest in hiring anyone unless they are fluent in the latest shiny.

The days of MIT-style learning from first principles are very, very dead.
 

resman

Veteran Member
Joined
Jan 1, 2014
Messages
511
Location
Lake Tahoe
It helps to know the genesis of some of the CDC stuff. I'll try to be brief without being obscure (Brevis esse laboro, obscurus fio)

...

At about the same time, a chum by the name of Greg Mansfield, along with Dr. Dave Callender (the "bat" guy) created a simple operating system suitable for customer engineers; it was compact and much simpler than SCOPE. It was dubbed MACE (Mansfield's Answer for Customer Engineers).

...

I think the 6600/6500 in the Purdue C.S. department ran MACE (kind of remember it being M.A.C.E. in the brief documentation I saw). I used it to teach a summer class (Super Saturday) for high school students, but we didn't do much with the actual OS as all our time was spent in the interactive BASIC environment. This was all for a brief few weeks 37 years ago, so I could be making this up. Amazing that you remember so much history of these machines.
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
39,421
Location
Pacific Northwest, USA
My recollection was that it was Purdue's own variation of MACE for their 6500, heavy on the CP monitor functions. You need only have looked at the SYSLIB source code to see Greg and Dave's names in every deck. I think it also had a provision for communicating with the 7094s that ran PUFFT as well. Very CEJ-oriented; the "save registers" routine was done via an XJ, rather than the litany of generating a table using RJ for the last B-register. Memory is dim on that one.

Egad, I'm showing my age, aren't I?
 

whartung

Veteran Member
Joined
Apr 23, 2020
Messages
585
One of the truths of the old machines is that abstraction is expensive, so the realities of the architecture don't just "leak" (as we like to say) through the abstractions, but just tear large holes in them and march through.

So, these ideas and these concepts (like ATTACHed files, or whatever) were there for a reason, and I never quite understood the genesis behind these kinds of things.

For the simple school projects that we did, obviously, none of this stuff ever really impacted us directly. Everything we did, we just did by rote without understanding it.
 

Chuck(G)

25k Member
Joined
Jan 11, 2007
Messages
39,421
Location
Pacific Northwest, USA
So, these ideas and these concepts (like ATTACHed files, or whatever) were there for a reason, and I never quite understood the genesis behind these kinds of things.

Okay, I'll explain it. On all 6000 operating systems, a job starts with only a single file, called INPUT. For obvious reasons, this consists of your input card deck or terminal keyboard data. There are no other files accessible from your job. If you wanted printed output, you wrote to a file called OUTPUT, which was disposed of in batch jobs at the job conclusion (or with a DISPOSE directive during the job). Similarly, a file called PUNCH would be sent to the card punch. All these are "local" files and cease to exist after the job or session terminates. There was a system library, specially maintained for system programs, such as compilers and libraries, but not readable or writable by normal I/O commands. A very secure system and one where what you did was pretty much invisible to other jobs running at the same time.

For "permanent" data, the system maintained a separate file system, which was owner-qualified, password-protected and even included versioning (called "cycles"). You could gain access to one of these files if you knew its name (IIRC up to 30 characters), its owner's name, as well as the passwords for the access you desired. You "attached" the file to a locally-named file of your choosing. That is, a permanent file named "MYLASTWORKTHATIDID" could be "attached" to a local file named "WORK", which then could be accessed via normal I/O. You could explicitly DETACH the file, which broke the association, or just let the system do it at job/session conclusion.

Unlike other systems, you weren't even aware of the existence of a permanent file unless you knew its name and the name of its owner. You couldn't access the file unless you knew the appropriate passwords.

Quite secure for the time--and I'm omitting a fair amount of detail.
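For anyone who never saw one of these decks, a minimal batch job against a permanent file would have looked roughly like this. The control statement spellings are from my fallible memory of SCOPE-era syntax, and every name, account, and password here is invented:

```
MYJOB,T100,CM50000.    job card: name, time limit, central memory field length
ACCOUNT(1234,SECRET)   identify yourself to the system (made-up account)
ATTACH(WORK,MYLASTWORKTHATIDID,ID=OWNER,PW=READPW)
FTN.                   compile the FORTRAN source that follows on INPUT
LGO.                   "load and go": run it; it reads WORK via normal I/O
DETACH(WORK)           optional; the system drops it at job/session end anyway
```

At job end, WORK, INPUT, and OUTPUT all evaporate; only the cataloged permanent file survives.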
 

Agent Orange

Veteran Member
Joined
Sep 24, 2008
Messages
6,141
Location
SE MI
When I was with the feds in the DC area back in the '80s, I worked on a so-called H/P Model 1. This behemoth took up half of a floor and was used solely to support a huge plotter, which in itself was larger than a standard billiard table. The thing was basically TTL and the boards were about the size of cookie sheets. The input was tape and the addressing was paddle style input. These things were usually kludged together under contract and built on the spot. Most of the warranties usually were only good for 90 days and documentation was very sparse. One of the prerequisites for employment was the ability to work with little or no supervision and/or documentation.
 