
How were VAX/VMS systems deployed during their heyday in the 1980s?

In my case, the employer's VAXcluster supported users all over the USA, with manufacturing sites in California, Tennessee, and New York. I don't know if the European subsidiary used that system or not (probably, for MRP II). By the time I left, the company had a frame relay network tying all the sites together.
 
I was at DEC starting in '81, at the CXO site, working for Disk Drive Manufacturing. Here's what I remember:
Initially our departmental machine was a PDP-11/70 running RSX-11, and we all time-shared using VT100s. Other departments were similar.
The terminals gradually became VT-200-series and 300-series (the VT-240 was my favorite in those years). I did HW and SW work based on (typically) a PDP-11/23 running RT-11 V5. My code was done in OMSI Pascal. For me, the desktop VT was only used for things like e-mail on the internal network, and access to a departmental printer - we could actually send e-mails to other DEC sites all over the world, which was quite amazing at the time.
When the "Tandy-DEC" PC clones became available (16 MHz 386, 1MB RAM, 40MB SCSI drive), they sold them to employees for personal use, but I don't remember them being used in the business. At that time, the Business had no SW that ran on anything other than a PDP or VAX.
At some point every engineer had a VAXstation on their desk. Many (most?) of us had a 3100 and a big, heavy, CRT color monitor. Since we were a Storage team, some of us also had an external SCSI storage shelf populated with multiple 1GB or 2GB 3.5" drives, such as RZ26 and RZ28 (basically top-of-the-line storage at the time). The VAXstations were DECnet nodes, of course. My first one was named SAIPAN.
Each department had 1 or more VAXclusters. I don't remember which type of VAX they were. I would assume a mix of classics like 11/780 and 8000 and 9000 type machines. Our clusters had cutting-edge storage attached, such as HSC-50.
For several years I wrote manufacturing SW apps in VAX Pascal, using my desktop 3100 as the development platform. The app (running 24/7 on a VAXcluster) accessed an Rdb database of manufacturing process data and such. The end-users of my app were on the manufacturing floor, some in cleanrooms, using VT-200/300-series terminals.

Pete
 
Mid-80s I managed an IC design department with around 80 engineers running simulation, layout and verification software and software development on a cluster of 3 VAX 11/785s (DECnetted with PDP11-based Applicon systems).
 
VAX bits. When I came to the IEEE Computer Society in 1994, one of my original roles was to move the financial systems from ROSS on a 6220 CVAX (with VAXBI, a 3rd-party SCSI disk controller, and a bunch of disk drives) to Solomon Financials running on NetWare 3.0. We also had something far more interesting: an NCR 3550 system with 4 486/50 CPUs, a Microchannel bus, and a lot of 9.1 GB SCSI disks that was gifted to us by AT&T.

In late 1994 I came in to find that our Web/Gopher/WAIS server was down. Turns out the drop ceiling had collapsed from the weight of the RS-232 cables up there from the VAX. We rebuilt the computer room (small but impressive), removed the CVAX (which required removing the door; on the studs we found that someone had written the date they'd had to tear the door out to get the VAX *IN*), and upgraded the NCR 3550 to Windows NT 3.51 (then 4.0) with a specific NCR HAL to allow all 4 CPUs to run.

That NCR 3550 was TALOS, the system on which we developed the Computer Society Digital Library and CSLSP (library subscription) systems. What made it work was the multiple processors: our SGML-to-HTML converter, with DynaWeb doing TEKMath-to-GIF conversions on the fly, really benefited from multiple CPUs working in parallel. We finally upgraded it to 4 Pentium 200 boards with 2 processors per board, a second Microchannel bus (for disk and network duplexing), and I think 512MB of memory, running a software package developed by this Anderson guy over at the University of Illinois (where Duncan Laurie, the dean, was very interested in the digital library concept).

That system was a truck, and ran the digital library alone until I left in 2000. At last count it had been upgraded to Pentium Pro Overdrive chips, 8 of them at 333 MHz. And yes, because of the way the NCR boards were designed, each "pair" of CPUs accessed memory through its own channel, and it was up to the Level 3 cache and main memory buses to allow 4 sets of them to wham away at once. It was a hell of a system, and was the birthplace of the Computer Society (and later IEEE, after that massive fuckage in 2000) Digital Library initiative.

Fun memories. I should write a book about that. We had E-commerce, E-accounts, aliases, document delivery, subscriptions, additional subscription sales, and individual article sales way before everyone else. We just went ahead and did it....

Chris
 
Hey, new member here, better late than never to this party!

I worked with VAX/VMS systems from early 1980s through mid-2000s as a software engineer and system admin. My first exposure was at a defense company developing software for real-time embedded systems (think aircraft avionics). We’d do software development (code, compile, link) using 3rd-party tools with VAX/CMS (Code Management System) and MMS (Module Management System, like UNIX ‘make’) on a VAX 11/750, then download to an EPROM burner and move the chips to a target system for debugging. In this department we’d typically have 8-10 engineers sitting in one room using VT100 terminals. While waiting for code to compile we’d chat with each other via the VMS Phone utility – even though we were sitting right next to each other!

I moved to healthcare and developed software for in-vitro diagnostic systems (e.g., putting test tubes in an analyzer). At the earliest opportunity I brought in VAX/VMS and similar development tools as before to a group of a dozen developers, running on a MicroVAX II. We expanded the platform to several instrument system development teams and multiple MicroVAXen, clustered together. I also wrote software that allowed lab instruments to send output via RS-232 cables to terminal servers and back to the cluster, sort of a primitive lab information system (LIMS) application. Over time we migrated from VT220s to VAXstations as costs came down. We also added other VMS tools (remember DECwrite?) for documentation. A MicroVAX II could easily handle a dozen users on VT220s, or a couple dozen instruments scattered in several labs, connected via terminal servers over LAT/Ethernet.

At the same time, other divisions (R&D, manufacturing, MIS) within the corporation were using the big machines (VAX 8600 and 8800) to support their entire organization for specialized functions (research, molecular discovery, manufacturing floor, databases) as well as general office productivity (ALL-IN-1).

I changed jobs within that company in the mid-90s and moved away from VAX/VMS systems but returned a couple years later to a smaller division. They were using pedestal-sized systems (can’t remember models) for Oracle database batch-type job processing as well as networked storage for PCs (DEC PATHWORKS). As database computing demands increased over time, we eventually migrated to an Alpha 4100 with StorageWorks RAID arrays. Those machines could support many (30-50?) interactive users running terminal emulators on their PCs, plus offer remote storage to several hundred PCs in the home office.

Hope that’s useful for you.

Chris
 
we eventually migrated to an Alpha 4100 with StorageWorks RAID arrays.
In the latter part of my career, I wrote firmware for the StorageWorks EVA platform. Specifically the HSZ40, HSG80, EVA 4400, etc. The EVA series was great - we sold many tens of thousands of those systems.

Pete
 
In the latter part of my career, I wrote firmware for the StorageWorks EVA platform. Specifically the HSZ40, HSG80, EVA 4400, etc.

Pete

Interesting...
Would you still be able, for example, to write patches for the HSG80?
There was a limitation on the maximum SCSI drive size (74 or 146 GB) and also on the maximum array size, around 1 terabyte if I remember correctly.
And do you happen to remember how much extra space was reserved by the different drive firmwares?
If I remember correctly, the usable size of the Compaq/HP SCSI drives was a little bit smaller than that of drives with the standard factory firmware, and one could not use those drives in the array because of the extra reserve blocks allocated by the original Compaq/HP drives...
 
Back in the 1980s the college I work at had three Vaxen. An 11/780 running a classroom full of CAD workstations for the architectural students, an 11/750 doing tool path calculations for the CNC machining students, and a second 11/750 for the electronics students to use learning microprocessor programming and ICE systems.
 
Would you still be able, for example, to write patches for the HSG80?
No, sorry. I don't have the knowledge (any more), nor the development tools, nor the source code. When I was doing that kind of thing, I worked on version builds, not patches.
In 1997/98, I was part of an "OEM Firmware" team that made custom builds for large customers such as Siemens-Nixdorf (SNI). I still have a list of those custom features. Here are some of the changes that were requested by SNI:
SHOW_BATTERY_STATE - Display the state of the battery when the 'cache_ups' switch is ON.
BAD_BATTERY_EMU_ALARM - Turn on the EMU alarm when the battery is FAILED while cache_ups is ON.
MULTI_BUS_INQUIRY - When in multi-bus failover, and the Unit is 'inop', do not show Unit as 'accessible' in the inquiry data.
Etc.
Reading the doc today brings back a lot of memories... In those days, SNI was our biggest customer for custom FW builds. I remember our contact at SNI was a fellow named Juergen.
There was a limitation on the maximum SCSI drive size (74 or 146 GB) and also on the maximum array size, around 1 terabyte if I remember correctly.
And do you happen to remember how much extra space was reserved by the different drive firmwares?
If I remember correctly, the usable size of the Compaq/HP SCSI drives was a little bit smaller than that of drives with the standard factory firmware, and one could not use those drives in the array because of the extra reserve blocks allocated by the original Compaq/HP drives...
I don't know/remember much about that. I can say that our team did not do anything related to drive FW - only things that were available via SCSI Mode Pages. But here is a feature that we considered doing in the Spring of 1998. I believe we never implemented it, and a few months later we quit doing custom FW builds:

BIGLUN_SUPPORT

Function: Make 2 or more units appear to be one big virtual unit, from the host’s view. For example, if there is a 2GB unit and a 4GB unit configured in the controller, then the host would see the first unit with a 6GB size, and the second unit with a 0GB size.
Purpose: The controllers currently have a limitation on the maximum size for a single LUN. The limitation comes from 24 drives (max) in a Stripeset, or 14 drives (max) in a RAIDset. This option would make it possible to create a single virtual LUN that was more than twice the size (about 1.2 terabytes) of the current largest LUN (about 500 GB).
Hardware Platforms: SC6650, FC6650
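The BIGLUN idea above is just address concatenation: the controller presents one large virtual unit and translates each host LBA into a (member unit, member LBA) pair. Here is a minimal sketch of that mapping; the unit sizes and function names are invented for illustration and have nothing to do with the actual HSG firmware internals.

```python
# Hypothetical sketch of the LBA remapping a BIGLUN-style feature implies.
# The host sees one big virtual unit; each host LBA is translated to a
# (member unit, member LBA) pair by simple concatenation of capacities.

def make_biglun(member_sizes):
    """member_sizes: list of member-unit capacities in blocks."""
    offsets = []          # starting host LBA of each member unit
    total = 0
    for size in member_sizes:
        offsets.append(total)
        total += size

    def translate(host_lba):
        if not 0 <= host_lba < total:
            raise ValueError("LBA beyond end of virtual unit")
        # Scan members from the highest offset down to find the owner.
        for unit in range(len(member_sizes) - 1, -1, -1):
            if host_lba >= offsets[unit]:
                return unit, host_lba - offsets[unit]

    return total, translate

# A 2GB unit plus a 4GB unit (512-byte blocks) present as one 6GB unit.
total, translate = make_biglun([4_194_304, 8_388_608])
```

The host never learns about the seam; a request that starts in the 2GB unit and crosses into the 4GB unit would simply be split by the controller into two member-unit I/Os.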

Pete
 
Microsoft was using VAXen for their "server" stuff in the 1980s according to period videos. Heck, most of Bill Gates' early history of computers involved access to a university PDP-10, so there is a bit of DEC heritage there too.
 
I'm late to the discussion, but at one time I worked for a large three-letter broadcast network, and there was a VAX at each company-owned station.

It was used by the Accounting department and to schedule commercials (the "Traffic" department).

I never used it directly, but was the daily recipient of its green bar printouts telling me which commercials to run when.

As each commercial ran, someone in Master Control would mark it off on his copy of the printout, and the next day someone would pick up the printout and take it back to Traffic to be entered into the computer, which presumably did stuff for the Accounting department.
 
Interesting...
Would you still be able, for example, to write patches for the HSG80?
There was a limitation on the maximum SCSI drive size (74 or 146 GB) and also on the maximum array size, around 1 terabyte if I remember correctly.
And do you happen to remember how much extra space was reserved by the different drive firmwares?
If I remember correctly, the usable size of the Compaq/HP SCSI drives was a little bit smaller than that of drives with the standard factory firmware, and one could not use those drives in the array because of the extra reserve blocks allocated by the original Compaq/HP drives...
It is common for SCSI drives to be resized down so that different manufacturers' products can be used as direct replacements, as they often differed by a cylinder or two.
The sizing can be set when formatting, so it's an easy change to do and undo.
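In other words, an array vendor picks the smallest native capacity among the interchangeable drives and formats everything down to that common block count. A tiny sketch of the idea, with completely made-up vendor names and block counts:

```python
# Illustration of "resizing down": nominally-equal drives from different
# vendors differ by a few thousand blocks, so the array formats every
# drive to the smallest common capacity, making any of them a direct
# replacement. These block counts are invented examples, not real drives.

native_blocks = {
    "vendor_a": 17_783_240,
    "vendor_b": 17_774_160,
    "vendor_c": 17_769_600,
}

# Every drive in the pool is formatted to present this many blocks.
common_blocks = min(native_blocks.values())
```

Any drive whose native capacity is at least `common_blocks` can then be swapped in; the leftover blocks on the larger drives are simply never addressed (or are kept by the drive firmware as spare/reserve blocks, as in the Compaq/HP case described above).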
 
I worked on DEC VAX/VMS for several years. They had a first-class COBOL compiler and were used in many businesses, e.g. Xerox and banks. Many companies were very slow to embrace this 'new' technology. DEC more or less invented clustering. The MicroVAX was very popular as it could be run from a 13 amp socket and did not require an air-conditioned computer room. VMS was a wonderful OS. I think OpenVMS may still be around.
 
As a DEC field engineer in the day, I recall you could find VAX/VMS in almost any setting, from manufacturing or accounting roles in mom-and-pop shops up to large VAXclusters in telecom (usually for email via the ALL-IN-1 product), and absolutely everything in between.

Almost every major hospital in the area had VAXen, from the 11/7xx range through the VAX 4000 series and even into the larger VAX 8000 range. A lot of them ran Cerner software.

A large motorcycle company used an 11/750 in the engineering department (as well as a PDP-11/34 for frame vibration testing data acquisition).

Two large healthcare manufacturers did software and hardware engineering of their products on various-sized systems, including VAXstations clustered in. One of them was an early adopter of the "Ultrix Connection" product, which eventually became TCP/IP Services for VMS.

(Hah, just reminded me of a funny situation/fix where the Vaxstations at this site utilizing thinwire ethernet kept dropping connection to the main VaxCluster systems. We eventually put a TDR (time domain reflectometer) on the line and noticed a distinct blip somewhere in the middle of the thinwire run. As we zeroed in towards the curious blip by removing Vaxstations from their T-coax connectors and moving the TDR towards the blip, we narrowed the problem down to the office cubicle in the next row. As we proceeded to inspect the T-Coax connector presumably inserted into the back of a Vaxstation, we were surprised there was no Vaxstation on this desk. Instead there was only a VT220 dumb terminal. The recent new-to-the-company user had apparently seen the T-coax connector just laying around and proceeded to connect it to the rear of the VT220 via its coax BNC (video output). No more connection problems once we removed it!)

A large brewery used Vaxstations and home grown apps to monitor those humungo copper beer brewing kettles. Always loved the smell when I had to service those!

There was also quite a bit of Pathworks DOS file sharing on VMS for the burgeoning PC market connectivity needs.

Dale
 