
Mainframes Today

lyonadmiral

For those who might still work with the big iron, what do you say/think is the advantage of running mainframes/big iron today?

Thanks,
Daniel
 
Near 100% availability and lowest cost virtual machines.

MFs today however are physically tiny relative to their ancestors.
 

I worked in a school district once that had an AS/400 (it looked like a giant tower, small compared with an AS/400 I once saw that needed a whole room), but the school opted to dump the AS/400 for Windows. :(
 

It depends on what you call a mainframe and what you run on it.

So firstly, as far as I can see, the only "mainframe" left in production is the IBM zSeries line. The AS/400 and iSeries were never described as mainframes, only as mid-range boxes.

I would say the big advantage of mainframes is that they allow legacy code to run unchanged. The cost of moving off, and the disruption it would cause, are often unacceptable to the kind of industry still using mainframes, which as far as I can see is mainly banks and airline reservation systems, although I guess some large utilities and government departments still use them too.

We removed the mainframe from the place I was working at five years ago, and it was very expensive. It was IMHO probably not really worth the effort, but management wanted to look modern.

On the other hand, I started out using VM/370 on a mainframe, so to me desktop and server virtualization, things like VMware ESXi, is really just providing the same sort of facilities we had back in the 1970s.
 
Vblock systems are in some respects like mainframes - a single system can be 30 cabinets or more.
 
The Rhode Island Computer Museum received an IBM S/390 donation a few weeks ago.
http://www.ricomputermuseum.org/Home/equipment/ibm-system390

We have been doing a little research on this system and so far are amazed at the amount of design work that was done to maintain backward compatibility. To start the migration from a real mainframe to the S/390 you could configure the internal SCSI disks to emulate a DASD controller and disks, cable it to the mainframe, and then move your data to the disks in the S/390. Just this part of the process would significantly increase the reliability of the mainframe and reduce maintenance costs.

Next, you would configure a VM on the S/390 to host the operating system from the mainframe and set up the same development and operational environments as on the mainframe. Once your applications were working on the S/390, you would configure the communications controller attached to the S/390 to connect to your terminals and printers and complete the transition. Most of the peripherals from the mainframe would still work with the S/390. You could then start Linux in another VM and transition your applications to WWW versions.
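Just to put a little flesh on the "configure a VM to host the old operating system" step: under VM/ESA (or later z/VM) a guest is defined by an entry in the user directory, along roughly these lines. The names, sizes and device numbers below are invented for illustration, not taken from the museum's machine:

USER MVSPROD SECRET 512M 1024M G
* guest hardware definition
  MACHINE ESA
  CPU 00 BASE
  CONSOLE 0009 3215
* emulated DASD handed to the guest, plus a small private minidisk
  DEDICATE 0200 0200
  MDISK 0191 3390 0001 0050 VMUSR1 MR

The guest sees the same device types it saw on the original hardware, which is why the operating system and applications can come across untouched.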

There is no way you could do this kind of step-by-step transition on anything but a mainframe.
 
There is no way you could do this kind of step-by-step transition on anything but a mainframe.

Having spent the last few years virtualizing Windows Server onto VMware, I'd say that's no longer true. With the VMware standalone converter, or the bootable converter, you can take most any real Windows server and migrate it onto a virtual server with no change to the applications. Once the server is running under the VMware hypervisor you can live migrate it from server to server. The vMotion process is generally transparent to the running applications and OS, and most workloads can be moved from server to server whilst the OS is live and running.

VMware will load balance the system and move the virtual OS instances around in response to changes in workload. At the council I have just left we had 24 VMware hosts running about 250 virtualized servers. We upgraded both the VMware hypervisor and the physical hardware without any reboots of the virtual operating system instances running on the VMware farm.
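For what it's worth, that live migration step is scriptable as well. Here is a minimal sketch using the open-source pyVmomi Python bindings; the vCenter address, credentials, VM name and host name are all made up for illustration, and error handling is left out:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter (lab-style: certificate checking disabled).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    # Return the first managed object of the given type with the given name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, "legacy-app-01")   # hypothetical VM name
host = find(vim.HostSystem, "esxi-host-07")      # hypothetical target host

# MigrateVM_Task performs the vMotion; the guest keeps running throughout.
task = vm.MigrateVM_Task(
    host=host,
    priority=vim.VirtualMachine.MovePriority.defaultPriority)
print("Migration task submitted:", task.info.key)

Disconnect(si)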

In many ways it offers more scalability than a modern "z" box, as you can simply add more hosts as required and VMware will use them as they come online. With a modern z box IBM fills it full of CPUs and then charges you to turn them on, but if you run out it won't live migrate running VMs to a new host (or it didn't the last time I looked; IBM were working on it).

Our system was built with discrete servers (IBM x3650) with 2x4 or 2x6 core CPUs and between 96 and 128GB of RAM, connected to a gigabit LAN and a Hitachi fibre network. The new Cisco UCS blades

http://www.cisco.com/c/en/us/products/servers-unified-computing/index.html

allow you to cram more bang into one box, but I am not sure that the unified comms always delivers the SAN (Storage Area Network) performance one needs.
 
Ah... the good old days. My version is what I called a "miniframe": about 3 cabinets or racks, 6' to 7' tall, though technically it is a 'mainframe'. I worked on 2 McDonnell Douglas (Microdata) systems, and the last one was finally turned OFF in November of 2005. The one thing I always brag about when I show this to the younger generations is that it had only 32K of system memory yet supported about 150-200 users. It ran the "Reality" PICK OS and was used to support a 911 dispatch system. When it was turned off (replaced by an MSSQL 2000 system), we had to pay a recycling service to haul away all the hardware (tape drive, disk drives and cabinets) and the terminals. The only thing I saved was the system nameplate/logo.
 
I did some minor administration on a mainframe two jobs ago. I liked it just for the legacy feel, but I'm not sure I could get behind the pricing. If I remember right, once ours went EOL the support price jumped to something in the $40-60,000/yr range, but I wasn't really "in the know" so I could be making that up from my perception. I just remember it seemed pretty ridiculous pricing versus our other environments, which were some Compaq/HP-UX systems, some Sun servers, and some vanilla x86 boxes. I guess we had a mainframe, x86 servers running Novell, x86 servers running BSD, Sun servers running Solaris and Oracle (which they were trying to port the mainframe stuff to), a few Tru64 and HP-UX systems (maybe a legacy VMS box, but that was on its way out), and of course Windows NT/2000 for desktops.

So comparatively I thought the mainframe was interesting in its stability. It didn't really crash or crap out on us like other servers. It also had interesting access controls where you could really delegate what users could control and access, down to what program they could run. I think it also had a self-service type support engine, so it could call for hardware replacement on its own, and the service techs would show up before our operators even knew there was an issue to begin with.

The downside wasn't really related to the mainframe but to the tape drives we had .. ermehgerd .. those were old. Interesting, but old. Huge tapes (yeah, at least they weren't replacing platters lol), but the magnetic heads would fail pretty often, and the washing-machine-sized tape units took up a large portion of the data center. It was difficult for the support folks to find parts, so I tended to watch, if I could, while they did tests on the coils and other parts of the drives. Old school, and entertaining, as I was interested in that stuff at the time.
 
I worked on IBM mainframes for 25 years as both an application programmer and a systems programmer.

The major advantages of using a mainframe are, I believe, significant. The disadvantages, outside of software licensing fees, did not offset those advantages.

First, there is a single set of hardware and software to maintain, and it is maintained (hopefully) by experienced veteran experts. Of course this is not always true, as many individuals working in the mainframe computer world were not very good and some were downright dangerous (these people usually got promoted to management).

There was no chance that any workstation (terminal) had slightly different software, especially when using dumb terminals. You updated software once, not on every workstation.

The actual "box" was located in a secure room and physical access was restricted. NO worries about keeping track of potentially hundreds or thousands of PCs which may or may not hold critical, proprietary data.

When something failed on the mainframe, it was not a case of "try it again" (although I did have a mantra that "if it didn't happen twice, it didn't happen"). Errors indicated there was a problem that needed to be looked into, corrected and documented (or explained). There was no such thing as saying "reinstall the software" or simply shrugging your shoulders. A user couldn't "break" a dumb terminal.

And maybe the biggest advantage of all - software written on day 1 for the earliest machine could run on the latest machine with no changes. Backward compatibility was key. You never had to throw away software or stay on an old machine to keep it running.

When I heard you could not upgrade to Windows 7 but had to do a clean install and then reinstall all your software (assuming it still worked), I knew we had reached a tipping point in losing control of the PC. Could you imagine such a thing happening on a mainframe? It's unthinkable. Yet somehow in the PC world it's accepted.

Another key element, just as critical as legacy software and probably more so: knowledge was an additive process, not a replacement process. By that I mean whatever you learned in the past was still good even when you learned the newest material. You never had to throw away your old knowledge. The guys who were around the longest were the most important and knowledgeable, not the guy who only knew the latest and greatest. If you ever had a problem, it was the old guy you went looking for, the guy with the punch cards and the pocket protector.

What makes me laugh is when I hear about "The Cloud" as if it is some wonderful new idea. It's nothing more than a centralized location for programs and data. Hummm....what does that sound like?

The downside? Without question it is the cost of mainframe software which could run into millions per year. You might also include the cost of the people needed to maintain the software, although I would argue that if a company had a few good technical people and listened to those people, the staff could be pretty small. In every company I worked in, there might have been large staffs, but you knew who the core people were and who was just taking up space.

Joe
 
I remember getting a tour of the new CDC Cyber 205 at Colorado State University sometime in the late 70's/early 80's. Now that was what I'd call a mainframe.

http://en.wikipedia.org/wiki/CDC_Cyber#Cyber_200_series

While at CDC, I worked for the STAR Development Division. During the OPEC oil embargo, the machine room at ADL was probably the warmest place in the Twin Cities. I just got a good book and snuggled up between the SBUs. After a few years in microcomputers, I found myself working on a compiler for the ETA-10. That was different--liquid nitrogen goodness.

Good times.
 
During the OPEC oil embargo, the machine room at ADL was probably the warmest place in the Twin Cities. I just got a good book and snuggled up between the SBUs.
I always figured the weather was a reason there was so much hi-tech located in the Twin Cities. Who wants to brave traffic at 20 below when you can stay at work in a nice heated building anyway? :) They could have piped some of that heat to Cedar Square; one of my memories of those apartments was that they were not always warm in the winter.
 
... That was different--liquid nitrogen goodness.

Good times.

What, no fluorinert?

Mainframes are still rather popular, particularly running Linux. The S/390 port of RHEL, for instance, has a lot of traction. I have available the smallest one in the S/390 line, the 'Integrated Server' which is the IBM P/390 card in a PC Server 500 chassis. ESCON is available. The control processor runs OS/2 with the P/390 software to IPL the P/390 card and manage it. Basically the bigger brother to the older PC/370 system.

Mainframes tend to both have more reliable hardware and vastly more reliable software with comparatively little feeping creaturism and little to no bloat. It just does what needs to be done, and nothing more. Yes, they are more expensive to operate. But what is a $40,000 per year hardware maintenance contract compared to $100,000 per hour downtime? And when was the last time you heard of a security breach on a mainframe (or, for that matter, on a VMS mini system)? (Not that there haven't been any; but just think of all the recent breaches in non-mainframe-based point of sale systems).
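To put those two (purely illustrative) figures side by side: $40,000 per year divided by $100,000 per hour is 0.4 hours, so the maintenance contract costs less than the first 24 minutes of downtime it prevents in a year.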
 
What, no fluorinert?

No, that was Seymour's thing--expensive stuff too. The liquid nitrogen was Neil Lincoln's experiment. Amazingly, it worked well enough to put a few machines into production.

The problem with the Minnesota/Wisconsin approach was that the population also tends to be extremely conservative (Scandinavian Protestant--Prairie Home Companion type) people. What probably killed the ETA machines was the software. I was stunned to discover that the text editor used by the operating system team in 1985 was the same one that I wrote in a weekend in 1976 to familiarize myself with some of the more arcane (and there were a lot of them!) instructions of the STAR-100. Even though all of those wonderful and strange instructions (e.g. "Search Masked Key Byte") were removed from the Cyber 205 (and hence, the ETA-10), someone had laboriously gone through the code and emulated the instructions with scalar code.

I strongly pushed porting Unix to the ETA-10, but was pushed back by management in St. Paul. Eventually, they offered me the opportunity to do the port (I'd previously worked on others), but the schedule was too short and I turned them down. I don't care how many women you employ, you can't make a baby in 1 month. :)

There was a lot of good stuff in the ETA/Cyber 20x machines. The memory word was 512 bits (544 with ECC), interleaved and pipelined like crazy. 48 bit (bit addressable) address space, 64 bit words, 256 GP register set, etc., but a big vector machine at the heart of it. STAR-100 flopped only because of the insane approach of internally treating scalars as vectors of length 1. Later versions of the machine added scalar execution units.
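A toy timing model shows why treating every scalar as a length-1 vector was such a handicap. The numbers below are invented for illustration, not real STAR-100 timings; the point is just that a fixed per-instruction pipeline startup cost is amortized over a long vector but paid in full by every scalar:

# Toy model in Python; illustrative numbers only.
VECTOR_STARTUP = 50   # cycles of pipeline startup per vector instruction
PER_ELEMENT    = 1    # cycles per element once the pipe is streaming
SCALAR_UNIT    = 5    # cycles for one op on a dedicated scalar unit

def vector_time(n):
    # Cycles to process n elements with a single vector instruction.
    return VECTOR_STARTUP + PER_ELEMENT * n

print(vector_time(10000) / 10000)   # ~1.005 cycles/element on a long vector
print(vector_time(1))               # 51 cycles for one scalar result
print(SCALAR_UNIT)                  # 5 cycles on a dedicated scalar unit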

Generally, good memories.
 
No, that was Seymour's thing--expensive stuff too. The liquid nitrogen was Neil Lincoln's experiment. Amazingly, it worked well enough to put a few machines into production....

Interesting stuff. Liquid nitrogen figured into the CPU overclocking scheme of things about 11 years ago: see http://www.tomshardware.com/reviews/5-ghz-project,731.html for the rundown. I thought then that one of our coldfingers here, a liquid helium cryostat used for tuning cryo-cooled low-noise radiometers, could push the envelope even further, but the TDP of the CPU was too high for the 18 watts of cooling (at 4 kelvin) our small unit can provide.

Heh, brings new meaning to the term 'freeze' in the computer jargon.....
 
In the ETA-10, the whole shebang was immersed in liquid N₂. You can imagine the stresses on the boards--one persistent problem apparently was the chips popping off the board.

I don't think that one has ever been tried since in a production machine.

I don't know if the MTTR was any worse than Seymour's "Bubbles", however.
 