That old ‘the mainframe is dead’ chestnut has been rolling around the field for a while, but despite the best hyperbole of many IT pundits, IBM mainframes continue to thrive and survive, even if they are no longer the biggest source of IT spending in a typical enterprise. When I started at IBM, the water cooled mainframes were a huge revenue source and the people who supported them were sort of living gods, especially the more senior ones.
I myself was more of a cable monkey at that time, spending more time under the floor than above it, in those glory days of the late 80s and very early 90s, before HDS really started eating IBM’s Mainframe Lunch and EMC started to eat their DASD (Disk) Dessert.
I did, however, become a ‘real’ mainframe guy when the air cooled 9672s and 2064s blew HDS out of the compatible market and brought IBM marching back into quite a few of our old accounts, with far smaller machines that one or two people could install quite comfortably.
Yet despite the (possibly unexpected) survival of the mainframe, what has truly gone is the almost completely bizarre world of mainframe I/O (Input/Output) devices and the vast number of service representatives who were needed to support them. So in the great tradition of IT War Stories, here are five things you will never get to do in a data centre:
You will never blow mainframe water into a toilet
Yes you read this right… not only will you never get to blow mainframe water into a toilet, I may never get to write that heading again.
To explain: a water cooled mainframe was not shipped with water in it. Apart from the obvious fact that shipping something with water in it is a challenge, these things weighed enough already without 400 litres of water sloshing around inside. So one of the many tasks done as part of installing such a beast was to “just add water”. This involved emptying 100s of litres of distilled water from bottles, usually way bigger than this one:
This water would be pumped to chillers (normally on the roof) to take away the heat from the many, many heat-producing TCMs (Thermal Conduction Modules) that made the magic happen. Here is a terrible screen capture showing a TCM and some of the vast tangle of plumbing hoses. What makes this photo ‘real’ is the fact that the gent is wearing an ESD (electrostatic discharge) wrist band.
Now when a water cooled mainframe was uninstalled, it could not be shipped with the water in it, which meant blowing the water out of the many pipes and hoses using a big old bottle of nitrogen, usually way bigger than this one:
But what to blow the water into?
Why the toilets of course!
Normally these were the grimy under-cleaned toilets just outside the computer hall.
We would hook up the hoses end to end and then, with the hose aimed firmly into the bowl, blow the nitrogen through until you eventually got down to just gas with spits of moisture. The disappointing thing is that while there are many marketing photos showing impossible scenes like the one below, none show anyone blowing mainframe water into a toilet. And no one had mobile phone cameras in those days (mainly because your current mobile phone has more of everything that is shown in this photo, except the printers):
Now before you proudly inform me that the current IBM z15 has a water cooled option, just like in the ‘good old days’, the amount of water being used is comparatively tiny and the water is simply drained into a jug using a special tool like this. And seriously, this ‘tool’ looks like my Uncle Ted designed it.
You will never drag 100s and 100s of feet of dirty bus and tag cables around and under the floor.
A good computer room floor would give you two or three feet of clearance, but it was not unusual to find very shallow floors. The parallel bus and tag cables used from the 1960s to the 1990s were seriously heavy and accumulated in great numbers under the floor. Sometimes, when cables needed to be re-routed, it was easier to drop the old cables into the floor (perhaps after beheading them) and just run new ones. Such sins could accumulate until the under-floor space became literally clogged with these cables, like weird veins of some great dead beast. In addition, these cables seemed to accumulate filth: after twelve hours of dragging them through the floor you would often find your hands and clothes filthy and blackened.
After removing around 2000 kg of cables from one data centre, we learned from our recycling agent that the wires themselves were aluminium and hard to recycle, but the connector pins (in those big fat connectors that just loved to snag on pretty well everything) were gold plated, and if you collected enough of them you could actually make some real money.
I stole this image from Wikipedia, and the cable is clearly slightly grimy, an aspect I find oddly pleasing:
You will never get to ask for a 36 hour outage to install a new system
The thing is that mainframes were huge and hard to install, and computer rooms were not always designed to let you install a new one alongside the old one. Even if you could, this would require a mountain of cabling to make it work. So for many customers, the installation schedule would look like this:
- Sometime on Saturday morning, once the Friday night batch and backups were finished, team one would begin. They would power off and de-install the old mainframe and remove it from the floor. This could take many hours and many people.
- Once this was done, team two would come in and re-cable the under-floor region, especially if the layout had changed. This could also take many hours.
- Once this was done, team three would assemble the new mainframe and commission it. This could (you guessed it) also take 12-15 hours.
- Things would normally close out sometimes as late as Sunday evening, with the debug team (team four) fixing any issues so the client could be up and running by 8am on Monday morning.
This extended outage was quite normal, even for banking systems.
Tell that to DevOps kids nowadays and they won’t believe you.
You will never get to roll enormously heavy motor generators across the computer room floor
An IBM 3090 ran on 400 hertz power, meaning it took your 50/60 hertz mains power and converted it to 400 hertz using a motor generator called the IBM 3089, which weighed 1075 kg. Meanwhile the IBM 3097, which distributed power and coolant, could weigh up to 1309 kg. The I/O frames were a touch top heavy, so they had wings you would pivot out when rolling them across the floor to remove the risk of them toppling over.
You may never install a computer that comes with a desk.
That’s right, every IBM 3090 came with its own desk, officially known as the console table. It’s not unusual to visit a computer room that has not seen a water cooled mainframe for over 20 years and yet find those humble desks still doing service, possibly because they weigh nearly 90 kilos and no one is strong enough to remove them (although the green screens that once sat on them are hopefully long gone).
Anyone still got an IBM Mainframe desk, or been dragging some bus and tag cables? Let me know. Maybe take a photo with your smartphone.