I’ve never worked with major enterprise or government systems where there are aging mainframes — the type that get parodied for running COBOL. So I’m completely ignorant, although fascinated. Are they power hogs? Are they wildly cheap to run? Are they even run as they were back in the day?
Modern systems do far more work per second than those old machines, while drawing less power. If you assembled enough mainframes to equal the performance of a single modern rack server, you would need 10-1000 times the power to run the old stuff, depending on how far back you want to go - even 10-year-old hardware can cost 4-10 times more in electricity than a modern server with the same total performance.
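A rough back-of-the-envelope sketch of that claim in Python; the wattage figures, the box counts, and the $0.12/kWh electricity rate are all illustrative assumptions, not measurements:

```python
# Annual electricity cost for delivering roughly the same total throughput.
# All wattages below are assumed for illustration, not measured values.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.12  # assumed electricity price

scenarios_watts = {
    "modern 1U server":           800,      # one box doing all the work
    "10-year-old servers (x5)":   5 * 600,  # ~5 boxes to match its throughput
    "old mainframe installation": 30_000,   # tens of kW for far less compute
}

for name, watts in scenarios_watts.items():
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR
    cost = kwh_per_year * RATE_USD_PER_KWH
    print(f"{name:27} {watts/1000:5.1f} kW  ~${cost:,.0f}/yr")
```

With these assumed numbers the 10-year-old cluster lands at roughly 4x the modern server’s power bill and the old mainframe setup at nearly 40x, consistent with the ranges above.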
Definitely power hogs. Modern switch-mode power supplies are incredibly efficient.
I never really administered anything like that myself but I had a friend who took care of some old servers ~20 years ago in college. Multiple power drops in that small room went to fuse panels rated for several hundred amps each.
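For a sense of scale, here is a rough conversion of that panel rating into power; the voltage and the exact amperage are assumptions, since the post only says “several hundred amps each”:

```python
# How much power one "several hundred amp" fuse panel could feed.
# Voltage and amperage are assumed for illustration.
volts = 208   # common 3-phase line-to-line voltage in North America
amps = 200    # one plausible reading of "several hundred amps"

# 3-phase power: P = sqrt(3) * V * I
power_kw = (3 ** 0.5) * volts * amps / 1000
print(f"~{power_kw:.0f} kW per panel at full load")  # ~72 kW
```

Tens of kilowatts per panel, times multiple drops, for one small room of obsolete machines.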
Unfortunately, all I know is that they were VAX mainframes and were already considered obsolete in the late ’90s ;-)
Newer systems are far more power efficient than those of yesteryear. Systems design and engineering, while built on the principles of the past, have changed enormously in the last decade alone. Older mainframe systems are really no better than museum pieces and technological curiosities today.
Are those older systems largely virtualized now? When you hear about some old system at a government office not being able to keep up, is it the same hardware?
A lot, just like today’s mainframes and supercomputers. They are calculating complex formulas, running gigantic batch jobs, doing millisecond AI fraud detection, etc. A regular computer or server will throttle under sustained load, while these are designed to run at 100% load all the time. Dave Plummer (formerly of Microsoft) recently made a video about a 40TB-RAM monster.
Did you ever look at how much today’s top-of-the-line gaming rigs consume? ;-)
Mainframes are basically large rack-mounted computers, and they typically require many kW of power to run.
They’re still selling mainframes. A new IBM z16 takes 3-phase power and can draw up to 30kW, or about 1,000 times what a typical laptop uses.
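The arithmetic behind that ratio, assuming a laptop draw of around 30 W (the 30kW figure is the quoted maximum; the laptop figure is my assumption):

```python
# Comparing a z16 at maximum draw to a typical laptop.
z16_watts = 30_000   # quoted maximum draw
laptop_watts = 30    # assumed typical laptop draw

print(f"ratio: {z16_watts / laptop_watts:.0f}x")        # 1000x
annual_kwh = z16_watts / 1000 * 24 * 365
print(f"z16 at max draw: {annual_kwh:,.0f} kWh/year")   # ~262,800 kWh/year
```

At a constant maximum draw, that is on the order of a quarter of a million kWh per year, though real installations rarely sit at peak continuously.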
Not all mainframes are ancient; new models are still designed and sold to this day. And the brand-spanking-new mainframes may still be running COBOL code and other such antiquities, as many new units are installed as upgrades for older ones and inherit a lot of legacy software that way.
And to answer your question: a mainframe is just a server. A specific design of server with a particular specialism for a particular set of use cases, but the basics of the underlying technology are no different from any other server. Old machines (mainframes or otherwise) will always consume far more power per instruction than newer ones, so any old mainframes still chugging along out there are likely consuming a lot of power relative to the work they’re doing.
The value of mainframes is that they tend to have enormous redundancy and very high performance, particularly in terms of data access and storage. They’re the machine of choice for things like financial transactions, where every transaction must be processed almost instantly, data loss is unacceptable, downtime cannot be tolerated, and spikes in load are extremely unpredictable. For a use case like that, the over-engineering of a mainframe is exactly what you need, and well worth the money over the alternative of a bodged-together cluster of standard rack servers.
See also machines like the HP NonStop line of fault-tolerant servers, which aren’t usually called mainframes but which share a kinship with them: enormously over-engineered, very expensive servers that serve a particular niche.