IBM upgrades Linux mainframe, boosting availability and AI performance

IBM LinuxOne Emperor 4 (Image Credit: IBM)


The mainframe, a hardware stalwart for decades, remains a force in the modern era.

Among the vendors that still build mainframes is IBM, which today announced the latest iteration of its Linux-focused mainframe system, dubbed the LinuxOne Emperor 4. IBM has been building LinuxOne systems since 2015, when the first Emperor mainframe made its debut, and has been updating the platform on a roughly two-year cadence.

The LinuxOne Emperor 4 is based on the IBM z16 mainframe that IBM announced in April. Whereas the z16 is optimized for IBM’s z/OS operating system, the LinuxOne, not surprisingly, is all about Linux, and to a large extent Kubernetes, the cloud-native container orchestration platform.

“It only runs Linux and it’s really meant to meet the needs of the people who run Linux-based infrastructure in the data centers by providing them a new paradigm around how to drive a Linux environment that’s more efficient and more scalable,” said Marcel Mitran, IBM Fellow, CTO of cloud platform, IBM LinuxONE.

IBM continues to build out non-x86 hardware for enterprises

The LinuxOne is part of IBM’s overall hardware portfolio, which competes against other silicon architectures, most notably x86, developed by Intel and AMD.

IBM also builds its Power architecture, which can likewise be optimized for Linux deployments. In July, IBM announced a new lineup of Power10 servers for enterprise use cases. Across its mainframe and Power systems portfolio, IBM reported revenue growth in its most recent financial quarter, with mainframe revenues up by 69%.

Mainframes and, in particular, the LinuxOne are continuing to find adoption within financial services organizations around the world. Among IBM’s LinuxOne users is Citibank, which uses the mainframe system alongside the MongoDB database to power some of its mission-critical financial services.

Inside the LinuxOne Emperor 4

The new LinuxOne Emperor 4 system supports 32 IBM Telum processors, which are built on a 7 nm process. The system provides up to 40 TB of RAIM (Redundant Array of Independent Memory) and has been designed with quantum-safe cryptographic algorithms to help provide a high degree of security.

Mitran noted that the LinuxOne Emperor 4 provides “seven nines” of availability (99.99999%), which translates into only about three seconds of downtime in a year.
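The “seven nines” figure follows from simple arithmetic; a minimal sketch of the calculation:

```python
# Downtime per year implied by an availability level.
# "Seven nines" = 99.99999% availability.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def annual_downtime_seconds(availability: float) -> float:
    """Return the expected seconds of downtime per year."""
    return SECONDS_PER_YEAR * (1 - availability)

print(round(annual_downtime_seconds(0.9999999), 2))  # ~3.15 seconds per year
```

For comparison, the same formula gives about 5.3 minutes per year at “five nines” (99.999%), which is why the jump to seven nines is a notable claim.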

The high availability is enabled by several innovative technologies, including self-healing RAIM memory. Mitran said the new system also has a feature that can instantly fail a system core over to an available core when needed.

“There’s integrated technology to do data center failover, both from a compute and storage perspective, using new technology called GDPS [Geographically Dispersed Parallel Sysplex] hyperswap technology,” Mitran said. “That and so much more of what’s engineered into these systems is how we deliver on the design for seven nines of availability.”

AI inference fit for an emperor

Among the new capabilities in the LinuxOne Emperor 4 is integrated artificial intelligence (AI) inference that is embedded at the hardware layer.

AI inference is the part of the process that makes a prediction or a decision. Mitran said that the integrated inference with LinuxOne Emperor 4 enables the inferencing to be done as part of a transaction. Without the integration, inference is done in a separate process, which could increase latency. 

One common use case where integrated inference might help is with fraud detection, which can now be done quicker, without unnecessarily delaying a transaction.
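The difference Mitran describes can be illustrated with a small sketch (not IBM’s API; the function names, the toy scoring rule, and the simulated network delay are all hypothetical, purely to contrast in-path inference with a call out to a separate inference service):

```python
# Illustrative sketch: scoring a transaction inline (as an on-chip
# accelerator enables) versus dispatching to a separate inference
# process, which adds a round trip to every transaction.
import time

def fraud_score(amount: float) -> float:
    """Hypothetical toy model: flag unusually large amounts."""
    return min(amount / 10_000.0, 1.0)

def process_inline(amount: float) -> bool:
    # Inference runs in the transaction path; no extra hop.
    return fraud_score(amount) < 0.9  # approve if low risk

def process_remote(amount: float) -> bool:
    # Simulated call to a separate inference service.
    time.sleep(0.005)  # stand-in for network and queueing delay
    return fraud_score(amount) < 0.9

start = time.perf_counter()
approved = process_inline(250.0)
inline_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
approved_remote = process_remote(250.0)
remote_ms = (time.perf_counter() - start) * 1000

print(approved, approved_remote, inline_ms < remote_ms)
```

The point is not the toy model but the placement: when scoring happens inside the transaction path, fraud checks no longer add a separate round trip to each transaction.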

“By having the AI accelerator on the chip, making it extremely fast, we’re able to now run the inferencing as part of transactional workloads,” Mitran said.
