Mainframe to Linux: still a howling headache?

"The way COBOL is being talked about is a red herring"

COBOL has hit the news in 2020 in unexpected ways. In November, popular longform story aggregator Longform.org led with a prominent piece on the "code that controls your money" -- "COBOL is a coding language older than Weird Al Yankovic. The people who know how to use it are often just as old. It underpins the entire financial system," wrote Clive Thompson.

Earlier in the year, New Jersey Governor Phil Murphy drew international media attention with a plaintive request for help knocking the back-end systems powering New Jersey's emergency care system into shape. "Literally, we have systems that are 40-plus years old, and there'll be lots of postmortems. And one of them on our list will be how did we get here where we literally needed COBOL programmers?" Murphy said, as the state ran into issues modernising the system.

With the language typically (although not exclusively) running on mainframes -- some of them decades old -- and underpinning a substantial share of financial services infrastructure, the episode put "Big Iron", as well as the 61-year-old programming language, firmly back in the spotlight. The applications running on those mainframes are often decades old themselves: frequently thousands of lines of undocumented code, typically still mission-critical, and hard to adapt to a world in which customers want new features, fast. They can be the immovable tree stump in banks' (and other organisations') IT landscapes, with modernisation efforts happening around them.

Change is possible. In 2019 telco Swisscom moved its entire mainframe workload of 2,500+ installed MIPS to a private cloud (featuring EMC/Cisco x86 hardware and storage and VMware's vCloud Director, since you ask) without any data reformatting or recompilation of its application program code. That effort was led by Switzerland's LzLabs. Others have made similar moves, with mixed success, cost, and effort.

With the Covid-19 pandemic having massively accelerated digital transformation efforts, we asked LzLabs' Executive Chairman Mark Cresswell what he'd seen in terms of customers getting workloads off mainframes. [Ed: We appreciate Mark has "skin in the game" here, but also found his answers insightful and worth sharing].

Mark Cresswell, LzLabs, on getting workloads off mainframes

Mark, to what extent has the pandemic changed thinking about getting workloads off mainframes?

“What we’ve seen this year from both our conversations with customers and industry analysts is that there has been a significant increase in the number of organisations looking to migrate off their legacy mainframes. It seems the pandemic has exacerbated the existing mainframe skills shortage. During the first wave earlier this year, a lot of companies took the opportunity to accelerate early retirement and voluntary redundancy programmes. The majority of people who took advantage of those were employees who were close to retirement anyway.

"This disproportionately affected their mainframe systems administration teams, so now companies find themselves in a situation where they have fewer staff available to support these critical platforms than they would otherwise have had. Mainframe development environments are complex and idiosyncratic, so the skills necessary to maintain the platform and applications were already thin on the ground. The acceleration of these skills leaving the business has been the tipping point for organisations to seriously consider moving away from the mainframe.”

“Mainframe to x86/cloud migrations remain a convoluted, risky, expensive step that few sane IT teams are likely to take; rather, they'll spin up a range of new greenfield, cloud-native applications where humanly possible.” True/false/your thoughts?

“There are really two separate points raised here: firstly, are migrations convoluted, risky and expensive; and secondly, would IT managers rather spin up a range of greenfield, cloud-native applications?

"Firstly, to say that migrations are convoluted, risky and expensive in 2020 is false – historically perhaps they were, but not any longer. It’s now possible to take the applications exactly as they were, without any of the historical recompilation and data type risk, and run them on x86 – it’s much easier than before. The assertion that IT managers would prefer to spin up new, cloud-native applications is plausible, it’s human nature – exciting new technology captures the imagination – and budget, but it’s important to not conflate these two points. People like to work on new, cutting edge applications rather than mess with something you thought you’d fixed 30 years ago. But the idea that someone can rewrite all that legacy code, built up over decades, with a modern cloud native environment in any timeframe that matters, is fanciful. People have tried the re-engineering and re-factoring approach and it has only ever worked on the fringes.

"This approach won’t work for the mainstream, mission-critical core applications on the mainframe.”

To what extent have new capabilities like the availability of OpenShift on IBM Z or the ability to run Kubernetes on mainframes changed the conversation about legacy mainframe-based apps?

“This is an interesting question, and one that we’ve seen several times recently. The mainframe is a unique hardware architecture that supports two operating environments: a legacy environment, which runs what we know as legacy applications, and a modern Linux environment.

"When we talk about legacy migration we are not talking about the Linux side of the mainframe, it’s the legacy side. There is a bit of sleight of hand going on with these discussions. OpenShift, Containers, Kubernetes etc. may be available on a mainframe, and indeed they can even run virtualised Linux applications in the legacy partition, – but that doesn’t help someone who’s got five million lines of COBOL running under CICS!” [IBM's transaction processing subsystem for the z/OS operating system.]

How can teams enhance productivity and reduce costs without entirely abandoning the mainframe? Top tips welcomed.

“We firmly believe that an incremental approach to mainframe modernisation is the most likely to succeed. This incremental approach can be a step on a journey or an end in itself. As an example, we recently worked with a major European bank to migrate its core banking applications off the mainframe and on to Linux, whilst allowing them to leave data, and some elements of the application, on the mainframe during the migration phase. They are still running some workload on their mainframe, but they have dramatically accelerated their application deployments through being able to apply modern Linux DevOps tools and development pipelines to their legacy mainframe applications.

"A development pipeline that requires a mainframe is an expensive and time consuming proposition.  Moving the pipeline off-mainframe for the purposes of testing opens up so many opportunities for agility improvements and cost savings. If at the end of the day the customer wants to deploy a modernised application on a mainframe, that’s fine, but we want to help them get there faster. They may choose to keep the mainframe for those applications that, in their view, demand it, but for those applications that don’t, we make it easy to use Linux instead. Platform choice is returned to the customer.”

IBM itself offers some pretty flexible new mainframe licensing options and slick new boxes. Isn't that an easier bet than some kind of esoteric lift-and-shift to a private or public cloud?

“The mainframe debate isn’t just about cost. Of course, everyone is looking to do things more efficiently and save money, but the bigger issues that people have with their legacy mainframes are the aforementioned staff shortages, and the fact that the applications are often impenetrable.

"They live on what is essentially a technological island, where they can’t naturally take advantage of the innovations of open systems.

"It doesn’t matter what the mainframe vendors do in terms of mainframe pricing or making the boxes look cool, it doesn’t solve the core legacy problem. That isn’t fixed with a cheaper mainframe.”

How real is the perennially recurring horror story about the demise of all the COBOL experts? How much impact does that have on those thinking about getting workloads off mainframes?

“I’ve had a lot of discussions this year about the so-called ‘COBOL crisis’. From our perspective, the way COBOL is being talked about in this debate is a red herring. The challenges organisations face with legacy systems are not in fact a result of COBOL, or any other programming language; the language is just a syntax for expressing business rules. COBOL is a programming language like any other, one that any self-respecting programmer could pick up and learn. The problem is the mainframe development environment, which really is unique. People with skills in that environment are retiring and organisations are struggling to find people to replace them. The problem is attributed to COBOL skills, but it is people who understand how to develop on a mainframe who are in short supply.
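
Cresswell's "just a syntax" point is easy to see in a fragment. The sketch below -- names and numbers invented for illustration -- expresses a simple interest rule in plain, environment-free COBOL; compiled with GnuCOBOL (cobc -x interest.cob), it runs unchanged on Linux.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. INTEREST.
      * Illustrative only: a made-up business rule expressed in
      * plain COBOL, with no mainframe-specific services at all.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-PRINCIPAL    PIC 9(7)V99  VALUE 250000.00.
       01  WS-ANNUAL-RATE  PIC 9V9999   VALUE 0.0325.
       01  WS-INTEREST     PIC 9(7)V99.
       01  WS-INTEREST-OUT PIC Z,ZZZ,ZZ9.99.
       PROCEDURE DIVISION.
       MAIN-PARA.
      * The business rule: one year of simple interest.
           COMPUTE WS-INTEREST ROUNDED =
               WS-PRINCIPAL * WS-ANNUAL-RATE
           MOVE WS-INTEREST TO WS-INTEREST-OUT
           DISPLAY 'INTEREST DUE: ' WS-INTEREST-OUT
           STOP RUN.

Nothing about the language itself is exotic. As the interview argues, it is the development environment around fragments like this, not the syntax, that demands the scarce skills.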

“That said, the shortage of mainframe development skills is a very real threat. As we saw earlier this year, multiple US state government departments found themselves in the news as their mainframe systems failed to handle the surge in unemployment claims resulting from the pandemic. The strategy at the time was to drag as many COBOL programmers out of retirement as they could find, but a more strategic answer is to move the applications into an environment where millennials can work on them as easily as any other application they are used to supporting. These US state governments are merely the canaries in the coal mine, and that’s why we’ve seen such a surge of interest in migration.”

See also: Can open source survive the cloud?