Simplifying Db2 for z/OS CPU Optimization: Eradicating Inefficient SQL Processing

Without doubt the IBM Z Mainframe server is recognised as the de facto choice for storing mission critical System of Record (SOR) data in database repositories, serving 92 of the top 100 global banks, 23 of the top 25 global airlines, the top 10 global insurers & ~70% of all Fortune 500 companies. ~80% of mission critical data is hosted by IBM Z Mainframe servers, processing 30+ Billion transactions per day, including ~90% of all credit card transactions. This data is accessed via ~1.3 Million CICS transactions per second, compared with a Google (mostly search) processing rate of ~70,000 transactions per second. Interestingly enough, despite processing so many mission critical transactions, the IBM Z Mainframe server platform accounts for only ~6.2% of global IT spend. One must draw one’s own conclusions as to why some IT professionals perceive the IBM Z Mainframe server as a legacy platform, not worthy of consideration as a strategic IT server platform…

Digital transformation has delivered exponential data growth, typically classified as Cloud, Mobile & Social based. This current & ever-growing data source requires intelligent analytics to deliver meaningful business decisions, which in turn demands agile application software delivery to gain competitive edge. This digital approach can generate a myriad of micro business application changes, personalised for each & every customer, often delivering “pop-up” applications…

IBM Z Mainframe software costs are often criticized as a major barrier to maintaining or indeed commissioning the platform. IBM have tried to minimize these costs with numerous sub-capacity pricing options over the last 30 years or so, but this is perceived by many as overly complicated; although with a modicum of knowledge, a specialized personnel resource can easily control software costs. All that said, IBM have introduced Tailored Fit Pricing for IBM Z, in an attempt to simplify software cost management. A recent blog reviewed the Tailored Fit Pricing for IBM Z offering & whether or not you decide this IBM Z pricing mechanism is suitable for your organization, optimizing IBM Z CPU MSU/MIPS usage is mandatory. Recognizing that the IBM Z Mainframe server is the de facto database server for System of Record data, primarily via the Db2 subsystem, optimizing Db2 CPU usage, whether for OLTP transactions, typically via CICS, or for the batch window, has been & always will be worthwhile…

All too often, many IT disciplines can be classified with a generic 80/20 rule & data is no exception, where 80% of data is accessed 20% of the time & 20% of data is accessed 80% of the time. The challenge with such a blunt Rule of Thumb (ROT) is that it’s static, but it’s a good starting point. Ideally, for any large data source, there would be a dynamic sampling mechanism that identifies the most active data, loading it into the highest speed memory resource to reduce I/O access times & therefore CPU usage. Dynamic management of such a data buffer would render the 80/20 rule extraneous to requirements, as each & every business has its own data access profile, and a simple cost benefit & therefore Proof of Value (POV) analysis could follow.

From a Db2 viewpoint, pre-defined structures such as buffer pools offer some relief, storing highly referenced data in a high-speed server memory resource, but this has a finite capacity versus performance benefit, not necessarily using the fastest memory structures available nor dynamically caching the most accessed data. The business consequences of not optimizing Db2 data access are:

  • Elongated Batch Processing: With ever increasing amounts of data to process & greater demands for 24/7/365 availability & real-time access, data access optimization is fundamental for optimized service delivery, often measured by mission critical SLA & KPI metrics. Optimized batch processing is a fundamental requirement for acceptable customer facing business service delivery.
  • Slow Transaction Response Times: As the nature of customer requirements changes, with mobile device applications exponentially increasing the number of daily transactions, overall system resource capacity is often stressed during peak hours. Optimized transaction response time is a fundamental requirement, being the most transparent service delivered to each & every end customer.

An easy but very expensive solution to remediate batch processing & transaction response issues is to provide more resources via a CPU server upgrade activity. A more sensible approach is to optimize the currently deployed resources, safeguarding that frequently accessed data is mostly if not always high-speed cache resident, reducing the I/O processing overhead & associated CPU usage, which in turn will optimize batch processing & transaction response times, while controlling associated IBM Z Mainframe server hardware & software costs.

The ubiquitous Db2 data access method is Structured Query Language (SQL) based, where IBM has its own implementation, SQL for Db2 for z/OS, which could be via the commonly used COBOL (EXEC SQL) programming language or a Db2 Connect API (E.g. ADO.NET, CLI, Embedded SQL, JDBC, ODBC, OLE DB, Perl, PHP, pureQuery, Python, Ruby, SQLJ). Whether embedded or via Db2 Connect, there are 2 types of SQL processing, static & dynamic. Static SQL minimizes execution time because its access path is determined in advance; dynamic SQL is processed when the SQL statement is submitted to the IBM Z Db2 server, although the Dynamic Statement Cache provides some relief. Dynamic SQL is more flexible, but potentially slower. The decision to use static or dynamic SQL is typically made by the application programmer. There is a danger that the Dynamic Statement Cache might be considered a panacea for SQL CPU performance optimization, but as per any other performance activity, reviewing historical changes is a good idea. The realm of possibility exists for the Db2 Subject Matter Expert (SME) to be pleasantly surprised that more often than not, there are still significant SQL CPU optimization opportunities…

From a generic Db2 viewpoint, with static SQL, you cannot change the form of SQL statements unless you make changes to the program. However, you can increase the flexibility of static statements by using host variables. Obviously, application program changes are not always desirable.
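
To illustrate, below is a minimal COBOL sketch of a static SQL SELECT using host variables, assuming a hypothetical EMPLOYEE table with EMPNO & SALARY columns (loosely based on the IBM sample tables) & an illustrative program name; the statement text is fixed at precompile & BIND time, with only the host variable values changing between executions:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. STATSQL1.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  HV-EMPNO        PIC X(6).
       01  HV-SALARY       PIC S9(7)V99 COMP-3.

       PROCEDURE DIVISION.
      * Static SQL: the statement text & access path are fixed at
      * BIND time; only the host variable values vary at run time.
           MOVE '000010' TO HV-EMPNO
           EXEC SQL
               SELECT SALARY
                 INTO :HV-SALARY
                 FROM EMPLOYEE
                WHERE EMPNO = :HV-EMPNO
           END-EXEC
           IF SQLCODE = 0
               DISPLAY 'SALARY FOR ' HV-EMPNO ' = ' HV-SALARY
           ELSE
               DISPLAY 'SELECT FAILED, SQLCODE = ' SQLCODE
           END-IF
           GOBACK.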

Dynamic SQL provides flexibility; if an application program needs to process many data types & structures, such that the program cannot define a model for each one in advance, dynamic SQL overcomes this challenge. Dynamic SQL processing is also facilitated interactively by Query Management Facility (QMF), SQL Processing Using File Input (SPUFI) or the UNIX System Services (USS) Command Line Processor (CLP). Not all SQL statements are supported when using dynamic SQL. A Db2 application program that processes dynamic SQL accepts as input, or generates, an SQL statement in the form of a character string. Programming is simplified when you can structure programs not to use SELECT statements, or to use only those that return a known number of values of known types.
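
For comparison, a minimal COBOL sketch of dynamic SQL processing follows, again assuming the hypothetical EMPLOYEE table & an illustrative program name; the statement exists only as a character string at run time & must be PREPAREd before it is EXECUTEd, so Db2 determines the access path at execution time, unless a matching statement is found in the Dynamic Statement Cache:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. DYNSQL1.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  HV-EMPNO            PIC X(6).
       01  WS-STMT.
           49  WS-STMT-LEN     PIC S9(4) COMP.
           49  WS-STMT-TXT     PIC X(100).

       PROCEDURE DIVISION.
      * Dynamic SQL: the statement only exists as a character string
      * at run time & must be PREPAREd before it can be EXECUTEd.
           MOVE 'DELETE FROM EMPLOYEE WHERE EMPNO = ?' TO WS-STMT-TXT
           MOVE 36 TO WS-STMT-LEN
           EXEC SQL PREPARE DYNSTMT FROM :WS-STMT END-EXEC
           MOVE '000330' TO HV-EMPNO
           EXEC SQL EXECUTE DYNSTMT USING :HV-EMPNO END-EXEC
           IF SQLCODE NOT = 0
               DISPLAY 'DYNAMIC SQL FAILED, SQLCODE = ' SQLCODE
           END-IF
           GOBACK.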

For Db2 data access, SQL statement processing requires an access path. The major SQL statement performance factors to consider are the amount of time that Db2 uses to determine the access path at run time & whether the access path is efficient. Db2 determines the SQL statement access path either when you bind the plan or package that contains the SQL statement, or when the SQL statement executes. The repeating cost of preparing a dynamic SQL statement can make performance worse when compared with static SQL statements. However, if you execute the same SQL statement often, using the dynamic statement cache decreases the number of times dynamic statements must be prepared.
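
One way to review access path efficiency is via the EXPLAIN facility; the minimal COBOL sketch below assumes a package bound with EXPLAIN(YES), a PLAN_TABLE already created under the binder’s schema & a hypothetical program name of PAYROLL1, reading back a few key PLAN_TABLE columns (E.g. ACCESSTYPE, MATCHCOLS) for review:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. PLANCHK1.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  HV-PROGNAME         PIC X(8) VALUE 'PAYROLL1'.
       01  HV-QUERYNO          PIC S9(9) COMP.
       01  HV-TNAME            PIC X(128).
       01  HV-ACCESSTYPE       PIC X(2).
       01  HV-MATCHCOLS        PIC S9(4) COMP.

       PROCEDURE DIVISION.
      * Review the access paths recorded at BIND ... EXPLAIN(YES);
      * E.g. ACCESSTYPE 'R' = tablespace scan, 'I' = index access.
           EXEC SQL
               DECLARE APCSR CURSOR FOR
               SELECT QUERYNO, TNAME, ACCESSTYPE, MATCHCOLS
                 FROM PLAN_TABLE
                WHERE PROGNAME = :HV-PROGNAME
                ORDER BY QUERYNO, QBLOCKNO, PLANNO
           END-EXEC
           EXEC SQL OPEN APCSR END-EXEC
           PERFORM UNTIL SQLCODE NOT = 0
               EXEC SQL
                   FETCH APCSR INTO :HV-QUERYNO, :HV-TNAME,
                         :HV-ACCESSTYPE, :HV-MATCHCOLS
               END-EXEC
               IF SQLCODE = 0
                   DISPLAY HV-QUERYNO ' ' HV-TNAME ' '
                           HV-ACCESSTYPE ' ' HV-MATCHCOLS
               END-IF
           END-PERFORM
           EXEC SQL CLOSE APCSR END-EXEC
           GOBACK.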

Typically, organizations have embraced static SQL over dynamic because static is more predictable, showing little or no change, while dynamic implies ever changing & unpredictable behaviour. Db2 performance optimization functions have been incorporated into base Db2 (E.g. Buffer Pools) & software products (E.g. IBM Db2 AI for z/OS, IBM Db2 for z/OS Optimizer, IBM Db2 Analytics Accelerator, IBM Z Table Accelerator (IZTA)), with varying levels of benefit & cost. Ultimately IBM Z Mainframe customers need simple, cost-efficient, off-the-shelf solutions of a plug & play variety & without doubt, optimizing static SQL data processing is a pragmatic option for reducing Db2 subsystem CPU usage.

In Db2 Version 10, support for a 64-bit run time was introduced, providing Virtual Storage Constraint Relief (VSCR) & improving the vertical scalability of Db2 subsystems. With Db2 Version 11, exploitation of the key z/Architecture benefit of 64-bit virtual addressing was extended, increasing the capacity of central memory & virtual address spaces from 2 GB to 16 EB (Exabytes), eliminating most storage constraints. It therefore follows that any Db2 CPU performance optimization solution should also exploit the z/Architecture 64-bit feature, to support the ever-increasing data storage requirements of today’s digital workloads.

As we have identified, Db2 can consume significant amounts of z/OS CPU accessing & retrieving the same static, frequently used data elements repetitively. Upon analysis, these static, frequently used data elements typically originate from a small percentage of Db2 tablespaces. At first glance the associated simple SQL programs are considered low risk, but they are repeatedly processed, often at peak processing times, consuming excessive CPU & increasing processing cost accordingly, typically z/OS Monthly Licence Charges (MLC) related. Db2 optimization tools for access path or buffer pool management provide some benefit, but this is not always significant & may require application changes. Patently there is a clear & present requirement for a simple plug & play solution, transparent to Db2 processing, maintaining an optimized high-performance in-memory cache of frequently used Db2 data, safeguarding data integrity in various environments, including SYSPLEX, Data Sharing, et al…

QuickSelect is a plug-in solution, dynamically activated in a batch or OLTP environment (I.E. CICS, IMS/TM), intercepting repetitive SQL statements from Db2 application programs & storing the most active result sets, not necessarily the entire tablespace, in a high-performance in-memory cache, returning to applications the same result set as Db2 would, but much faster & using less CPU accordingly. QuickSelect is completely transparent to z/OS applications, eliminating any requirement to change, recompile or relink application source, or rebind packages. QuickSelect processing can be switched on or off with a single keystroke, either defaulting to standard Db2 SQL processing or benefitting from the QuickSelect high-speed cache for optimized CPU resource usage.

The 64-bit QuickSelect server, implemented as a started task, intelligently caches data in self-managed memory above the bar, supporting up to 16 EB of memory & eliminating any concerns associated with using other commonly used storage areas (E.g. ECSA). The intelligent caching mechanism safeguards that only highly active data is retained, optimizing the associated cache memory size required.

QuickSelect caches frequently requested Db2 SQL result sets, returning these results to the application from the QuickSelect cache when a repetition of the same SQL is encountered. For data integrity purposes, QuickSelect immediately invalidates result sets upon detection of changes to underlying tables, implicitly validating each cache-resident SQL result set. Changes to Db2 data by application programs are captured by a standard Db2 VALIDPROC process, attached to the typically small subset of frequently accessed tables of interest to QuickSelect. Db2 automatically activates the VALIDPROC routine whenever the table contents are changed by INSERT, DELETE, UPDATE or TRUNCATE statements, invalidating cached data from the updated tables automatically. For standard Db2 utilities such as LOAD/REPLACE, REORG/DISCARD & RECOVER, table-level changes are identified by a QuickSelect utility-trap, again invalidating cached data from the updated tables automatically. QuickSelect also supports SYSPLEX & Data Sharing environments, handling update activity via the same XCF functions & processes used by Db2.

QuickSelect delivers the following benefits:

  • CPU Savings: Meaningful reduction (E.g. 20%) in direct Db2 SQL processing; a 10%+ peak time CPU reduction is not uncommon.
  • Faster Processing: Optimized CPU usage delivers shorter batch processing & OLTP transaction response times, for related SLA & KPI objective compliance.
  • Transparent Implementation: No application changes required, whether source code, load module or Db2 package.
  • Survey Mode: Unobtrusive data sampling with minimal Db2 workload overhead, identifying potential CPU savings from repetitive SQL & the tables of interest, before implementation.
  • Staggered Deployment: The ability to implement via granular criteria (E.g. Job, Program, Table, Transaction, Etc.).
  • Reporting & Analytics: Extensive information detailing cache usage for Db2 programs & tables.

Since 1993 Db2 has evolved dramatically, in line with the evolution of the IBM Z Mainframe server. Considering today’s requirements for a digital world, processing ever increasing amounts of mission critical data, optimizing CPU usage for Db2 SQL data access is a base & mandatory requirement. In a hybrid support environment, where today’s IBM Z Mainframe support resource requires an even blend of technical & business skills, plug & play, easy-to-use & results driven solutions are required to optimize CPU usage, transparent to the subsystem & related application programs. QuickSelect is such a solution, fully exploiting 64-bit z/Architecture for ultimate scalability, identifying & resolving a common CPU consuming data access problem for a mission critical resource, namely the Db2 subsystem, which maintains mission-critical System of Record data.

z/OS CPU optimization is a mandatory requirement for every organization, to reduce associated software & hardware costs &, arguably, as a prerequisite for deploying the Tailored Fit Pricing for IBM Z pricing mechanism. Tailored Fit Pricing uses the previous 12 Months of SCRT submissions to establish a baseline for MSU charging over a contracted period, typically 3 years. If there are any unused MSU resources, these are carried forward to the next year, but if those MSU resources remain unused at the end of the contracted period, they are lost, meaning the organization has paid too much. If MSU usage exceeds the agreed Tailored Fit Pricing baseline, excess MSU resources are charged at a discounted rate. Clearly, achieving an optimal MSU baseline before embarking on a Tailored Fit Pricing contract is arguably mandatory & it therefore follows that optimizing CPU forever more safeguards optimal z/OS MLC charging during the Tailored Fit Pricing contract. QuickSelect for Db2 is a seamless CPU optimization product that will perpetually deliver benefit, assisting organizations to minimize their z/OS MLC costs, whether they continue to proactively manage the R4HA, submitting monthly SCRT reports, or they embark on a Tailored Fit Pricing contract…

Data Entry – Is Windows XP & Office 2003 End Of Support An Issue?

Recently somebody called me to say “do you realize your Assembler (ASM) programs are still running, some 25 years after you implemented them?”  Ouch, the problem with leaving comments and an audit trail, even in 1989!  It was a blast-from-the-past and a welcome acknowledgement, even though secretly, I can’t really remember the code.  We then got talking about how Mainframe programs can stand the test of time, through umpteen iterations of the Operating System.  This article will consider whether you need a Mainframe to write application code that will stand the test of time.

Spoiler alert: No you don’t; nowadays a good application development environment, a competent software coder and most importantly of all, common sense, can achieve this, for Mainframe and Distributed Systems alike.  However, you might need to recompile the source code from time-to-time…

After several years of uncertainty, Microsoft have officially announced that support for Windows XP (SP3) & Office 2003 ends as of 8 April 2014.  Specifically, there will be no new security updates, non-security hotfixes, free or paid assisted support options or online technical content updates.  Furthermore, Microsoft state:

Running Windows XP SP3 and Office 2003 in your environment after their end of support date may expose your company to potential risks, such as:

  • Security & Compliance Risks: Unsupported and unpatched environments are vulnerable to security risks. This may result in an officially recognized control failure by an internal or external audit body, leading to suspension of certifications, and/or public notification of the organization’s inability to maintain its systems and customer information.
  • Lack of Independent Software Vendor (ISV) & Hardware Manufacturer Support:  A recent industry report from Gartner Research suggests “many independent software vendors (ISVs) are unlikely to support new versions of applications on Windows XP in 2011; in 2012, it will become common.” And it may stifle access to hardware innovation: Gartner Research further notes that in 2012, most PC hardware manufacturers will stop supporting Windows XP on the majority of their new PC models.

Looking at the big picture, anybody currently deploying Windows XP might want to consider the lifecycle of other Microsoft Operating System versions, for example, Windows Vista, Windows 7 & Windows 8.  As the Microsoft Windows Lifecycle Fact Sheet states, mainstream support for Windows 7 ends in January 2015, less than one year from now, and so arguably the only viable option is Windows 8.  The jump from Windows XP to Windows 8 is massive, not necessarily in terms of usability, but certainly and undoubtedly in terms of compatibility.

Those of us that experienced the Windows Vista, Windows 7 and more latterly Windows 8 upgrades, know from experience that each of these upgrades had interoperability challenges, whether hardware (I.E. Printers, Scanners, Removable Storage, et al), software (I.E. Bespoke, COTS, Utilities, et al) or even web browser (I.E. Internet Explorer, Firefox, Chrome, Safari, et al) related.  Although many of these IT resources might be considered standalone or technology commodities, where a technology refresh is straightforward and an operational benefit, the impact on the business for user facing applications might be considerable.  Of course, the most pervasive business application for capturing and processing customer information is typically classified as data entry related…

So, why might a business still be deploying Windows XP or Office 2003 today?  One typical reason relates to data entry systems, either in-house written or packaged in a Commercial Off the Shelf (COTS) software product.  In all likelihood, one way or another, these deployments have become unsupported from a 3rd party viewpoint, either because of the Microsoft software support ethos or the COTS ISV support policy.

Looking back to when Microsoft Windows XP was first released, it offered an environment that allowed customers to think outside of the box for alternatives to traditional development methods, or put another way, Rapid Application Development (RAD) techniques.  Such a capability dictated that businesses could deploy their own bespoke or packaged systems for capturing data, thus automating the entirety of their business processes from cradle to grave with IT systems.  For a Small to Medium sized Enterprise (SME), this was a significant benefit, allowing them to compete, or at least enter their marketplace, without deploying a significant IT support infrastructure.

Therefore RAD and Microsoft Software Development Kit (SDK) techniques for GUI (E.g. .NET, Visual, et al) presentation, sometimes and more latterly browser based, were supplemented with structured data processing routines, vis-à-vis spreadsheet (CSV), database (SQL) and latterly more formalized data structure layouts (I.E. XML, XHTML).  Let’s not forget Excel 2003 and Access 2003, which offered powerful spreadsheet and database solutions respectively, capable of capturing data, however crude that implementation might have been, while processing this data and delivering reports with a modicum of in-built high-level code.

However, as technology evolves, sometimes applications need to be revisited to support the latest and greatest techniques, and perhaps the SMEs that embraced this brave new world of RAD techniques were left somewhat isolated, for whatever reasons; maybe business related, whether economic (E.g. dot com or financial markets) or not.

Let’s not judge those business folks still running Windows XP or Microsoft Office 2003 today; there are probably many good reasons why.  When they developed their business systems using a Windows XP or Office 2003 software base, I don’t think they envisaged that a subsequent Microsoft Operating System release might eradicate their original application development investments, requiring a significant investment to upgrade their infrastructure for later Windows versions, but more notably, for interoperability resources (I.E. Web Browsers, .NET, Excel, Access, ODBC, et al).

So if you’re a business running Windows XP and maybe Office 2003 today, potential PC (E.g. Desktop, Laptop) upgrade challenges can be separated into two distinct entities: firstly, the hardware platform and operating system itself, where the “standard image” approach can simplify matters; and secondly, the business application, typically data entry and processing related.  Let’s not forget, those supported COTS software products, whether system utility (E.g. Security, Backup/Recovery/Archive, File Management, et al) or business function (E.g. Accounting, ERP, SCM, et al) related, can be easily upgraded.  It’s just those bespoke in-house systems or unsupported systems that require a modicum of thought and effort…

We all know from our life experiences that if we only have lemons, we make lemonade!  It’s not that long ago that we faced the so-called Millennium Bug (Year 2000/Y2K) challenge, which could be viewed as either a problem or an opportunity.  The enlightened business faced up to the Year 2000 challenge, arguably overblown by media scare stories, upgraded their IT infrastructures and systems, and perhaps for the first time, at least made an accurate inventory of their IT equipment.  So can similar attributes be applied to this Windows XP and Office 2003 challenge?

The first lesson is acceptance; so yes we have a challenge and we need to do something about it.  Even if your business has been running Windows XP or Office 2003, in an extended support mode for many years, in all likelihood, the associated business systems are no longer fit-for-purpose or could benefit from a significant face-lift to incorporate new logic and function that the business requires!

The second lesson is technology evolution; so just as RAD and SDK were the application development buzzwords of the Windows XP launch timeframe, today the term studio or application studio applies.  An application studio provides a complete, integrated and end-to-end package for the creation, including the design, test, debug and publishing activities of your business applications.  Furthermore, in the last decade or so, there has been a proliferation of modern language (E.g. XHTML, Java, C, C++, et al) programmers, whether formalized as IT professionals, or not (E.g. home coders).

The third lesson is, as always, cost versus benefit; the option of paying for Windows XP or Office 2003 extended support ends as of April 2014.  So what is the cost of doing nothing?  As always, cost is never the issue, benefit is.  Investing in new systems that are fit-for-purpose will of course deliver business benefit, and if the investment doesn’t pay for itself in Year 1, hopefully your business can build a multi-year business case to deliver the requisite ROI.

Finally, is remote data entry possible with a Windows XP based system?  Perhaps, but certainly not for each and every modern day device (E.g. Smartphone, Tablet, et al).  Therefore enhancing your data entry systems, with the latest presentation techniques, might deliver significant benefit, both for the business and its employees.  Remote working, whether field or home based, delivers productivity benefits, where such benefits can be measured in both business administration cost reduction and increased employee job satisfaction and associated working conditions.

So how easy can it be to replace an aging Windows XP and/or Office 2003 application?

Entrypoint is a complete application development package for creating high-performance data entry applications.  Entrypoint software is built around a scalable, client-server architecture that interfaces with SQL databases for data storage.  Entrypoint data entry software interfaces with standard communications products and commercial networks.

Entrypoint is a web based data entry system that includes Application Studio, a local development tool that allows the user to easily create any data entry system, based upon their specific and typically unique business requirements.  The Entrypoint thin and thick clients let the user enter their data either directly via web resources or via a local workstation (E.g. PC), as per their requirements, while being connected to the same database.

Entrypoint Benefits: Today’s 21st Century business is focussed on delivering tangible business benefit and cost efficient customer facing solutions that can be rapidly deployed, while being secure and compliant:

  • Flexible Data Entry: Whether via Intelligent Data Capture (IDC) and/or Electronic Data Capture (EDC), Entrypoint can accommodate any business requirement, either from scratch, or perhaps via conversion from a legacy platform (E.g. DOS).
  • Rapid Application Deployment: Entrypoint can be deployed in hours, sometimes and typically by non-application development personnel, safeguarding long-term management and associated TCO concerns.
  • Audit: The Entrypoint Audit Trail Facility (ATF) tracks all changes made to records from the time they are first entered into the case report form throughout all editing activity, regardless of the number of users working on them.  The audit facility can be enabled on an application-by-application basis for all users, groups of users or individual users.
  • Security: Entrypoint includes a variety of features that yield the highest levels of critical security, as required for Clinical Trials.  Its inbuilt security features let you create a customized and granular security policy specific to your needs.  Entrypoint uses ODBC to connect to SQL databases for data storage, which provides an additional level of security; database logins, passwords and even built-in encryption, not always available with other data entry solutions.  Optional 128-bit encryption protects all messages sent to or from the server, delivering significantly greater protection.

Entrypoint is one of the simplest but most comprehensive data entry solutions that I have encountered and provides a cost-efficient solution for both the smallest and largest of businesses.  Furthermore, in all likelihood, and definitely in real-life, an entry-level employee or graduate with programming skills could rapidly develop a Data Entry system with Entrypoint to replace any existing Windows XP (or any other Windows OS) based solution.  This observation alone dictates that somebody who actually works for the business, not a 3rd party IT professional, can not only perform the technical work required, but more importantly, be a company employee that can easily relate to and sometimes learn about the end-to-end business.

In the IT world, change is inevitable, and sometimes change is forced upon us.  Whatever your thoughts regarding end of support for Windows XP and Office 2003, there are options for you and your business to embrace this change, move forward, and improve your processes.  You no longer have the option to pay Microsoft for extended support, so why not use these monies to invest in a system that can be easily supported, and easily adapted in the future, to provide long-term benefit for your business!

COBOL – A Viable Programming Language?

For the last twenty years or so I have encountered many scenarios where Mainframe users consider migration to a Distributed Systems (E.g. Wintel, UNIX/Linux, et al) platform, where more often than not the primary reason seems to be “green screen” and/or “COBOL is a declining legacy language” based.

Going back to basics, COBOL is a Common Business Oriented Language, although the naysayers might say COBOL is a Completely Obsolete Business Oriented Language; we will perhaps try to be more dispassionate in this discussion…

Industry Analysts have stated that there are ~220 Billion lines of COBOL code and ~100,000 COBOL programmers, that COBOL applications process ~80% of business transactions daily, and that there are ~200 times more COBOL transactions processed daily than Google searches!  A lot of numbers and statistics, but seemingly COBOL is still widely used and accepted.  Even from a new development viewpoint, ~5 Billion lines of new COBOL code per annum (~15% of Annual Global Development) are cited, suggesting that COBOL is not in any way obsolete or legacy, so why is COBOL perceived by some in a dubious manner?

Maybe because COBOL was introduced in 1959 and is primarily deployed on the Mainframe, and so anything that is 50+ years old and has an association with the Mainframe just has to be dubious, doesn’t it?  Of course not, as this arguably “pioneering”, or at least one of the first “widely deployed”, programming languages allowed many global and significant businesses to grow, in tandem with the IBM Mainframe platform, automating and streamlining business processes, increasing productivity and so on.  So depending on your viewpoint, COBOL was either in the right place at the right time, stimulating the Data Processing (DP) and Information Technology (IT) revolution, or COBOL just got lucky; it was “Hobson’s Choice”…

Although there have been several iterations of COBOL standards (I.E. COBOL-68, COBOL-74, COBOL-85), primarily associated with the American National Standards Institute (ANSI), and more latterly COBOL 2002 (ISO), a COBOL program that was written and compiled on an IBM Mainframe several decades ago will most likely still run on the latest generation IBM Mainframe.  Put another way, its backwards compatibility has been significant, and although there were some migration considerations associated with the Language Environment (LE), the original COBOL Application Development investment has generated a readily usable Return On Investment (ROI) over and over again.  How true is this for other programming languages and computing platforms?  For the avoidance of doubt, a COBOL program that was written for a 24-bit addressing environment can still run today on a 64-bit platform, and with a modicum of evolution, fully exploit the latest functionality and 64-bit performance, with minimal fuss.  Meanwhile, how many revolutionary or significant upgrades have been required for Commercial Off The Shelf (COTS) software and associated bespoke application development code, to upgrade non-Mainframe platforms from 16 to 32 to 64-bit?

So, is COBOL a viable programming language of the future?  One must draw one’s own conclusions, but we can look to recent functional enhancements and statements of direction from an IBM Mainframe viewpoint.

In recent years IBM have actually increased the number of COBOL R&D personnel by ~100%, while increasing allocated investment, commitment and interest accordingly.  This observation, more than any other, suggests that at least from an IBM Mainframe viewpoint, COBOL remains an important and strategic function.

From a technical function viewpoint, the realm of possibility exists with COBOL; it interacts with 21st century programming techniques and functions, dismissing the notion that COBOL can only be considered a traditional/legacy option for CICS & batch applications and associated “green screen” environments, for example:

  • Support for CICS integrated translator
  • Support for the latest SQL data types and syntax via DB2
  • Support for Java interoperability via object-oriented COBOL syntax
  • Support for access to WebSphere enterprise beans
  • Support for the Java SDK
  • Support for high speed XML parsing and validation (UTF-8, UTF-16 & various EBCDIC codepages), as per the sketch below
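
As a flavour of these capabilities, below is a minimal Enterprise COBOL sketch of XML PARSE, assuming a hypothetical WS-XML-DOC field pre-populated with a trivial XML document and illustrative program and paragraph names; the XML-HANDLER paragraph receives parse events via the XML-EVENT and XML-TEXT special registers:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. XMLDEMO1.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  WS-XML-DOC      PIC X(36) VALUE
           '<customer><id>000010</id></customer>'.

       PROCEDURE DIVISION.
      * XML PARSE drives the XML-HANDLER paragraph once per parse event
           XML PARSE WS-XML-DOC PROCESSING PROCEDURE XML-HANDLER
               ON EXCEPTION
                   DISPLAY 'XML ERROR, XML-CODE = ' XML-CODE
               NOT ON EXCEPTION
                   DISPLAY 'XML DOCUMENT PARSED OK'
           END-XML
           GOBACK.

       XML-HANDLER.
      * XML-EVENT & XML-TEXT are special registers set by the parser
           EVALUATE XML-EVENT
               WHEN 'START-OF-ELEMENT'
                   DISPLAY 'ELEMENT: ' XML-TEXT
               WHEN 'CONTENT-CHARACTERS'
                   DISPLAY 'VALUE:   ' XML-TEXT
               WHEN OTHER
                   CONTINUE
           END-EVALUATE.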

From a strategic statement of direction viewpoint, IBM have declared the following notable activities:

  • Performance and resource utilization optimization, reducing TCO accordingly
  • Improved middleware (I.E. CICS, DB2, IMS, WebSphere) programmability and problem determination
  • Improved capabilities (E.g. XML, Java, et al) for modernizing & creating business critical applications
  • Improved programmer (E.g. Usability and Problem Determination) productivity
  • Source and load (I.E. recompile not required) compatibility (E.g. old programs can call new and vice versa)

Even for those occasions where the IBM Mainframe platform might be decommissioned, COBOL can still be processed on alternative platforms via code migration and rehosting solutions, such as those from Micro Focus, where such functions and services can be Cloud based.  However, once again, isn’t the IBM Mainframe the ultimate “Cloud” platform, as has arguably been the case “forever thus”?

One must draw one’s own conclusions as to why the Mainframe platform and/or COBOL applications are often considered for replacement via migration, when the Mainframe platform is both strategic and cost efficient.  As with any technology decision, there is no “one size fits all” solution, but perhaps a little education can go a long way, with at least the acceptance that seemingly “legacy” technologies are strategic and viable.