Simplified Business Facing IBM Z Mainframe DevOps APM Problem Determination

Increasingly, IBM Z Mainframe stakeholders are becoming cognizant that traditional processes for handling Information Technology operations are obsolescent, hence the emergence of DevOps (DevSecOps) frameworks.  Driven by digital transformation & the perpetually increasing demand for new digital services, consuming unparalleled amounts of data, Data Centres are under ever increasing pressure to deliver & maintain these mission-critical services.  A major challenge is the availability of these services, where transaction & throughput workloads can be unpredictable, often ad-hoc demand driven (E.g. Consumer), rather than the typical periodic planned peaks (E.g. Monthly, Annual, et al).

Today’s inward facing, dispassionate & honest CIO knows their organization can spend inordinate amounts of time reacting to business application impact incidents, all too often lacking the bandwidth to be proactive & prevent the incident from occurring in the first place.  It’s widely accepted that for the majority of Global 1000 companies, deploying an IBM Z Mainframe platform provides them with the de facto System Of Record (SOR) data platform, with associated Database (E.g. Db2) & Transaction (E.g. CICS, IMS) subsystems.  Because the platform plays such a central & integral part of today’s 21st century digital application infrastructure, performance issues can affect the entire business application, dictating that early detection & resolution of performance issues are business critical, with the ultimate goal of eliminating such issues altogether.

Technologies such as z/OS Connect provide a simple & intuitive API based method for the IBM Z Mainframe to become an interconnected platform with all other Distributed Systems platforms.  This dictates an evolution in Operations Management processes, considering the business application from a non-technical & holistic viewpoint, with end-to-end monitoring, regardless of the underlying hardware & software platforms.

Today’s 21st Century digital economy dictates that central Operations teams have neither inordinate amounts of time nor indeed the requisite Subject Matter Expert (SME) skills for problem investigation activities.  A more proactive & automated response would be the deployment of simplified, lean & cost-efficient automated monitoring processes, allowing Operations teams to detect potential problems & their associated failure reason in near real-time.

Distributed tracing provides a methodology for interpreting how applications function across processes, services & networks.  Tracing follows requests as they move from one interconnected system to another, capturing the associated activity trail accordingly.  Therefore with Distributed tracing, organisations can monitor applications via Event Streams, helping them to understand the size & shape of the generated traffic, assisting in the identification of potential business application problems & their related causes.  It comes as no surprise that Distributed tracing has become a pivotal cornerstone of the DevOps toolbox, leveraging the pervasive Kafka Open-Source Software technology for distributed event streaming.  Simply, Kafka provides meaningful context from the messaging & logging generated by various IT platforms, delivering data flow visualizations, simplifying the identification & prioritization of business application performance anomalies.  Put simply, Kafka based Distributed tracing pinpoints where failures occur & what causes poor performance (I.E. X marks the spot)!
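
To make this tangible, a minimal sketch follows, assuming the standard Apache Kafka Java client, a locally reachable broker & a purely hypothetical topic name, key & payload; it shows how trace context (here, a W3C Trace Context style “traceparent” value) can travel with an event as a record header, allowing downstream consumers & APM tools to stitch the request path back together:

  import java.nio.charset.StandardCharsets;
  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class TraceAwareProducer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker address
          props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
          props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

          try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
              // W3C Trace Context style value: version-traceId-spanId-flags, propagated system to system
              String traceparent = "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01";
              ProducerRecord<String, String> record =
                  new ProducerRecord<>("mainframe.trace.events", "tx-12345", "{\"status\":\"OK\"}");  // hypothetical topic, key & payload
              record.headers().add("traceparent", traceparent.getBytes(StandardCharsets.UTF_8));
              producer.send(record);  // downstream consumers read the header & continue the trace
          }
      }
  }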

From a business & therefore non-technical viewpoint, the utopia is to understand the user experiences delivered & associated business impacts; ideally positive, therefore eliminating the negative.  Traditionally from a technical viewpoint, experts have focussed on MELT (Metrics, Events, Logs, Traces) data collection, allowing for potential future problem determination & resolution.  Historically, when this was the only data available, it therefore follows that manual & time consuming technical processes ensued.  As we have explored, DevOps is about simplification, optimization, automation & ultimately delivering the best business service!  If only there was a better way…

OpenTelemetry is a collection of tools, APIs & SDKs, utilized to instrument, generate, collect & export telemetry data (Metrics, Events, Logs, Traces) to assist software performance behavioural analysis.  Put simply, OpenTelemetry is an Open-Source Software, vendor agnostic standard for collecting telemetry data from applications & their supporting infrastructures & services, comprising the following building blocks (a minimal instrumentation sketch follows this list):

  • APIs: Instrument application code to generate telemetry data (E.g. Traces)
  • SDKs: Collect & process the telemetry data generated via the APIs
  • Exporters (In-Process or Out-Of-Process): Translate telemetry data into standard or custom formats for Back-End processing
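
As a minimal illustration only, assuming the OpenTelemetry Java API & SDK artefacts plus the simple logging exporter (the instrumentation & span names below are purely hypothetical), the following sketch shows how the API, SDK & Exporter building blocks fit together:

  import io.opentelemetry.api.OpenTelemetry;
  import io.opentelemetry.api.trace.Span;
  import io.opentelemetry.api.trace.Tracer;
  import io.opentelemetry.exporter.logging.LoggingSpanExporter;
  import io.opentelemetry.sdk.OpenTelemetrySdk;
  import io.opentelemetry.sdk.trace.SdkTracerProvider;
  import io.opentelemetry.sdk.trace.export.SimpleSpanProcessor;

  public class OpenTelemetrySketch {
      public static void main(String[] args) {
          // SDK & Exporter: wire a tracer provider to an exporter (a simple logging exporter here)
          SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
              .addSpanProcessor(SimpleSpanProcessor.create(LoggingSpanExporter.create()))
              .build();
          OpenTelemetry otel = OpenTelemetrySdk.builder()
              .setTracerProvider(tracerProvider)
              .build();

          // API: instrumented code creates spans (trace telemetry data)
          Tracer tracer = otel.getTracer("example-instrumentation");            // hypothetical instrumentation name
          Span span = tracer.spanBuilder("db2-accounting-lookup").startSpan();  // hypothetical operation name
          try {
              span.setAttribute("subsystem", "Db2");  // attributes enrich the trace for searching & filtering
          } finally {
              span.end();  // the processor hands the finished span to the exporter
          }
          tracerProvider.shutdown();
      }
  }

In a real deployment, the logging exporter would typically be replaced by an OTLP exporter targeting the chosen APM Back-End.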

In conclusion, from a big picture viewpoint, the IBM Z Mainframe is just another IP node on the network, seamlessly interconnecting with Distributed Systems platforms for 21st century digital business application processing.  Regardless of technical platform, DevOps is not a technical discipline; it’s a business orientated user experience process & as such, requires automated issue detection & rapid resolution.  Open-Source Software (OSS) standards & techniques such as OpenTelemetry & Distributed Tracing allow for the simplified, low cost collection & visualization of instrumentation data.  How can the IBM Z Mainframe organization incorporate a DevOps facing solution to aggregate this log data, providing an optimal cost, resource friendly Application Performance Management (APM) solution for simplified business application performance problem identification?

z/IRIS (Integrable Real-Time Information Streaming) integrates the IBM Z Mainframe platform into commonplace pervasive enterprise wide Application Performance Monitoring (APM) solutions, allowing DevOps resources to gain the insights they need to better understand Mainframe utilization & potential issues for mission critical business services.

z/IRIS incorporates OpenTelemetry observability for IBM Z Mainframe systems & applications, enriching traces (E.g. Db2 Accounting, Db2 Deadlock, z/OS Connect, JES2, OMVS, STC, TSO) with attributes to facilitate searching, filtering & analysis of traces in today’s 3rd party enterprise wide APM tools (E.g. AppDynamics, Datadog, Dynatrace, IBM Instana, Jaeger, New Relic, Splunk, Sumo Logic).

Capturing metrics & creating associated charts has been an integral part of performance monitoring for several decades.  z/IRIS seamlessly integrates with APM tools such as Instana & data visualization tools such as Grafana to supply zero maintenance automated dashboards for commonplace day-to-day usage.  Of course, each & every business requires their own perspectives, hence z/IRIS incorporates easy-to-use customizable dashboards for such requirements.  Because APM & data visualization tools collect data metrics from a variety of information sources, tracing every request from Cradle (E.g. Client Browser) to Grave (E.g. Host Server), the z/IRIS Mainframe data combinations for your digital dashboards are potentially infinite, where the data presented is always accurate & in real time.

z/IRIS is simple to use & simple to install, incorporating many tried & tested industry standard Open-Source Software components, optimizing costs & simplifying product support.  Wherever possible, z/IRIS uses Java based applications, minimizing IBM Z Mainframe general purpose CPU utilization by utilizing zIIP processing cycles whenever available.  z/IRIS delivers a lightweight, resource & cost efficient z/OS APM solution to provide an end-to-end performance analysis of today’s 21st Century digital solutions.  Because z/IRIS leverages industry standard Open-Source frameworks deployed by commonplace Distributed Systems APM solutions, the instrumentation captured & interpreted by z/IRIS enriches dynamically as APM functionality increases.  For example, Datadog Watchdog Insights can identify increased latency from a downstream z/OS Connect application, just by processing new analytics from existing telemetry data.  The data had already been captured; as APM functionality evolves, new meaningful business insights are gained.  z/IRIS can deliver the following example benefits for any typical IBM Z Mainframe DevOps environment:

  • Automated IBM Z Mainframe Observability: Automate the collection of end-to-end data tracing information.
  • Real Time Impact Notification: Intelligent data processing to present meaningful DevOps dashboard notifications of business application service status & variances.
  • Universal Access & Ease Of Use: Facilitate end-to-end Application Performance Monitoring (APM) for all IT teams, not just IBM Z Mainframe Subject Matter Experts (SME).
  • Reduce MTTD & MTTR For Optimized User Services: Reduce Mean Time To Detect (MTTD) & ideally eradicate the Mean Time To Repair (MTTR), the typical Key Performance Indicators (KPIs), with intelligent root cause analysis.

IBM Z Mainframe Pre-Production Testing: Spring Into Stress Testing via zBuRST

For those of us in the Northern Hemisphere it’s been another long & cold Winter & for many, a time of pandemic lockdown.  As we enter Spring, we often associate this annual season with hope & new life & perhaps opportunity.  Henry Wadsworth Longfellow once wrote “If Spring came but once in a century, instead of once a year, or burst forth with the sound of an earthquake, and not in silence, what wonder and expectation would there be in all hearts to behold the miraculous change”!  Let’s not get carried away, but I have recently worked with an IBM Z customer to finally perform a Pre-Production full workload test via the IBM Z Business Resiliency Stress Test (zBuRST) solution…

In an ideal world, zBuRST would offer a much needed solution for all IBM Z Mainframe users with limited resource or budget to perform Pre-Production full workload testing activities.  However, in reality, there are some significant qualification caveats, primarily a minimum of 10,000 MIPS installed workload capacity & the need for z14 or newer generation Mainframe servers.  As with anything in business or indeed life, if you don’t ask, you will never know & there is some flexibility from an installed MIPS viewpoint via your local IBM account team.

IBM Z Business Resiliency Stress Test (zBuRST) is a solution that enables the use of spare IBM Z server physical resources to stress test changes at Production workload scale, allowing qualitative & quantitative validation of any Production change to safeguard the performance & resilience profile of IBM Z mission critical workloads.  For the avoidance of doubt, a Pre-Production test can be verified with a minimal data subset for qualitative purposes, but only a 100%+ data quantitative stress test will verify the SLA & KPI metrics required for a mission critical workload.  zBuRST only supports Pre-Production (DevTest) environments, which could include a GDPS internal environment, or a 3rd party DR supplier.  However, zBuRST cannot be used for any DR activity, testing or real-life invocation.  Hopefully most IBM Z mainframe users are savvy & have included some flexibility in their 3rd party DR provision contracts, allowing for periodic use of such facilities, not solely DR based.  This is not an unusual requirement & if you rely upon a 3rd party provider for IBM Z resilience, work with them to evolve your IBM Z resource provision service contracts accordingly.

From a big picture viewpoint, zBuRST reduces change risk, safeguarding business resiliency by enabling the detection and resolution of abnormalities and defects in a Pre-Production environment, before they manifest as business outages, disruptions or slowdowns in Production:

  • For IBM Z users with matching (identical) hardware in a standalone test or DR environment, zBuRST provides the ability to perform load or stress test of new IBM Z hardware features & upgraded functions.
  • For IBM Z users whose DR sites do not match their Production environment, the zBuRST objective is to enable critical workload (E.g. use all available resource to test the mission critical workloads) testing.

From an eligibility viewpoint, if your organization is currently testing with constrained IBM Z resources, prohibiting adequate Production workload sized testing, zBuRST improves workload resiliency:

  • Can your business scale reliably & conform to SLA & KPI Metrics during seasonal or ad-hoc peak processing demands (E.g. Year End, Black Friday, Cyber Monday, et al)?
  • Is your business mission critical application impacted by change aversion, with fear of disrupting Production stability?
  • Are your agile DevOps aspirations hampered by a legacy waterfall application development approach, taking too long to adequately test changes, or to introduce new features & functions for Production workloads?
  • Do elongated Production outages (I.E. Downtime) come at an excessive or prohibitive business cost?
  • Is it too complex to provision adequate local or 3rd party IBM Z resources for large scale volume or integration tests?

The zBuRST solution has a number of prerequisites & the primary considerations are as follows (an illustrative capacity example follows this list):

  • zBuRST is an extension of the IBM Z Application Development and Test Solution (DevTest Solution).
  • zBuRST Tokens are discounted at 80% from the cost of On/Off CoD capacity.
  • zBuRST can be purchased for systems with a minimum of 10,000 installed MIPS, for up to 50%-100% of Production capacity.  All MIPS capacity must reside in the same country.
  • zBuRST pre-paid tokens can be purchased up to 100% of the additional capacity needed to support Production scale stress testing.
  • zBuRST tokens allow for up to 15 days of testing; tokens can be activated for any 15 calendar days, whether consecutive or not (E.g. Perform n stress tests of varying duration).
  • zBuRST tokens expire 5 years from the IBM Z server LICC “Withdrawal from Marketing” date.
  • For DevTest Solutions, zBuRST capacity can be purchased to increase the size of the DevTest environment up to the equivalent number of Production MIPS.
  • For DR machine usage, zBuRST tokens can be purchased up to the equivalent number of Production MIPS.
  • zBuRST tokens can only be installed & exclusively used on IBM Z hardware owned by the IBM Z user (customer); zBuRST is not available to 3rd party IBM Z resource service providers.
  • zBuRST tokens are pre-paid On/Off CoD LIC records.  There can only be one On/Off CoD record active at a time.  Post-paid On/Off CoD LIC records & zBuRST tokens cannot be active at the same time on the same machine.  There cannot be mixing of pre-paid & post-paid On/Off CoD LIC records.
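
By way of a purely illustrative example, consider a hypothetical organization with 20,000 installed Production MIPS & a 10,000 MIPS DevTest environment.  Subject to the eligibility rules above, zBuRST pre-paid tokens could be purchased for up to 10,000 additional MIPS, temporarily raising the DevTest environment to the full 20,000 MIPS Production equivalent, consumed across up to 15 calendar days of testing, whether one 15 day stress test or, say, five 3 day stress tests, with the token capacity discounted at 80% when compared with equivalent On/Off CoD capacity.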

zBuRST can deliver greater certainty & benefit for an IBM Z organization via:

  • Change risk eradication with Production workload stress testing, increasing business resiliency, customer satisfaction & operational efficiency.
  • Faster delivery of new business features & functions at reduced risk, enabling an agile DevOps application change environment.
  • Empowering IT personnel to safely test changes, at Production workload scale, in a DevTest environment, identifying problems or anomalies that might or typically only occur at scale.
  • Higher ROI for DR resource usage (E.g. Use for stress testing, not just for DR testing).
  • Increased & comprehensive application testing capabilities for a lower cost.

When working with my customer over the last few months, the real-life lessons learned were:

  • Collaborate with the 3rd party IBM Z resource supplier, to safeguard the use of their IBM Z server based upon a days-usage as opposed to a DR-testing-only approach.  For the avoidance of doubt, contract for n days, where those n days could be used for any combination of Pre-Production testing & DR usage.
  • Engage with all ISV organizations from an FYI viewpoint, informing them of this DevTest approach, where their software will be used for Pre-Production testing purposes, allowing them to safely generate temporary software license codes accordingly, as & if required.
  • Work really closely with your IBM account team to find a win-win situation for all; this customer was a ~9,000 MIPS user.  That could be the provision of anticipated White Space CPU capacity by IBM, or recognition that for a committed IBM Z Mainframe user, maybe the 10,000 MIPS watermark is just too high.
  • Educate your Operations, Applications & Business units on these zBuRST options.  Some IBM Z users might have been restricted for years if not decades, not being able to perform a 100% data & CPU resource Pre-Production workload test.  The brainstorming, collaboration & goodwill that manifests itself is one of those few occasions in IT where the users of your IT services are happy to be an integral part of the change process!

My final observation is a reflection on the last few months of my day-to-day activities.  For 2-3 days per week, I have been combining IT work with being “Captain Clipboard” at a local UK COVID-19 vaccination centre, which in itself, has been so rewarding.  Seeing the relief of people, especially those of a mature age, perhaps infirm, feeling they can be a part of the wider community again, has been heartening.  The parallels are obvious; zBuRST can allow those IBM Z users prohibited from performing 100% data & CPU Pre-Production testing activities, the opportunity to advance their business.  However, unlike the COVID-19 vaccination, which for the fortunate developed countries, is available to all citizens, zBuRST does have some usage restrictions.  Perhaps it’s up to the wider IBM Z user community to encourage IBM to revisit & modify their approach, perhaps reducing the MIPS capacity requirement to 5,000 MIPS.  Wherever you’re based globally, if you’re a member of SHARE (USA) or GSE (Europe), et al, maybe reach out to your Large Systems representatives & see if the global collective of IBM Z user organizations can encourage IBM to evolve this opportunity, enabling zBuRST solution usage for a larger majority if not all IBM Z Mainframe users.

The Ever Changing IBM Z Mainframe Disaster Recovery Requirement

With a 50+ year longevity, of course the IBM Z Mainframe Disaster Recovery (DR) requirement and associated processes have changed and evolved accordingly.  Initially, the primary focus would have been HDA (Head Disk Assembly) related, recovering data due to hardware (E.g. 23nn, 33nn DASD) failures.  It seems incredible in the 21st Century to consider the downtime and data loss associated with such an event, but these failures were commonplace into the early 1980’s.  Disk drive (DASD) reliability increased with the 3380 device in the 1980’s and the introduction of the 3990-03 Dual Copy capability in the late 1980’s eradicated the potential consequences of a physical HDA failure.

The significant cost of storage and CPU resources dictated that many organizations had to rely upon 3rd party service providers for DR resource provision.  Often this dictated a classification of business applications, differentiating between Mission Critical or not, where DR backup and recovery processes would be application based.  Even the largest of organizations that could afford to duplicate CPU resource, would have to rely upon the Ford Transit Access Method (FTAM), shipping physical tape from one location to another and performing proactive or more likely reactive data restore activities.  A modicum of database log-shipping over SNA networks automated this process for Mission Critical data, but successful DR provision was still a major consideration.

Even with the Dual Copy function, this meant DASD storage resources had to be doubled for contingency purposes.  Therefore this dictated only the upper echelons of the business world (I.E. Financial Organizations, Telecommunications Suppliers, Airlines, Etc.) could afford the duplication of investment required for self-sufficient DR capability.  Put simply, a duplication of IBM Mainframe CPU, Network and Storage resources was required…

The 1990’s heralded a significant evolution in generic IT technology, including IBM Mainframe.  The adoption of RAID technology for IBM Mainframe Count Key Data (CKD) provided an affordable solution for all IBM Mainframe users, where RAID-5(+) implementations became commonplace.  The emergence of ESCON/FICON channel connectivity provided the extended distance requirement to complement the emerging Parallel SYSPLEX technology, allowing IBM Mainframe servers and related storage to be geographically dispersed.  This allowed a greater number of IBM Mainframe customers to provision their own in-house DR capability, but many still relied upon physical tape shipment to a 3rd party DR services provider.

The final significant storage technology evolution was the Virtual Tape Library (VTL) structure, introduced in the mid-1990’s.  This technology simplified capacity optimization for physical tape media, while reducing the number of physical drives required to satisfy the tape workload.  These VTL structures would also benefit from SYSPLEX implementations, but for many IBM Mainframe users, physical tape shipment might still be required.  Even though the IBM Mainframe had supported IP connectivity since the early 1990’s, using this network capability to ship significant amounts of data was dependent upon public network infrastructures becoming faster and more affordable.  In the mid-2000’s, transporting IBM Mainframe backup data via extended network carriers, beyond the limit of FICON technologies became more commonplace, once again, changing the face of DR approaches.

More recently, the need for Grid configurations of 2, 3 or more locations has become the utopia for the Global 1000 type business organization.  Numerous copies of synchronized Mission Critical if not all IBM Z Mainframe data are now maintained, reducing the Recovery Time Objective (RTO) and Recovery Point Objective (RPO) DR criteria to several Minutes or less.

As with anything in life, learning from the lessons of history is always a good thing and for each and every high profile IBM Z Mainframe user (E.g. 5000+ MSU), there are many more smaller users, who face the same DR challenges.  Just as various technology races (E.g. Space, Motor Sport, Energy, et al) eventually deliver affordable benefit to a wider population, the same applies for the IBM Z Mainframe community.  The commonality is the challenges faced, where over the years, DR focus has either been application or entire business based, influenced by the technologies available to the IBM Mainframe user, typically dictated by cost.  However, the recent digital data explosion generates a common challenge for all IT users alike, whether large or small.  Quite simply, to remain competitive and generate new business opportunities from that priceless and unique resource, namely business data, organizations must embrace the DevOps philosophy.

Let’s consider the frequency of performing DR tests.  If you’re a smaller IBM Z Mainframe user, relying upon a 3rd party DR service provider, your DR test frequency might be 1-2 tests per year.  Conversely, if you’re a large IBM Z Mainframe user, deploying a Grid configuration, you might consider that your business no longer has the requirement for periodic DR tests?  This would be a dangerous thought pattern because, as was forever thus, SYSPLEX and Grid configurations only safeguard against physical hardware failure scenarios, whereas a logical error will proliferate throughout all data copies, whether 2, 3 or more…

Similarly, when considering the frequency of Business Application changes, for the archetypal IBM Z Mainframe user, this might have been Monthly or Quarterly, perhaps with imposed change freezes due to significant seasonal or business peaks.  However, in an IT ecosystem where the IBM Z Mainframe is just another interconnected node on the network, the requirement for a significantly increased frequency of Business Application changes arguably becomes mandatory.  Therefore, once again, if we consider our frequency of DR tests, how many per year do we perform?  In all likelihood, this becomes the wrong question!  A better statement might be, “we perform an automated DR test as part of our Business Application changes”.  In theory, the adoption of DevOps either increases the frequency of scheduled Business Application changes, or the organization embraces an “on demand” type approach…

We must then consider which IT Group performs the DR test?  In theory, it’s many groups, dictated by their technical expertise, whether Server, Storage, Network, Database, Transaction or Operations based.  Once again, if embracing DevOps, the Application Development teams need to be able to write and test code, while the Operations teams need to implement and manage the associated business services.  In such a model, there has to be a fundamental mind change, where technical Subject Matter Experts (SME) design and implement technical processes, which simplify the activities associated with DevOps.  From a DR viewpoint, this dictates that the DevOps process should facilitate a robust DR test, for each and every Business Application change.  Whether an organization is the largest or smallest of IBM Z Mainframe users is somewhat arbitrary; performing an entire system-wide DR test for an isolated Business Application change is not required.  Conversely, performing a meaningful Business Application test during the DevOps code test and acceptance process makes perfect sense.

Performing a meaningful Business Application DR test as part of the DevOps process is a consistent requirement, whether an organization is the largest or smallest IBM Z Mainframe user.  Although their hardware resource might differ significantly, where the largest IBM Z Mainframe user would typically deploy a high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al), the requirement to perform a seamless, agile and timely Business Application DR test remains the same.

If we recognize that the IBM Z Mainframe is typically deployed as the System Of Record (SOR) data server, today’s 21st century Business Application incorporates interoperability with Distributed Systems (E.g. Wintel, UNIX, Linux, et al) platforms.  In theory, this is a consideration, as mostly, IBM Z Mainframe data resides in proprietary 3390 DASD subsystems, while Distributed Systems data typically resides in IP (NFS, NAS) and/or FC (SAN) filesystems.  However, the IBM Z Mainframe has leveraged from Distributed Systems technology advancements, where typical VTL Grid configurations utilize proprietary IP connected disk arrays for VTL data.  Ultimately a VTL structure will contain the “just in case” copy of Business Application backup data, the very data copy required for a meaningful DR test.  Wouldn’t it be advantageous if the IBM Z Mainframe backup resided on the same IP or FC Disk Array as Distributed Systems backups?

Ultimately the high-end VTL (I.E. IBM TS77n0, EMC DLm 8n00, Oracle VSM, et al) solutions are designed for the upper echelons of the business and IBM Z Mainframe world.  Their capacity, performance and resilience capability is significant, and by definition, so is the associated cost.  How easy or difficult might it be to perform a seamless, agile and timely Business Application DR test via such a high-end VTL?  Are there alternative options that any IBM Z Mainframe user can consider, regardless of their size, whether large or small?

The advances in FICON connectivity, x86/POWER servers and Distributed Systems disk arrays have allowed for such technologies to be packaged in a cost efficient and small footprint IBM Z VTL appliance.  Their ability to connect to the IBM Z server via FICON connectivity, provide full IBM Z tape emulation and connect to ubiquitous IP and FC Distributed Systems disk arrays, positions them for strategic use by any IBM Z Mainframe user for DevOps DR testing.  Primarily, one consistent copy of enterprise wide Business Application data would reside on the same disk array, simplifying the process of recovering Point-In-Time backup data for DR testing.

On the one hand, for the smaller IBM Z user, such an IBM Z VTL appliance (E.g. Optica zVT) could for the first time, allow them to simplify their DR processes with a 3rd party DR supplier.  They could electronically vault their IBM Z Mainframe backup data to their 3rd party DR supplier and activate a totally automated DR invocation, as and when required.  On the other hand, moreover for DevOps processes, the provision of an isolated LPAR would allow the smaller IBM Z Mainframe user to perform a meaningful Business Application DR test, in-house, without impacting Production services.  Once again, simplifying the Business Application DR test process applies to the largest of IBM Z Mainframe users, and leveraging such an IBM Z VTL appliance would simplify this process, without impacting the Grid configuration supporting their Mission Critical workloads.

In conclusion, there has always been commonality in DR processes for the smallest and largest of IBM Z Mainframe users, where the only tangible difference would have been budget related, where the largest IBM Z Mainframe user could and in fact needed to invest in the latest and greatest.  As always, sometimes there are requirements that apply to all, regardless of size and budget.  Seemingly DevOps is such a requirement, and the need to perform on-demand seamless, agile and timely Business Application DR tests is mandatory for all.  From an enterprise wide viewpoint, perhaps a modicum of investment in an affordable IBM Z VTL appliance might be the last time an IBM Z Mainframe user needs to revisit their DR testing processes!

System z DevOps & Application Lifecycle Management (ALM) Integration: Evolution or Revolution?

From an IT viewpoint, seemingly the 2010’s decade will be dominated by the digital data explosion, primarily fuelled by Cloud, Mobile and Social Media data sources, while intelligent and timely if not real-time Analytics are required to process this vast and ever-growing data source.  Who could have imagined just a decade ago that the Mobile Phone, specifically the Smartphone, would be the de facto computing device, although some might say, only for a certain age demographic?  I’m not so sure; I encounter real-life and day-to-day evidence that a Smartphone or tablet can also empower the older generation to simplify their computer usage and access.  From a business perspective, Smartphones have allowed geographically dispersed citizens to gain access to Banking facilities for the first time; Cloud allows countless opportunities for data sharing and number crunching for collaborative scientific, health, education and any other activities a human being might conceive.  The realm of opportunity exists…

When thinking of the bigger picture, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960’s with these rapidly moving 21st Century requirements.  When considering which came first, the data or the application, I always think the answer is really simple; the data came first, but I have been wrong before!  What is without doubt is that the initial requirement for a business application was to automate data processing; the associated medium-term waterfall (E.g. n-nn Months) application development process is now outdated.  As of 2017, today’s application needs to leverage from this vast and rich digital data source, to identify and leverage new business opportunities; increasingly unplanned and therefore rapid application delivery is required.  For example, previously I wrote about this subject matter in the zAPI: System z Deployment Into The API Economy blog entry.

From an IT perspective, one of the greatest achievements in the 21st Century is collaboration, whether technology based, leveraging from a truly interconnected (E.g. Internet Protocol/IP) heterogeneous computing environment, or personnel based, with IT teams collaborating in a more open and timely manner, primarily via DevOps.  This might be a better chicken and egg analogy; which came first, the data explosion or an IT ecosystem that allowed such a digital data explosion?

There are a plethora of modern-day application development tools that separate the underlying target deployment server from the actual application developer.  Put another way, today’s application developer ideally works from a GUI display via an Eclipse-based Integrated Development Environment (IDE) interface, creating code using rapid and agile development techniques.  From an IBM System z perspective, these platforms include Compuware Topaz Workbench, IBM Developer for z Systems (IDz AKA RDz) and Micro Focus Enterprise Developer, naming but a few.  Therefore when considering the DevOps framework, these excellent Eclipse-based IDE products provide solutions for the Dev part of the equation; but what about the Ops part?

In a collaborative world, where we all work together, from an Application Lifecycle Management (ALM) perspective, IT Operations are a key part of application delivery and management.  Put simply, once code has been created, it needs to be packaged (E.g. Compile, Link-Edit, et al), tested (E.g. Unit, Integration, System, Acceptance, Regression, et al) and implemented in a Production environment.  We now must consider the very important discipline of Source Code Management (SCM), where from a System z Mainframe perspective, common solutions are CA Endevor SCM, Compuware ISPW, IBM SCLM, Micro Focus ChangeMan ZMF, et al.  Once again, from a DevOps perspective, we somehow have to find a workable and seamless balance that will integrate the dawn of business computing from the 1960’s to these rapidly moving 21st Century requirements.  As previously discussed the Dev part of the DevOps framework is well-covered and straightforward, but perhaps the Ops part requires some more considered thought…

Recently Compuware have acquired ISPW (January 2016) to supplement their Topaz Workbench and Micro Focus acquired ChangeMan ZMF (May 2016) to complement their Micro Focus Enterprise Developer solution.  IBM IDz offers out-of-the-box integration for the IBM Rational Team Concert, CA Endevor SCM and IBM SCLM tools.  Clearly there is a significant difference between Source Code Management (SCM) for Distributed Systems when compared with the System z Mainframe, but today’s 21st century business application will inevitably involve interconnected platforms and so a consistent and seamless SCM process is required for accurate and timely application delivery.  In all likelihood a System z Mainframe user has been using their SCM solution for several decades, evolving processes around this solution, which was never designed for Distributed Systems SCM.  Hence the major System z Application Development ISV’s have acquired SCM products to supplement their core capability, but is it really that simple?  The simple answer is no!

Traditionally, for Application Development activities we deployed the Software Development Life Cycle (SDLC), limited to software development phases, including requirements, design, coding, testing, configuration, project management and change management.  Modern software development processes require real-time collaboration, access to a centralized data repository, cross-tool and cross-project visibility, proactive project monitoring and reporting, to rapidly develop and deliver quality software.  This requirement is typically classified as Application Lifecycle Management (ALM).

The first iteration of ALM, namely ALM 1.0, was wholly unsuccessful.  Application Development teams were encouraged to consider the value of point solutions for task management, planning, testing, requirements, release management and other functions.  Therefore ALM 1.0 became just a set of tools, where the all too common question for the Application Development team was “what other tool can we use”!

ALM 2.0 or ALM 2.0+ can be considered as Integrated Application Lifecycle Management or Integrated ALM, where all the tools and their users are synchronized with each other throughout the application development stages.  This integration ensures that every team member knows the Who, What, When and Why of any changes made during the development process, eradicating arduous, repetitive, manual and error prone activities.  The most important lesson for the DevOps team in a customer environment is to concentrate on the human perspective.  They should ask “how do we want our teams to work together and collaborate”, as opposed to asking an Application Development ISV team, “what ALM tools do you have”.  Inevitably the focus will be ISV based, as opposed to customer based.  As the recent Compuware and Micro Focus SCM acquisition history demonstrates, these tools, by definition, were never fully integrated from their original inception…

If the customer DevOps teams collaborate and formulate how they want to work together, an ALM evolution can take place in a timely manner, maintaining investment in previous technologies, as and if required.  Conversely, a revolutionary approach is the most likely outcome for the System z Mainframe user, if looking to the ISV for a “turn-key” ALM solution.  By definition, an end-to-end and turn-key ALM solution from one ISV is not possible and in fact, not desirable!  Put another way, as a System z user, do you really want to write off several decades’ investment in an SCM solution, for another competitive solution, which will still require many other tools to provide the Integrated ALM capability you require?  As always, balance and compromise is the way forward…

If the ubiquitous System z Application Development ISV were to develop their first software product today, it would inevitably be a DevOps and ALM 2.0+ compatible product, allowing for full integration with all other Application Development tools, whether System z Mainframe or Distributed Systems orientated.  Of course, that is not the reality.  It seems somewhat disingenuous that the System z Application Development ISV would ask a potential customer to write off their several decades’ investment in an SCM technology, when said ISV has just acquired such a technology!  Once again, this is why the customer based Application Development teams must decide how they want to collaborate and what ALM and DevOps tools they want to use.

Seemingly a pragmatic solution is required, hence the ALM 2.0+ initiative.  If an ISV could develop an all-encompassing DevOps and ALM 2.0+ end-to-end Application Development solution for all IT platforms, they would probably become one of the most popular and biggest ISV’s in a short time period.  However, this still overlooks the existing tools that customer IT organizations have used for many years.  Hence, the pragmatic way forward is to build an open DevOps and ALM 2.0+ solution that will integrate with all other Application Development lifecycle tools, whether market place available, or not!  HPE Application Lifecycle Management (ALM) and Quality Centre (QC) is one such approach for Distributed Systems, but what about the System z Mainframe?

IKAN ALM is an ALM 2.0+ and DevOps architected solution that is vendor and platform agnostic.  Put another way, IKAN ALM can operate in single or multiple-vendor mode.  In all likelihood, single vendor mode is unlikely, as there are many efficient Application Development tools in the marketplace.  However, the single most compelling feature of IKAN ALM is its open framework and interoperability with other ALM technologies.  As previously stated, we might consider source code development as the Dev side of the DevOps framework.  IKAN ALM will interface with these technologies, while its core functionality concentrates on the Ops side of the DevOps framework.  Therefore from an Application Lifecycle Management (ALM) viewpoint, the IKAN ALM solution starts where versioning systems end, with an objective of optimizing the entire software engineering process.

IKAN ALM offers a uniquely integrated web-based Application Lifecycle Management platform for both Agile and traditional software development teams.  It combines Continuous Integration and Lifecycle Management, offering a single point of control, delivering support for build and deploy processes, approval processes, release management and software lifecycles.  From a pragmatic and common-sense viewpoint, typically organizations want to continue working with their preferred tools in their preferred environment.  Being ALM 2.0+ compliant, IKAN ALM fully integrates with any versioning tool and any issue tracking tool, providing ALM reports across repositories.  Therefore IKAN ALM offers an evolutionary approach, allowing an organization to leverage from timely ALM benefits, without risk and without the need for displacing any existing technologies.  Over time, should the organization wish to displace older legacy ALM software products, they could do so, leveraging from the stand-alone or multiple vendor flexibility of the IKAN ALM solution.

IKAN ALM incorporates ready to use solutions and processes for multiple environments.  These solutions include ALM 2.0+ compliant processes and the necessary scripts to automate the integration with a specific environment, including but not limited to CA Endevor (SCM), CollabNet, HPE ALM/Quality Centre (QC), Oracle Warehouse Builder (OWB), SAP, et al.

The IKAN ALM central server is an open framework web application responsible for User Authentication and Authorization, User Interface Processing, Distributed Version Repository Management and Scheduling Code Builds.  The IKAN ALM agents perform the application build and install functions.

The data repository is an open central database where all administrative data and the audit trail history are stored.  IKAN ALM communicates with the repository using standard JDBC interfaces.  The required JDBC drivers are installed along with the product.  The repository can reside in any RDBMS system, including IBM DB2/UDB, Informix, Microsoft SQL Server, MySQL, Oracle, et al.
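
For the avoidance of doubt, the following is not IKAN ALM code; it is simply a minimal sketch of the standard JDBC pattern referenced above, assuming a hypothetical Db2 repository URL and credentials, with the appropriate JDBC driver available on the classpath:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;
  import java.sql.Statement;

  public class RepositoryConnectionCheck {
      public static void main(String[] args) throws Exception {
          // Hypothetical JDBC URL and credentials; any supported RDBMS works the same way via its JDBC driver
          String url = "jdbc:db2://db2host.example.com:446/ALMREPO";
          try (Connection conn = DriverManager.getConnection(url, "almuser", "password");
               Statement stmt = conn.createStatement();
               ResultSet rs = stmt.executeQuery("SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
              while (rs.next()) {
                  System.out.println("Repository reachable at: " + rs.getString(1));
              }
          }
      }
  }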

Source code is always stored in a Version Control Repository.  IKAN ALM integrates with all the typical versioning systems including Apache Subversion, CVS, Git, Microsoft Visual SourceSafe (VSS), IBM Rational ClearCase (UCM & LT), Serena PVCS Version Manager, et al.  The choice of IDE often drives the choice of the Version Control System (VCS), where organizations can have more than one operational VCS.  IKAN ALM is a solution that provides a unique process control over all versioning systems present in the organization.  IKAN ALM stores each build result within its central server filesystem, labelling the source accordingly in the associated versioning system, guaranteeing a correct source-build relationship.

IKAN ALM safeguards Authentication & Authorization by interacting with the organization’s security deployment (E.g. Active Directory, LDAP, Kerberos, et al) via the Java Authentication and Authorization Service (JAAS) interface.
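
Again, purely as a minimal sketch of the standard JAAS pattern (not IKAN ALM internals), assuming a hypothetical “LdapLogin” entry in the JAAS login configuration and hypothetical credentials:

  import javax.security.auth.callback.Callback;
  import javax.security.auth.callback.CallbackHandler;
  import javax.security.auth.callback.NameCallback;
  import javax.security.auth.callback.PasswordCallback;
  import javax.security.auth.login.LoginContext;
  import javax.security.auth.login.LoginException;

  public class JaasLoginSketch {
      public static void main(String[] args) throws LoginException {
          // Callback handler supplies the (hypothetical) credentials requested by the configured LoginModule
          CallbackHandler handler = callbacks -> {
              for (Callback cb : callbacks) {
                  if (cb instanceof NameCallback) {
                      ((NameCallback) cb).setName("devops.user");
                  } else if (cb instanceof PasswordCallback) {
                      ((PasswordCallback) cb).setPassword("secret".toCharArray());
                  }
              }
          };
          // "LdapLogin" is a hypothetical entry name in the JAAS login configuration file
          LoginContext context = new LoginContext("LdapLogin", handler);
          context.login();  // authenticates via the configured LoginModule (E.g. LDAP, Kerberos)
          System.out.println("Authenticated subject: " + context.getSubject());
          context.logout();
      }
  }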

IKAN ALM audits any changes (E.g. Who, What, Why, When, Approver, et al), orchestrating the various components and phases of Application Lifecycle Management, delivering an automated workflow to drive a continuous flow of activity throughout the development lifecycle, efficiently coordinating and optimizing application development changes.

In an environment with ever increasing mandatory regulatory compliance requirements, IKAN ALM simplifies the processes for delivering such compliance.  As per the IKAN ALM Build, Deploy, Lifecycle and Approval Management framework, compliance with industry standards and regulations (E.g. CMM, ITIL, Sarbanes-Oxley, Six Sigma, et al) is delivered via a reliable, repeatable and auditable process throughout the development life cycle.

Clearly any IT organization can benefit from a fully integrated ALM 2.0+ solution, by enforcing and safeguarding that the ALM process is repeatable, reliable and documented.  Regardless of the development team headcount size, ALM releases key people from repetitive and less interesting tasks, allowing them to focus on delivering today’s Analytics based, Cloud, Mobile and Social applications.  A fully integrated ALM 2.0+ solution such as IKAN ALM allows for simplified legacy environment modernization, while simultaneously allowing for experimentation and improvement of all environments alike, both legacy and new.

In conclusion, savvy organizations will safeguard that their Application Development and Operations teams collaborate as per the DevOps framework and decide how they want to implement processes for their environment and, more importantly, their business.  This focus should avoid any notion of asking the ubiquitous Application Development ISV, “which tools should we use”!  Similarly, recognizing the integration foundation of ALM 2.0+ will eliminate any notion to displace existing technologies and processes, unless absolutely required.  The need for agile, rapid and quality source code development and delivery is obvious, as is the related solution, which is inevitably pragmatic, evolutionary and multiple vendor tool based.

zAPI: System z Deployment Into The API Economy

Having been in the IT industry for 35+ years, I have always fully embraced and learned new technologies, to find strategic solutions for business challenges.  Obviously, starting in 1980, my heritage is IBM Mainframe, supplemented by UNIX, Wintel and Linux along the way.  Each and every platform has its merits, and during this 35+ year period, I have attended many conferences, for all platforms.  What I have noticed during this period is the attendance of many IBM Mainframe CIO, CTO or Chief Architect individuals at non-IBM Mainframe conferences, but very few, if any, equivalent Distributed Systems personnel at IBM Mainframe conferences.

I’m always surprised and disappointed to hear about organizations talking about decommissioning the IBM Mainframe platform, with tenuous reasons, based on Distributed Systems FUD messaging, as opposed to their own business requirements.  Thankfully these scenarios are decreasing over the years.  Presumably if an organization decides to migrate from one Distributed Systems platform to another or perhaps the Cloud, they do at least attend the relevant platform conferences to make an informed decision.

Over the last 25 years or so, IBM themselves have competed with differing divisions and options, whether UNIX (AIX), System z or, in recent years, Linux on z Systems, most notably with the LinuxONE launch at LinuxCon 2015.  One would hope that the world’s key IT decision makers might attend LinuxCon with an open mind and learn more about the System z Mainframe?

A ridiculous notion might be that one server platform technology can satisfy a 21st Century organization’s IT infrastructure for their mission critical services.  Clearly that has not been the case since the advent of Client Server and today’s emerging Digital business requires an infrastructure of multiple layers, where the underlying server technology is somewhat arbitrary, and arguably a commodity resource.  Conversely, the underlying data and associated applications differentiate one business from another, delivering business value and competitive edge.

Let’s take some time to consider this layered IT architecture design, which very quickly dismisses any notion that one server technology delivers all business requirements.

Such an architecture design does not impose any technology decisions.  Conversely, it explores the “data journey” from access or creation, via Systems of Engagement (SoE), to eventual storage within Systems of Record (SoR) data repositories (I.E. Database).  Some might say it was forever thus, with the exception of the Multi-Channel SDK’s & API’s layer, where the savvy organizations will embrace DevOps, Hybrid Cloud and connectivity (I.E. API, SDK) solutions, seamlessly integrating modern agile applications with that most valuable business asset, Systems of Record (SoR) data.

Today’s Application Developer doesn’t need to concern themselves as to the platform used for their DevOps application processes, the Transaction Server or indeed the Database Server.  Sure, several decades ago, maybe even a decade ago, application code was deeply associated if not confined to a specific CPU server architecture.  Clearly that is no longer the case.  Any organization that still thinks in this legacy manner, is behind the times, and this is unfortunate.  Associating such outdated thinking with the System z Mainframe is arguably careless, and not a reason for dismissing an incumbent System z platform, or not considering a System z platform in the future.

Arguably the greatest strengths of today’s System z IBM Mainframe, currently packaged as the z13 or LinuxONE, are as a Database Server (E.g. DB2), Transaction Server (E.g. CICS, WebSphere Application Server) and Security Server (E.g. ACF2, RACF, Top Secret).  From a LinuxONE viewpoint, it’s just another server, capable of processing all of the latest strategic Open Source and Commercial Off The Shelf (COTS) Cloud, Database and Application solutions, while benefitting from the unparalleled System z Quality of Service (QoS) attributes.

However, for those organizations already deploying a System z Mainframe, its greatest perceived issue is TCO.  Without doubt the convoluted and intricate Workload Licence Charges (WLC) are unnecessarily complicated and perceived as being very expensive.  Optimizing these costs requires a modicum of expertise, safeguarding that the best contractual conditions are negotiated.  However, I encounter the same complexities with Distributed Systems platforms, where software license costs can spiral out of control for significant CPU capacity deployments.  Whatever platform is deployed, System z Mainframe or Distributed System, unless the business has the requisite skills in place, technical and commercial, to safeguard the lowest cost possible, commercial ISV suppliers will take advantage of such an oversight.

I’m not advocating any server technology, System z Mainframe, Distributed System or Cloud, as each resource has its merits, depending on the business requirement.  However, today’s 21st Century organization must enable new business channels by leveraging from and arguably monetizing their Systems of Record (SoR) enterprise data.

Today, organizations need to consider an API Economy, where they expose their internal digital business assets or services in the form of Web API services to external 3rd party partners and consumers, with an overall objective of unlocking increased business value via the creation of new assets.  Such an API Economy will require integration of Transaction and Data resources, specifically:

  • Centrally manage the consumption of enterprise wide business logic, for both Systems of Record (SoR) & Systems of Engagement (SoE)
  • Extend business (E.g. Product, Brand) reach from Systems of Record (SoR), incorporating Systems of Engagement (SoE)

Previously I wrote about How to Connect Mobile Workloads to System z, detailing the conceptual steps required to expose existing SoR data assets with SoE transaction services, via z/OS Connect.  For a fully integrated end-to-end integrated solution, we must also consider the Application Programming Interfaces (API), being the digital glue that seamlessly links applications, services and systems together.
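
Purely as an illustrative sketch, assuming a hypothetical account balance service already exposed as a REST API via z/OS Connect (the URL below is fictitious), a consumer in the API Economy needs nothing more than a standard HTTP client, the Java 11 HttpClient in this case:

  import java.net.URI;
  import java.net.http.HttpClient;
  import java.net.http.HttpRequest;
  import java.net.http.HttpResponse;

  public class ApiEconomyClient {
      public static void main(String[] args) throws Exception {
          // Fictitious endpoint: a CICS/DB2 System of Record asset exposed as a REST API via z/OS Connect
          HttpRequest request = HttpRequest.newBuilder()
              .uri(URI.create("https://api.example.com/accounts/12345/balance"))
              .header("Accept", "application/json")
              .GET()
              .build();

          HttpResponse<String> response = HttpClient.newHttpClient()
              .send(request, HttpResponse.BodyHandlers.ofString());

          // The caller neither knows nor cares that the System of Record is an IBM Z Mainframe
          System.out.println(response.statusCode() + ": " + response.body());
      }
  }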

IBM API Connect is a solution that manages the API lifecycle for both On-Premises and Cloud environments.  IBM API Connect delivers capabilities to Create, Run, Manage & Secure API resources and Microservices.  It also enables you to rapidly deploy and simplify API administration, across the organization.

API Connect can be deployed On-Premises via Linux on z Systems, in the cloud (E.g. Bluemix), as well as on all other popular Distributed Systems platforms.  Once again, the main message is that the chosen server is arbitrary, whether System z Mainframe, Distributed System or Cloud.  The server should be considered as a commodity resource, leveraging from existing business logic (I.E. SoE) and data (I.E. SoR), while evolving existing Application Lifecycle Management (E.g. Agile, API Economy, DevOps) practices is the key.

My final observation is the Mainframe Baby Boomer (E.g. Born ~1960) versus the Millennial (E.g. Born ~1995) technical personnel resource.  Without doubt, there are significant differences in their approach to application programming, but only one resource, namely the Baby Boomer knows the business really well.  I think these folks have the ability to learn another 21st Century programming language, as well as COBOL, but perhaps their best attribute is an analytical role, especially for the integration of SoE and SoR layers.  Working very closely with Millennial technical resources, delivering the new Application (I.E. App, API) resources, the Mainframe Baby Boomer still has something valuable to offer in their final employment years.  For the avoidance of doubt, still delivering value from an analytical viewpoint, while transferring their skills and knowledge to their successors, namely the Millennial.

In conclusion, dismissing any server technology for Fear, Uncertainty or Doubt (FUD) reasons, is an unproductive and ridiculous notion.  More importantly, what might your business lose in opportunity, spending several years or more, migrating from one platform to another, while your competitors are embracing the Digital Age with an API Economy approach, delivering more value from their existing business SoE (transactions) and SoR (data) assets?

Blockchain: A New Application Development Paradigm – What About System z?

Since the inception of Data Processing and the advent of the IBM Mainframe there has been a progressive movement to deliver the de facto “System Of Record (SOR)”, typically classified as a centralised database and related applications.  The key or common denominator for this “Golden Record” is somewhat arbitrary, but more often than not, for most businesses, it will be customer or product identity related.  The benefit of identifying and establishing an SOR is the reuse of this data, for a multitude of different business usage scenarios.

From an application programming viewpoint, historically there was a structured approach when delivering new business function, whether with bespoke programs or Commercial Off the Shelf (COTS) software packages.  More recently data analytics has accelerated this approach, where new business opportunities can be identified from data trends, with near real-time processing, while DevOps frameworks allow for rapid application delivery and implementation.  However, what if there was a new approach with a different type of database and as a consequence, a new approach to application programming?

From a simplistic viewpoint, Blockchain architecture is analogous to traditional database processing, whereas the interaction with said Blockchain database is vastly different, changing from a centralised to decentralised focus.  Therefore for application developers, Blockchain is a paradigm shifting architecture, in how software applications will be architected and coded.  Recognition of this new and rapidly emerging computing paradigm is of vital importance, because it’s the cornerstone for the creation of decentralised applications, a logical and natural evolution from distributed computing architectural constructs.

If we take some time to step back from the Information Technology world and consider the possibilities when comparing a centralised versus decentralised approach, the realm of possibility exists for a truly global interconnectivity approach, which isn’t limited to a specific discrete focus (E.g. Governance, Market, Business Sector, et al).  In theory, decentralised applications might deliver a dynamic and highly collaborative business approach…

A Blockchain is a pseudo linear container space (block) to store data for “controlled public usage”.  In theory, with the right credentials, this data can be accessed by any user!  The Blockchain container is secured with the originator’s key, so only the key holder or an authorised program can unlock the container data.  This is the fundamental difference between a database and a Blockchain.  For a Blockchain, the header record can be considered “eligible for Public usage”.

The data stored within a Blockchain might be considered as a “token”, the most obvious implementation being Bitcoin.  Generically, Blockchain might be considered as an alternative and flexible data transfer system that no private or public authority and especially a malicious third party can tamper with, because of the encryption process.  Put really simply, the data header has “Public” visibility, but data access requires “Private” authenticated access.

From a high-level viewpoint, Blockchain can be considered as an architectural approach, connecting an infinite number of peer computers, collaborating with a generic process for releasing or recording data, based upon cryptographic transactions.

One must draw one’s own conclusions as to whether this Centralised to Distributed to Decentralised data and application programming approach is the way forward for their business.

Decentralised Consensus is the inverse of a centralised approach, where one central database was accessed to validate transaction processing.  A decentralised scheme transfers authority and trust to a decentralised virtual network, enabling processing nodes to continuously access or record transactions within a public block, creating a unique chain for modification operations, hence the Blockchain terminology.  Each successive data block contains a unique fingerprint (hash) of the previous block.  The basic premise of cryptographic processing applies, where hash codes are used to secure transaction origination authentication, eliminating the requirement for centralised processing.  Duplicate transaction processing is eliminated because of Blockchain and its associated cryptographic processing.
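
To make the “unique fingerprint (hash) of the previous block” concept more tangible, the following minimal Python sketch builds and verifies a toy hash chain.  The block fields, the SHA-256 choice and the verification logic are illustrative assumptions only and do not represent any specific Blockchain implementation.

import hashlib
import json
import time

def sha256_hex(data: str) -> str:
    """Return the SHA-256 hash of a string as a hex digest."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

def make_block(previous_hash: str, transactions: list) -> dict:
    """Create a block whose identity depends on the previous block's hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,   # fingerprint of the prior block
    }
    # The block's own hash covers its content AND the previous hash,
    # so changing any earlier block invalidates every later one.
    block["hash"] = sha256_hex(json.dumps(block, sort_keys=True))
    return block

def verify_chain(chain: list) -> bool:
    """Recompute each block's hash and check its link to the predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        content = {k: v for k, v in curr.items() if k != "hash"}
        if curr["hash"] != sha256_hex(json.dumps(content, sort_keys=True)):
            return False
        if curr["previous_hash"] != prev["hash"]:
            return False
    return True

# Build a tiny three-block chain and verify the linkage
genesis = make_block("0" * 64, ["genesis"])
block_1 = make_block(genesis["hash"], ["Alice pays Bob 10"])
block_2 = make_block(block_1["hash"], ["Bob pays Carol 4"])
print(verify_chain([genesis, block_1, block_2]))   # True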

This separation of consensus (data access) from the actual application itself is the fundamental building block for a decentralised application programming approach.

Smart Contracts are the building blocks for decentralised applications.  A smart contract is a small self-contained program that you entrust with a value unit (token) and associated rules.  The simple philosophy of a smart contract is to programmatically facilitate transactional contractual governance between two or more parties via the Blockchain.  This eliminates the requirement for an arbitrating 3rd party authority, when two or more parties can agree an exchange between themselves.  Even today, this type of approach is not unusual between organizations, typically based upon a data (file) interchange standard (E.g. Banking).

Put simply, smart contracts eliminate the requirement for 3rd party intermediaries for transaction processing.  Ideally, the collaborating parties define and agree the required policy, embedded inside the business transaction, enabling a self-managed process between nodes (computers) that represent the reciprocal interests of the associated users and owners.
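
Purely as a conceptual sketch of a smart contract being “a small self-contained program entrusted with a value unit (token) and associated rules”, the following Python fragment models a toy escrow agreement.  It is not code for any real smart contract platform (E.g. Hyperledger Fabric chaincode, Ethereum Solidity, et al); the class, party names and release rule are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    """A toy 'smart contract': holds a token until both parties approve."""
    token_value: int            # the value unit entrusted to the contract
    seller: str
    buyer: str
    approvals: set = field(default_factory=set)
    released: bool = False

    def approve(self, party: str) -> None:
        """Record that one of the named parties agrees to the exchange."""
        if party not in (self.seller, self.buyer):
            raise ValueError(f"{party} is not a party to this contract")
        self.approvals.add(party)

    def settle(self) -> str:
        """Release the token only when the embedded rule is satisfied."""
        if self.approvals == {self.seller, self.buyer} and not self.released:
            self.released = True
            return f"{self.token_value} released to {self.seller}"
        return "conditions not yet met"

# No 3rd party intermediary decides the outcome; the coded rule does
contract = EscrowContract(token_value=100, seller="DealerLtd", buyer="CustomerPlc")
contract.approve("CustomerPlc")
print(contract.settle())        # conditions not yet met
contract.approve("DealerLtd")
print(contract.settle())        # 100 released to DealerLtd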

Trusted Computing combines the architectural foundations of Blockchain, decentralised consensus and smart contracts, enabling the spread of resources and transactions with a trusted “peer-to-peer” relationship, in theory enabling trust between numerous nodes (computers).

Previously, institutions and central organizations were necessary as trusted authorities.  Deploying a Blockchain approach, these historically centralised functions can be simplified via smart contracts, governed by decentralised consensus within a Blockchain.

Proof of Work is an important concept for unequivocally authenticating transactions, determining who is authorised to participate in the Blockchain system.  Proof of work is a fundamental building block because once created, it cannot be modified, being secured by cryptographic hashes that ensure its authenticity.  Tampering therefore becomes impractical, because users cannot change Blockchain records without reprocessing the “proof of work”.
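
The proof of work principle, expensive to produce but trivial for anyone to verify, can be illustrated with a minimal Python sketch.  The hashing scheme and difficulty target below are simplified assumptions and not how Bitcoin or any specific Blockchain actually implements mining.

import hashlib
from itertools import count

def proof_of_work(block_data: str, difficulty: int = 4) -> tuple:
    """Search for a nonce whose hash has 'difficulty' leading zero hex digits."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest          # expensive to find...

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """...but cheap to check."""
    digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = proof_of_work("block #42: Alice pays Bob 10")
print(nonce, digest)
print(verify("block #42: Alice pays Bob 10", nonce))   # True
# Changing the block data invalidates the proof, so tampering means redoing the work
print(verify("block #42: Alice pays Bob 99", nonce))   # almost certainly False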

It therefore follows that proof of work will be expensive to maintain, with likely future scalability and security issues, depending on data user (miner) requirements and incentives, which in all likelihood will reduce over time.  As we all know, most data access is high when data has been recently created, rapidly decreasing to low or even null after a limited period of time.

Proof of Stake is a more elegant alternative approach, determining which user can update the consensus, while preventing unwanted forking of the underlying Blockchain; it is more cost efficient, while being more difficult and expensive to compromise.

Once again, if we consider the benefits of Blockchain from a business processing viewpoint, there is a clear and present opportunity to eliminate manual or semi-automated processes, both internal and external to the business.  This could expedite the completion of processes that previously required days or even weeks, while reducing the potential for human error.  A simple example might be a car purchase, based upon 3rd party finance.  Such a process typically includes 3rd party data requirements, for vehicle provenance, credit scoring, identity proof, et al.  If the business world looks at the big picture, they can simplify and automate their processes, by collaborating with existing and, more likely, yet to be identified partners.  The benefits are patently obvious…

From a System z viewpoint, recent technological developments leverage from existing IBM resources, including the LinuxONE, Bluemix and Watson offerings:

  • LinuxONE: The System z and LinuxONE platforms are best placed to drive Blockchain innovation, arguably via the Open Mainframe Project and Hyperledger initiatives.  IBM supports testing and development of the open Blockchain fabric code for developers on their LinuxONE Community Cloud.
  • Bluemix: Via the IBM Blockchain services available on Bluemix, developers can access fully integrated DevOps tools for creating, deploying, running and monitoring Blockchain applications on the IBM Cloud.
  • Watson: Leveraging from the Watson IoT Platform, IBM will enable information from devices such as RFID-based locations, barcode-scan events or device-reported data, to be used within the IBM Blockchain. Devices will be able to communicate to Blockchain based ledgers to update or validate smart contracts.

From a business benefits viewpoint, the IBM System z platform is ideally placed for Blockchain deployment, being a highly secure EAL5+ certified platform.  Hardware accelerators deliver high speed secure encryption and hashing, supplemented by tamper-proof Crypto Express modules for key management.  Numerous memory resident partitions can also be created rapidly, to keep ledgers separate and secure.  As per usual, the System z platform has the fastest commercial processor, a highly scalable I/O system to handle massive numbers of transactions, ample memory for Blockchain operations and an optimised secure network for Blockchain peer communications.

Returning full circle to where this article started, the System z Mainframe is arguably the de facto System Of Record platform for the world’s traditional Fortune 500 or Global 2000 businesses.  These well established businesses have in all likelihood spent several decades or more establishing this centralised application programming and database usage model.  The realm of opportunity exists to make this priceless data asset available to numerous businesses, both large and small, via Blockchain architectures.  If we consider just one simple example, a highly globalised and significant Banking institution could facilitate the creation of a new specialised and optimised “challenger banking” operation, for a particular location or business sector, leveraging from their own internal System Of Record data and perhaps, vital data from another source.  One could have the hypothetical debate as to whether a well-established bank is best placed for such a new offering, but with intelligent collaboration, delivering a valuable service to a new market, where such a service has not been previously possible, doesn’t everybody win?

Perhaps with Blockchain, truly open and collaborative cooperation is possible, both from a business and technology viewpoint.  For example, why wouldn’t one of the new Fortune 500 companies such as a Social Media company with billions of users, look to a traditional Fortune 500 company deploying an IBM System z Mainframe, to expand their revenue portfolio from being advertising driven, to include service provision, whatever that might be.  Rightly or wrongly, if such a Social Media company is a user’s preferred portal for accessing a plethora of other company resources (E.g. Facebook Login), why wouldn’t this user want to fully process some other business transaction (E.g. Financial) via said platform?  However unlikely, maybe Blockchain can truly simplify and expedite Globalisation, for the benefit of users and businesses alike…

DevOps: What Does It Mean For System z?

A recent buzzword in the IT industry is DevOps, being a term for eradicating any gap between the IT disciplines and/or processes of Development and Operations. In simplistic terms, Development is the full application code lifecycle, while Operations is the management and ultimate delivery of IT business services, typically Production orientated. However, what does this mean for the System z environment?

From a big picture viewpoint, the typical mission critical business application comprises many layers, including System z and other Distributed Systems platforms. Even though there are many solutions and “dashboard” type approaches for Operations to manage the IT service, there will always be differences when managing IT platforms, whether System z, Wintel, UNIX, Linux, et al.

Additionally, there may be some interpretation as to what DevOps is and should be from an ISV viewpoint. If you’re an ISV with a rich history in performance management, your viewpoint of DevOps will be identifying and resolving performance problems, because you believe a performance problem will manifest itself in a Production Operations environment, but is ideally fixed in the Applications environment. Conversely, if you’re an ISV with a software portfolio incorporating many Application Development solutions, your viewpoint will be streamlining the Applications Development lifecycle for all platforms, expediting the delivery of Production changes, simplifying the burden on associated Operations Change and Problem Management processes.

Clearly the System z environment has matured over many years and application code portfolios have been managed by SCM tools such as CA Endevor SCM, Serena ChangeMan, ISPW, et al. Even the acronym SCM has various interpretations, whether Source Code Management, Software Configuration Management or some other term.

Recently, agile workstation solutions that simplify the application development process have evolved, for example IBM RDz (Rational Developer for z Systems) and Compuware Workbench, typically incorporating Eclipse functionality, allowing for a common framework of multiplatform application code development.

By definition, System z means zero downtime and as such, due diligence, continuity and no/minimal impact regression have been built into each and every change process for many years. Therefore from a Systems Programming viewpoint, any heterogeneous DevOps technical frameworks that might emerge will have little relevance to existing System z processes. However these System z oriented change processes could and no doubt should be recognized by the DevOps framework, extending the System z approach to all platforms.

Whatever your viewpoint and whatever System z tooling your organization deploys for end-to-end Application Lifecycle Management, including Development and Operations, you should not lose sight that an objective of DevOps is to bring together the various IT departments that are impacted by Production Service changes. Therefore if only from a simple communication and collaboration viewpoint, even the most mature and maybe bigoted System z professional should embrace DevOps.

In conclusion, DevOps is an evolving framework that will facilitate quality controlled continuous application delivery for multiple platform business solutions, typically including the System z platform. By definition, DevOps encompasses many IT processes, Development and Operations as a minimum, where each and every organization probably has their own interpretation of where interdependent Systems Management functions interact; for example, Performance Management, Change Management, Problem Management and even Capacity Planning. The savvy organization will embrace DevOps as a framework, review their existing software function tooling and in all likelihood, deploy a best-of-breed approach when facilitating continuous application delivery for heterogeneous platforms. It is unlikely that one ISV will provide a fully inclusive best-of-breed software portfolio for DevOps, hence the universal, open and platform independent approach of Eclipse.

Data Entry – Is Windows XP & Office 2003 End Of Support An Issue?

Recently somebody called me to say “do you realize your Assembler (ASM) programs are still running, some 25 years after you implemented them”?  Ouch, the problem with leaving comments and an audit trail, even in 1989!  It was a blast-from-the-past and a welcome acknowledgement, even though secretly, I can’t really remember the code.  We then got talking about how Mainframe programs can stand the test of time, through umpteen iterations of Operating System.  This article will consider whether you need a Mainframe to write application code that will stand the test of time.

Spoiler alert: No you don’t; nowadays a good application development environment, a competent software coder and most importantly of all, common sense, can achieve this, for Mainframe and Distributed Systems alike.  However, you might need to recompile the source code from time-to-time…

After several years of uncertainty, Microsoft have officially announced that support for Windows XP (SP3) & Office 2003 ends as of 8 April 2014.  Specifically, there will be no new security updates, non-security hotfixes, free or paid assisted support options or online technical content updates.  Furthermore, Microsoft state:

Running Windows XP SP3 and Office 2003 in your environment after their end of support date may expose your company to potential risks, such as:

  • Security & Compliance Risks: Unsupported and unpatched environments are vulnerable to security risks. This may result in an officially recognized control failure by an internal or external audit body, leading to suspension of certifications, and/or public notification of the organization’s inability to maintain its systems and customer information.
  • Lack of Independent Software Vendor (ISV) & Hardware Manufacturers Support:  A recent industry report from Gartner Research suggests “many independent software vendors (ISVs) are unlikely to support new versions of applications on Windows XP in 2011; in 2012, it will become common.” And it may stifle access to hardware innovation: Gartner Research further notes that in 2012, most PC hardware manufacturers will stop supporting Windows XP on the majority of their new PC models.

Looking at the big picture, anybody currently deploying Windows XP might want to consider the lifecycle of other Microsoft Operating System versions, for example, Windows Vista, Windows 7 & Windows 8.  As the Microsoft Windows Lifecycle Fact Sheet states, mainstream support for Windows 7 ends in January 2015, less than one year from now, and so arguably the only viable option is Windows 8.  The jump from Windows XP to Windows 8 is massive, not necessarily in terms of usability, but certainly and undoubtedly in terms of compatibility.

Those of us that experienced the Windows Vista, Windows 7 and more latterly Windows 8 upgrades, know from experience that each of these upgrades had interoperability challenges, whether hardware (I.E. Printers, Scanners, Removable Storage, et al), software (I.E. Bespoke, COTS, Utilities, et al) or even web browser (I.E. Internet Explorer, Firefox, Chrome, Safari, et al) related.  Although many of these IT resources might be considered standalone or technology commodities, where a technology refresh is straightforward and an operational benefit, the impact on the business for user facing applications might be considerable.  Of course, the most pervasive business application for capturing and processing customer information is typically classified as data entry related…

So, why might a business still be deploying Windows XP or Office 2003 today?  One typical reason relates to data entry systems, either in-house written or packaged in a Commercial Off the Shelf (COTS) software product.  In all likelihood, one way or another, these deployments have become unsupported from a 3rd party viewpoint, either because of the Microsoft software support ethos or the COTS ISV support policy.

Looking back to when Windows XP was first released, it offered an environment that allowed customers to think outside of the box for alternatives to traditional development methods, or put another way, Rapid Application Development (RAD) techniques.  Such a capability dictated that businesses could deploy their own bespoke or packaged systems for capturing data and thus automating the entirety of their business processes from cradle to grave with IT systems.  For a Small to Medium sized Enterprise (SME), this was a significant benefit, allowing them to compete, or at least enter their market place, without deploying a significant IT support infrastructure.

Therefore RAD and Microsoft Software Development Kit (SDK) techniques for GUI (E.g. .NET, Visual, et al) presentation, sometimes and more latterly browser based, were supplemented with structured data processing routines, vis-à-vis spreadsheet (CSV), database (SQL) and latterly more formalized data structure layouts (I.E. XML, XHTML).  Let’s not forget Excel 2003 and Access 2003, which offered powerful spreadsheet and database solutions respectively, capturing data, however crude that implementation might have been, while processing this data and delivering reports with a modicum of in-built high-level code.

However, as technology evolves, sometimes applications need to be revisited to support the latest and greatest techniques, and perhaps the SMEs that embraced this brave new world of RAD techniques were left somewhat isolated, for whatever reason; maybe business related, whether economic (E.g. dot com or financial markets) or not.

Let’s not judge those business folks still running Windows XP or Microsoft Office 2003 today; there are probably many good reasons as to why.  When they developed their business systems using a Windows XP or Office 2003 software base, I don’t think they envisaged that the next Microsoft Operating System release might eradicate their original application development investments, requiring a significant investment to upgrade their infrastructure for subsequent Windows versions and, more notably, for interoperability resources (I.E. Web Browsers, .NET, Excel, Access, ODBC, et al).

So if you’re a business running Windows XP and maybe Office 2003 today, potential PC (E.g. Desktop, Laptop) upgrade challenges can be separated into two distinct entities: firstly, the hardware platform and operating system itself, where the “standard image” approach can simplify matters; and secondly, the business application, typically data entry and processing related.  Let’s not forget, those supported COTS software products, whether system utility (E.g. Security, Backup/Recovery/Archive, File Management, et al) or function (E.g. Accounting, ERP, SCM, et al) related, can be easily upgraded.  It’s just those bespoke in-house systems or unsupported systems that require a modicum of thought and effort…

We all know from our life experiences, if we only have lemons, let’s make lemonade!  It’s not that long ago that we faced the so-called Millennium Bug (Year 2000/Y2K) challenge, which could be viewed as either a problem or an opportunity.  The enlightened business faced up to the Year 2000 challenge, arguably overblown by media scare stories, upgraded their IT infrastructures and systems and, perhaps for the first time, at least made an accurate inventory of their IT equipment.  So can similar attributes be applied to this Windows XP and Office 2003 challenge?

The first lesson is acceptance; so yes we have a challenge and we need to do something about it.  Even if your business has been running Windows XP or Office 2003, in an extended support mode for many years, in all likelihood, the associated business systems are no longer fit-for-purpose or could benefit from a significant face-lift to incorporate new logic and function that the business requires!

The second lesson is technology evolution; so just as RAD and SDK were the application development buzzwords of the Windows XP launch timeframe, today the term studio or application studio applies.  An application studio provides a complete, integrated and end-to-end package for the creation, including the design, test, debug and publishing activities of your business applications.  Furthermore, in the last decade or so, there has been a proliferation of modern language (E.g. XHTML, Java, C, C++, et al) programmers, whether formalized as IT professionals, or not (E.g. home coders).

The third lesson is as always, cost versus benefit; the option of paying for Windows XP or Office 2003 extended support ends as of April 2014.  So what is the cost of doing nothing?  As always, cost is never the issue, benefit is.  Investing in new systems that are fit-for-purpose will of course deliver business benefit, and if the investment doesn’t pay for itself in Year 1, hopefully your business can build a several year business case to deliver the requisite ROI.

Finally, is remote data entry possible with a Windows XP based system?  Perhaps, but certainly not for each and every modern day device (E.g. Smartphone, Tablet, et al).  Therefore enhancing your data entry systems, with the latest presentation techniques, might deliver significant benefit, both for the business and its employees.  Remote working, whether field or home based, delivers productivity benefits, where such benefits can be measured in both business administration cost reduction and increased employee job satisfaction and associated working conditions.

So how easy can it be to replace an aging Windows XP and/or Office 2003 application?

Entrypoint is a complete application development package for creating high-performance data entry applications.  Entrypoint software is built around a scalable, client-server architecture that interfaces with SQL databases for data storage.  Entrypoint data entry software interfaces with standard communications products and commercial networks.

Entrypoint is a web based data entry system that includes Application Studio, a local development tool that allows the user to easily create any data entry system, based upon their specific and typically unique business requirements.  The Entrypoint thin and thick clients let the user enter their data either directly via web resources or via a local workstation (E.g. PC), as per their requirements, while being connected to the same database.
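
Entrypoint provides this capability out of the box; purely to illustrate the underlying pattern of validating a captured record and storing it in a SQL database via ODBC, here is a hedged Python sketch using the pyodbc library.  The DSN, table and column names are hypothetical examples and are not Entrypoint objects or APIs.

# Illustrative only: validate a simple data entry record and insert it over ODBC.
# The DSN, table and column names are hypothetical, not Entrypoint artefacts.
import pyodbc

def store_record(dsn: str, record: dict) -> None:
    """Apply minimal field validation, then insert the record via ODBC."""
    if not record.get("customer_id") or not record.get("surname"):
        raise ValueError("customer_id and surname are mandatory fields")

    connection = pyodbc.connect(f"DSN={dsn}")      # ODBC data source
    try:
        cursor = connection.cursor()
        cursor.execute(
            "INSERT INTO data_entry (customer_id, surname, amount) VALUES (?, ?, ?)",
            record["customer_id"], record["surname"], record.get("amount", 0),
        )
        connection.commit()
    finally:
        connection.close()

store_record("ENTRYDB", {"customer_id": "C0001", "surname": "Smith", "amount": 125})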

Entrypoint Benefits: Today’s 21st Century business is focussed on delivering tangible business benefit and cost efficient customer facing solutions that can be rapidly deployed, while being secure and compliant:

  • Flexible Data Entry: Whether via Intelligent Data Capture (IDC) and/or Electronic Data Capture (EDC), Entrypoint can accommodate any business requirement, either from scratch, or perhaps via conversion from a legacy platform (E.g. DOS).
  • Rapid Application Deployment: Entrypoint can be deployed in hours, sometimes and typically by non-application development personnel, safeguarding long-term management and associated TCO concerns.
  • Audit: The Entrypoint Audit Trail Facility (ATF) tracks all changes made to records from the time they are first entered into the case report form throughout all editing activity, regardless of the number of users working on them.  The audit facility can be enabled on an application-by-application basis for all users, groups of users or individual users.
  • Security: Entrypoint includes a variety of features that yield the highest levels of critical security required for Clinical Trials.  Its inbuilt security features let you create a customized and granular security policy specific to your needs.  Entrypoint uses ODBC to connect to SQL databases for data storage, which provides an additional level of security; database logins, passwords and even built-in encryption, not always available for other data entry solutions.  Optional 128-bit encryption protects all messages sent to or from the server delivering significantly greater protection, not always available for other data entry solutions.

Entrypoint is one of the simplest but most comprehensive data entry solutions that I have encountered and provides a cost-efficient solution for both the smallest and largest of businesses.  Furthermore, in all likelihood, and definitely in real-life, an entry-level employee or graduate with programming skills could rapidly develop a Data Entry system with Entrypoint to replace any existing Windows XP (or any other Windows OS) based solution.  This observation alone dictates that somebody who actually works for the business, not a 3rd party IT professional, can not only perform the technical work required, but more importantly, be a company employee that can easily relate to and sometimes learn about the end-to-end business.

In the IT world, change is inevitable, and sometimes change is forced upon us.  Whatever your thoughts regarding end of support for Windows XP and Office 2003, there are options for you and your business to embrace this change, move forward and improve your processes.  You no longer have the option to pay Microsoft for extended support, so why not use this money and invest in a system that can be easily supported, and easily adapted in the future, to provide long-term benefit for your business!

Application Performance Tuning – Why Bother?

With older generations of Mainframe Operating Systems, certainly MVS/XA and perhaps MVS/ESA, application performance tuning was a necessity, not an afterthought.  Quite simply, the cost of Mainframe resources, namely CPU, memory and disk, dictated that your mission critical business application might not perform to business requirements, unless you tuned your programming code.  Programmers, both of the system and application variety, understood the bits and bytes of available programming languages (E.g. ASM, COBOL, PL/I) and Operating System (I.E. MVS), collaborating either via proactive process, or reactive problem solving.  With the continuing reduction of IT hardware component costs, the improvement in Operating Systems (E.g. 64-bit architecture) and newer programming languages (E.g. C, C++), it seems that application performance tuning is somewhat of an afterthought, but at what cost?

We all know that the cost of a Mainframe MIPS is significant, and although it might have reduced dramatically from a hardware viewpoint, from a software viewpoint, the cost remains largely static at ~£1,500-£3,500 per MIPS, per year, depending on your configuration.  So if your applications are burning several hundred if not several thousand extra MIPS unnecessarily, that’s very expensive indeed!  Additionally and just as importantly, a badly tuned system will manifest itself in slower transaction response times and longer batch jobs, if applicable, which could impact service availability.  So why is there a seeming reluctance to tune business applications, Mainframe resident or not?
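
As a back-of-the-envelope illustration only, assuming a hypothetical mid-range software cost of £2,500 per MIPS per annum and 500 MIPS of avoidable CPU consumption, the annual waste is considerable:

# Illustration only; £2,500 per MIPS per annum and 500 wasted MIPS are assumed figures
cost_per_mips_per_year = 2_500          # GBP, assumed mid-range software cost
wasted_mips = 500                       # hypothetical avoidable CPU consumption

annual_waste = wasted_mips * cost_per_mips_per_year
print(f"£{annual_waste:,} per annum")   # £1,250,000 per annum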

If ever there was a functional IT area where the skills gap has never been wider, then application performance tuning is said skill, when comparing the salty old sea dog Mainframe dinosaur, with the newer Mainframe technician!

From an application development process viewpoint, where does the application performance tuning task live; before or after implementation?  The cynical amongst us will know; if it’s after implementation, there’s a strong likelihood said activity will never be performed!  If it’s before implementation, how many projects incorporate a meaningful stress test, or measure transaction response times versus an SLA or KPI metric?  Additionally, if the project is high-priority and/or running behind schedule, then performance testing is an activity that is easily removed…

Back in the good old days, the late 1980’s to early 1990’s, some application performance tuning tools did start to emerge, most notably Strobe.  Strobe was useful to even the most accomplished of system and application programmer personnel, and invaluable to less experienced personnel, and so arguably Strobe became the de facto software tool for tuning Mainframe applications.  However, later releases of MVS (E.g. OS/390 and z/OS) and the non-event that was the Year 2000 (Y2K) seemed to remove the focus on and importance of application tuning.

Arguably most important of all, the software MIPS cost item, where Strobe and its competitors (E.g. ASG/BMC TriTune, CA Application Tuner, IBM APA, Macro4 ExpeTune, et al) utilize even more CPU to capture diagnostic trace information, contributed to the demise of application performance tuning.  However, those companies that have undertaken such application tuning activities in the last decade or so are sitting pretty, having reduced the CPU (MIPS) resource consumed, lowering TCO and optimizing performance accordingly.  In the 21st Century, these software solutions are classified as Application Performance Management (APM) solutions.

Is there a better and easier way to stimulate an interest in the application performance tuning discipline?  If the desire exists to tune an application, lowering CPU MIPS usage, optimizing service performance, then the traditional tools and methods mentioned previously exist, but perhaps a new (or not so new) CPU performance data source exists…

With the introduction of the z10 server, a new function CPU MF (CPU Measurement Facility) was incorporated.  Let’s not forget, z10 is now an n-2 technology, having been superseded by the z196/z114 and the latest zBC12/zEC12 generation of servers.  So each and every committed Mainframe customer should be positioned to benefit from the CPU MF function.

CPU MF provides optional hardware assisted collections of information about logical CPU activity executed over a specified interval in selected Logical Partitions (LPARs).  The CPU MF counters function is intended to be run on a constant basis to collect long-term performance data (I.E. SMF Record 113), in a similar manner to how you collect other performance data.  I have previously briefly discussed how CPU MF SMF data can be used to increase Mainframe Server Capacity Planning efficiencies. 

The CPU MF sampling function is a short duration, precise function that identifies where CPU resources are being used, to help you improve application efficiency.  Put very simply, CPU MF sampling data has minimal CPU overhead (E.g. ~0.1-1.0%) when collecting data (I.E. z/OS Hardware Instrumentation Services – HIS), but this data can then be used to identify CPU “hot spots”, which can then be further analysed to identify the “areas of code” generating the high CPU usage.  However, it was forever thus, whether an APM tool, or CPU MF sampling data, high CPU usage can be identified, but the application programmer must undertake the task of optimizing the application code!
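
The “hot spot” concept can be illustrated with a simple sketch; in practice the data collection is performed by z/OS HIS and the analysis by tooling such as zHISR, so the sample tuple format below (module, CSECT, offset) is a hypothetical simplification and not the actual HIS or SMF 113 record layout.

# Conceptual sketch of CPU hot spot analysis: given sampled instruction
# locations, rank the code sections consuming the most CPU.  The sample
# format is a hypothetical simplification, not real HIS/SMF data.
from collections import Counter

samples = [
    ("PAYROLL", "CALCTAX", 0x01A4),
    ("PAYROLL", "CALCTAX", 0x01A8),
    ("PAYROLL", "CALCTAX", 0x01A4),
    ("PAYROLL", "FORMATRPT", 0x0C10),
    ("BILLING", "READCUST", 0x0044),
]

def hot_spots(samples, top=3):
    """Aggregate samples by (module, CSECT, offset) and return the busiest."""
    counts = Counter(samples)
    total = len(samples)
    return [
        (module, csect, hex(offset), round(100 * hits / total, 1))
        for (module, csect, offset), hits in counts.most_common(top)
    ]

for module, csect, offset, pct in hot_spots(samples):
    print(f"{module}.{csect}+{offset}: {pct}% of samples")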

IBM have done a great job in providing CPU MF counters data, optimizing the Capacity Planning process with the SMF 113 record, and the realm of possibility exists with the sample data, but a software solution is required to analyse and summarize this data.

Currently there are very few software solutions, perhaps only one, that analyse CPU MF sample data, namely zHISR from Phoenix Software International.  zHISR interfaces directly with z/OS Hardware Instrumentation Services to collect data for hotspot analysis of customer, vendor, or operating system program execution.  zHISR features include:

  • Support for up to 128 simultaneous data collection events.  zHISR collections do not interfere with any HIS functions, including sample or counter collection.
  • System console commands for many zHISR functions.
  • An Application Programming Interface to COBOL and Assembler for starting and stopping data collections. Collection lengths for API generated collections have a time range of one second or more.
  • Ability to schedule a collection with JCL so that collection starts when a given job or step begins.
  • Ability to store data collections as z/OS data sets or UNIX files.
  • Support for collections against CICS/TS transactions.
  • Analysis based on a time range within the collected data for a narrower spotlight on problem code.

An intuitive ISPF dialog allows the user to easily produce a CPU hot spots analysis, which can then be used for identifying the offending code sections.  The user can then drill down and highlight the high CPU CSECT and program offset (instruction), comparing with their Associated Data (ADATA), and thus the source programming instruction.  Therefore the skill required to perform analysis is minimal, as is the CPU overhead in collecting analysis data, and so eradicating the potential barriers when embarking on an application tuning initiative.  Furthermore, the actual cost of deploying the zHISR software is not onerous and so perhaps each and every committed Mainframe user can easily include application performance tuning into their application development lifecycle processes. 

zHISR has a UNIX file system interface that lets you navigate the system and browse or delete files.  With zHISR, users can start and stop hardware event data collections and view the status of the current or prior HIS run.  zHISR also includes a memory display/alter utility that lets you view main storage in the CPU you are logged on to.  If zIIPs are present and zHISR is defined as an authorized subsystem, nearly all of the CPU processing used by zHISR is redirected to a zIIP.

There are also instances, however few and far between, where Mainframe customers have written their own proprietary in-house OLTP (On-Line Transaction Processor) and Relational Database Management Subsystem (RDBMS), where traditional APM software tools can’t provide a solution, only interfacing with underlying subsystems (E.g. Adabas, CICS, DB2, IDMS, WebSphere, et al).  In these instances, CPU MF and zHISR offer a solution to help such customers, who probably face challenges when they upgrade their Mainframe servers, safeguarding software and application code is compatible with the new hardware, and ideally, exploits the latest functionality.

In conclusion, application performance tuning has to be a very important if not mandatory activity for the Mainframe Data Centre.  Whether via CPU MF or traditional APM software solutions, the cost reduction and performance improvement benefits of tuning should be compelling reasons to proactively engage in application tuning activities.  From a skills viewpoint, maybe the KISS (Keep It Simple Stupid) principle can apply, where CPU MF collects the data very simply and efficiently, complemented by zHISR, analysing the data in an intuitive and cost optimized manner.

So turning the subject matter on its head, Application Performance Tuning – Why Bother?  Why not!

Further information can be found from my z/OS Application Performance Tuning presentation, delivered at UK GSE in November 2012.

COBOL – A Viable Programming Language?

For the last twenty years or so I have encountered many scenarios where Mainframe users consider migration to a Distributed Systems (E.g. Wintel, UNIX/Linux, et al) platform, where more often than not the primary reason seems to be “green screen” and/or “COBOL is a declining legacy language” based.

Going back to basics, COBOL is a Common Business Oriented Language, although the naysayers might say COBOL is a Completely Obsolete Business Oriented Language; we will perhaps try to be more dispassionate in this discussion…

Industry Analysts have stated that there are ~220 Billion lines of COBOL code and ~100,000 programmers and that COBOL applications process ~80% of business transactions daily, and that there are ~200 times more COBOL transactions processed daily, when compared with Google searches!  A lot of numbers and statistics, but seemingly COBOL is still widely used and accepted.  Even from a new development viewpoint, ~5 Billion lines of COBOL code per annum (~15% of Annual Global Development) is stated, suggesting that COBOL is not in any way obsolete or legacy, so why is COBOL perceived by some in a dubious manner?

Maybe because COBOL was introduced in 1959 and primarily it is deployed on the Mainframe, and so anything that is 50+ years old and has an association with the Mainframe just has to be dubious, doesn’t it?  Of course not, as this arguably “pioneering” programming language, one of the first to be “widely deployed”, allowed many global and significant businesses to grow, in tandem with the IBM Mainframe platform, automating and streamlining business processes, increasing productivity and so on.  So depending on your viewpoint, COBOL was either in the right place at the right time, stimulating the Data Processing (DP) and Information Technology (IT) revolution, or COBOL just got lucky, it was “Hobson’s Choice”…

Although there have been several iterations of COBOL standards (I.E. COBOL-68, COBOL-74, COBOL-85), primarily associated with the American National Standards Institute (ANSI) and more latterly COBOL 2002 (ISO), a COBOL program that was written and compiled on an IBM Mainframe several decades ago will most likely still run on the latest generation IBM Mainframe.  Put another way, its backwards compatibility has been significant, and although there were some migration considerations associated with the Language Environment (LE), the original COBOL Application Development investment has generated a readily usable Return On Investment (ROI) over and over again.  How true is this for other programming languages and computing platforms?  For the avoidance of doubt, a COBOL program that was written in 16-bit can still run today on a 64-bit platform, and with a modicum of evolution, fully exploit the latest functionality and 64-bit performance, with minimal fuss.  Meanwhile, how many revolutionary or significant upgrades have been required for Commercial Off The Shelf (COTS) software and associated bespoke application development code, to upgrade non-Mainframe platforms from 16 to 32 to 64-bit?

So, is COBOL a viable programming language of the future?  One must draw one’s own conclusions, but we can look to recent functional enhancements and statements of direction from an IBM Mainframe viewpoint.

In recent years IBM have actually increased the number of COBOL R&D personnel by ~100%, while increasing allocated investment, commitment and interest accordingly.  This observation, more than any other, suggests that at least from an IBM Mainframe viewpoint, COBOL remains an important focus.

From a technical function viewpoint, the realm of possibility exists with COBOL, interacting with all 21st century programming and function techniques, dismissing the notion that COBOL can only be considered as a traditional/legacy option for CICS-Batch applications and associated “green screen” environments, for example:

  • Support for CICS integrated translator
  • Support for latest SQL data types in syntax via DB2
  • Support for Java interoperability via object-oriented COBOL syntax
  • Support for accessing WebSphere enterprise beans
  • Support for Java SDK
  • Support for XML high speed parsing and validation (UTF-8, UTF-16 & various EBCDIC codepages)

From a strategic statement of direction viewpoint, IBM have declared the following major notable activities:

  • Performance and resource utilization optimization, reducing TCO accordingly
  • Improved middleware (I.E. CICS, DB2, IMS, WebSphere) programmability and problem determination
  • Improved capabilities (E.g. XML, Java, et al) for modernizing & creating business critical applications
  • Improved programmer (E.g. Usability and Problem Determination) productivity
  • Source and load (I.E. recompile not required) compatibility (E.g. old programs can call new and vice versa)

Even for those occasions where the IBM Mainframe platform might be decommissioned, COBOL can still be processed on alternative platforms via code migration techniques such as Micro Focus, where such functions and services can be Cloud based.  However, once again, isn’t the IBM Mainframe the ultimate “Cloud” platform, which has arguably been the case “forever thus”?

One must draw one’s own conclusions as to why the Mainframe platform and/or COBOL applications are often considered for replacement via migration, when the Mainframe platform is both strategic and cost efficient.  As with any technology decision, there is no “one size fits all” solution, but perhaps a little education can go a long way, and at least the acceptance that seeming “legacy” technologies are strategic and viable.