The Software Defined Mainframe (SDM): An Alternative Approach?

Some consider the IBM Mainframe to be the last bastion of proprietary computing platforms, for obvious reasons, namely the CPU server architecture and the single manufacturer, IBM.  The historical ability of the IBM Mainframe to transform Data Processing into Information Technology, while still participating in the Digital Era, is without doubt.  However, for many, the complicated and perceived ultra-expensive world of software pricing generates concern, largely based upon Fear, Uncertainty and Doubt (FUD), which might have generated years if not decades of under investment for those organizations with an IBM Mainframe.

Having worked with the IBM Mainframe for 35+ years, I have gained knowledge that allows cost optimization alongside contemporary usability, which, given the importance of the IBM Mainframe platform to IBM from a revenue viewpoint, should safeguard a long future for the platform.  However, the last decade or so has seen a rapid evolution in Open Source, DevOps, Enterprise Class Support for Distributed Platforms, Mobile and Cloud computing, et al, potentially generating an opportunity for the global IBM Mainframe user base to once again consider the platform's value proposition…

Let's consider this server platform choice from a business viewpoint.  On the one hand, there are the oft-quoted market statements, where 80%+ of corporate data resides on or originates from IBM Mainframes, while IBM Mainframes enable 70%+ of global commercial transactions, et al.  On the other, in recent times global businesses have leveraged cloud or Linux Open Source technologies to run their operations.  For instance, Netflix reportedly runs its media on demand business via the Amazon Web Services (AWS) cloud, while the same platform is facilitating a Data Centre reduction from 34 to 4 for General Electric (GE).  There are many other such "early adopters" of this commodity infrastructure provision opportunity, including Capital One, Hertz and Juniper, to name but a few.

Quite simply, the power of Mobile processors, primarily ARM, and their supporting software ecosystem empower each and every potential consumer with a palm-sized smart computing platform, while the power and supporting software ecosystem of x86 processors generate an environment for each and every global business, mature or not yet launched, to deliver an eminently usable and scalable IT Infrastructure for their business model.

Of course, the IBM Mainframe can do this; it always has been at the forefront of IT architectures and always will be, but for the "naysayers", its perceived high acquisition and running costs are always an easy target.  As somebody much cleverer than I once said, timing is everything, and we're now encountering a "golden sunset" for those Mainframe Baby Boomers, just like myself, who will retire in the next decade or so.  Recently I was talking with a large IBM Mainframe customer, who stated "we're going to lose 1,500 years of IBM Mainframe experience in the next 10 years; how can you replace that resource easily?"  Let's just think about that metric: ~50 people with an average of ~30 years' experience, all of whom will retire in a short time frame!  You must draw your own conclusions as to that conundrum; how do you replace that level of experience?

In conclusion, no matter what IBM deliver from an IBM System z viewpoint, there is no substitute for experience and skill, and no company, not even IBM, has a complete answer to skills provision.  In the last 10-20 years, Outsourcing or Managed Services has provided an alternative approach for some companies, but even this option draws upon a finite resource.  If we consider the CFO viewpoint, where the bottom line is the only true financial metric, it's easy to envisage a situation where many companies consider an alternative to the IBM Mainframe platform, both from a cost and a viability viewpoint.  As a lifelong IBM Mainframe champion and as previously stated, there will always be a solution for safeguarding the longevity and viability of the IBM Mainframe for any Medium to Large sized business.  However, now is the time to act: embrace the new Open Source, DevOps and Hybrid Cloud opportunities, to transition from a Baby Boomer to a Millennial Mainframe workforce!

Is there an alternative approach and what is the Software Defined Mainframe (SDM)?

Put simply, SDM is a technology from LzLabs enabling the migration of mission-critical workloads from legacy IBM Mainframe environments to x86 Linux platforms.  Put another way, LzLabs have developed a managed software container that provides enterprises with a viable way to lift and shift applications from IBM Mainframes into Red Hat Linux or Cloud environments.  At first glance, the primary keyword here is container; there was a time when the term container might have been foreign to the System z Mainframe, but with LinuxONE and z/VM, Docker and KVM are now commonplace and accepted functions.  The primary considerations for any platform migration would include:

  • Seamless Migration: The LzLabs Software Defined Mainframe (SDM) ensures the key capabilities of screen handling, transaction management, recovery and concurrency are preserved without changes to the applications. LzOnline is capable of processing thousands of online customer transactions per second using commercial off-the-shelf hardware.
  • Major Subsystem Compatibility: The LzLabs Software Defined Mainframe (SDM) safeguards 100% compatibility with existing job control syntax, and also enables job submission via network connected nodes that support conventional job entry protocols (see the sketch after this list). LzBatch provides a full spool capability that enables output to be managed and routed in familiar ways. Use of conventional job submission models, with standard job control, also means existing batch scheduling can operate with minimal changes.  Other solutions include LzRelational for Relational Database Management System (RDBMS) support and LzSecure, an authentication and authorization subsystem using security rules migrated from the incumbent IBM Mainframe platform.
  • Application Code Stability: An innovative approach that avoids the requirement to recompile or rewrite legacy COBOL or PL/I application source code. Leveraging functionality delivered by Cobol-IT and Eranea, there is also a simple and straightforward process to convert and potentially modernize existing application source code to Java.
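
Purely as an illustration of what "conventional job entry protocols" might look like in practice, the minimal Python sketch below submits a job over the long-standing FTP/JES style interface; the host name, credentials and JCL are hypothetical placeholders, and whether SDM (LzBatch) accepts this exact dialogue is an assumption based on the compatibility claim above.

```python
from ftplib import FTP_TLS
from io import BytesIO

# Hypothetical job submission via a conventional FTP/JES style protocol,
# as used for decades against JES2/JES3; host and credentials are placeholders.
JCL = """\
//MYJOB    JOB (ACCT),'SAMPLE',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IEFBR14
"""

with FTP_TLS("sdm-host.example.com") as ftp:
    ftp.login(user="batchusr", passwd="secret")
    ftp.prot_p()                         # protect the data connection
    ftp.sendcmd("SITE FILETYPE=JES")     # switch from data set mode to job submission mode
    reply = ftp.storlines("STOR myjob.jcl", BytesIO(JCL.encode("ascii")))
    print(reply)                         # spool reply, typically including the job number
```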

The realm of possibility exists and there are likely to be a number of existing IBM Mainframe users that find themselves with challenges, whether a retiring workforce or back-level application code.  The Software Defined Mainframe (SDM) solution provides them with a potential option for simplifying the transition process, with seemingly minimal risk, while eradicating any significant dependence on another Distributed Systems platform supplier during the arduous application source and data migration process.

From my viewpoint, I hope that this innovative LzLabs approach is a wake-up call for IBM themselves, who continue to deliver a strategic Enterprise Class System z platform with all of its long-term challenges, primarily cost based, notably the intricate and over-complicated sub-capacity software pricing structure.  Without doubt, any new workload can easily be accommodated for low cost via the recent LinuxONE offering, but somewhere along the line, IBM perhaps overlooked a number of Small to Medium sized customers, who once might have used entry level or plug-compatible platforms, including but not limited to the S/390 Integrated Server, MP3000, FLEX-ES zFrame, T3 Liberty, et al.  Equally, from a dispassionate viewpoint, I welcome the competition of the LzLabs Software Defined Mainframe (SDM) offering and I would encourage all CIO and indeed other CxO personnel to consider the merits of this solution.

z/VM: The Most Flexible System z Operating System?

When considering IBM System z Operating Systems, typically z/OS is considered to be the flagship product, delivering best-of-breed features, including but not limited to performance, reliability, availability, security and capacity.  Therefore it is easy to overlook the flexible virtualization capabilities of z/VM, delivering the architectural foundation for the increasingly attractive LinuxONE offering.  Quite simply, the fundamental strength of z/VM is an ability for hundreds if not thousands of virtual machines to share system resources with high levels of resource utilization.  The recent release of z/VM V6.4 provides even greater levels of scalability, security, resource optimization and efficiency to create opportunities for cost savings, while providing a robust foundation for cloud computing on z Systems servers.

Major technical highlights of z/VM 6.4 include:

  • Simultaneous MultiThreading (SMT) technology extends per-core processor capacity growth beyond single-thread performance for Linux on z Systems running on an IBM Integrated Facility for Linux (IFL) specialty engine on a z13, z13s or LinuxONE server.
  • Enhanced Real & Guest Virtual Memory Support. The maximum amount of real storage supported by z/VM increases from 1 to 2 TB, whereas the maximum supported virtual memory for a single guest remains at 1 TB.  Maintaining the virtual-to-real memory ratio while doubling the real memory results in doubling the active virtual memory that can be used effectively.  This virtual memory can be sourced from an increased number of virtual machines and/or larger virtual machines, delivering greater leverage of white space.
  • Surplus CPU Power Distribution Improvement. Virtual machines not utilizing all of their entitled CPU power, determined by their share setting, generate "surplus CPU power".  This surplus CPU resource can be distributed to other virtual machines in proportion to their share settings, managed independently across virtual machines for each processor type, namely General Purpose (GP), zIIP, IFL, et al (see the sketch after this list).
  • Guest Large Page Support. z/VM 6.4 now includes support for the Enhanced Dynamic Address Translation (DAT), allowing a guest machine to exploit large (1 MB) pages.  Larger page sizes decrease the amount of guest memory needed for DAT tables, therefore decreasing the overhead required to perform address translation.  In all cases, guest memory is mapped into 4 KB pages at the host level.
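
To make the surplus CPU distribution idea concrete, here is a small illustrative sketch (emphatically not z/VM code) that redistributes unused entitlement to the remaining virtual machines in proportion to their share settings; the virtual machine names, shares and consumption figures are invented examples.

```python
# Illustrative only: proportional redistribution of surplus CPU entitlement
# for one processor type (E.g. IFL), based on relative share settings.
def redistribute_surplus(shares, demand, capacity=100.0):
    """shares: relative share per VM; demand: CPU each VM actually wants (same units as capacity)."""
    total_share = sum(shares.values())
    entitled = {vm: capacity * s / total_share for vm, s in shares.items()}

    # Surplus comes from VMs that want less than their entitlement
    surplus = sum(max(entitled[vm] - demand[vm], 0.0) for vm in shares)

    # Distribute that surplus to the VMs that want more, in proportion to their shares
    hungry = {vm: s for vm, s in shares.items() if demand[vm] > entitled[vm]}
    hungry_share = sum(hungry.values()) or 1.0
    return {
        vm: min(demand[vm], entitled[vm] + surplus * hungry.get(vm, 0.0) / hungry_share)
        for vm in shares
    }

print(redistribute_surplus(
    shares={"LINUX01": 200, "LINUX02": 100, "LINUX03": 100},
    demand={"LINUX01": 80.0, "LINUX02": 10.0, "LINUX03": 40.0},
))  # LINUX02's unused entitlement flows to LINUX01 and LINUX03, weighted by their shares
```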

From a Linux environment viewpoint, z/VM V6.4 is a supported environment using IBM Dynamic Partition Manager for Linux-only systems with SCSI storage.  This simplifies system administration tasks, providing a more positive experience for those with limited System z Mainframe administration skills.  IBM Wave Version 1 Release 2 is now included in z/VM V6.4 as a priced feature, simplifying the task of administering a z/VM environment.  Using Dynamic Partition Manager, an inexperienced z/VM technician can create a z/VM partition in ~10 Minutes!

Supporting today's agile application development and hybrid cloud implementations, z/VM and LinuxONE virtual servers can be natively managed using OpenStack open cloud architecture-based interfaces, via IBM OpenStack for z Systems.  OpenStack is an Infrastructure as a Service (IaaS) cloud computing open source project, managed by the OpenStack Foundation.  With the adoption of OpenStack as part of the IBM cloud strategy, z/VM drivers provide OpenStack enablement for z/VM virtual machines running Linux on z Systems and LinuxONE.  Open standards such as OpenStack enable enterprises to be more agile, resolving potential issues such as vendor lock-in, technical expert recruitment, long application development cycles and security challenges.
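
As a purely illustrative sketch, the following uses the generic openstacksdk Python client to provision a Linux guest through the standard OpenStack compute interfaces; the endpoint, credentials, image, flavor and network names are hypothetical, and I am assuming the z/VM drivers expose these standard APIs as described above.

```python
import openstack

# Connect to a hypothetical OpenStack endpoint fronting z/VM; all values are placeholders
conn = openstack.connect(
    auth_url="https://zvm-cloud.example.com:5000/v3",
    project_name="linux-guests",
    username="admin",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Provision a Linux on z Systems guest as an ordinary OpenStack server
image = conn.compute.find_image("rhel-s390x")
flavor = conn.compute.find_flavor("medium")
network = conn.network.find_network("guest-lan")

server = conn.compute.create_server(
    name="lnxguest01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)   # block until the guest is ACTIVE
print(server.status)
```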

The next evolution of z/VM cloud enablement technology is the OpenStack Liberty based Cloud Management Appliance (CMA), available for z/VM 6.3 and 6.4.  z/VM installations wanting to deploy cloud based solutions beyond Cloud Manager with OpenStack for z Systems, should utilize the cloud enablement support provided by the z/VM OpenStack Liberty based CMA.  This OpenStack Liberty based Cloud Management Appliance (CMA) replaces the IBM Cloud Manager with OpenStack for System z solution, withdrawn from marketing in June 2016.

The z/VM hypervisor extends the capabilities of z Systems and LinuxONE environments from the standpoint of sharing hardware assets, virtualization facilities and communication resources.  In conjunction with IBM Wave, z/VM makes it easier to derive maximum value from large-scale virtual server hosting on z Systems and LinuxONE.  These benefits include software and personnel savings, operational efficiency, power savings and optimal qualities of service.  The z/VM virtualization technology is designed to enable organizations to run hundreds to thousands of Linux servers on a single System z Mainframe footprint, alongside other System z Operating Systems, such as z/OS and z/VSE, or as a large-scale enterprise LinuxONE server solution.

Advanced virtualization features like multisystem virtualization and live guest relocation with z Systems, LinuxONE, z/VM, and Linux on z Systems or LinuxONE help to provide an efficient infrastructure for deploying private clouds to support workloads that scale both horizontally and vertically at a low total cost of ownership.

Although some might consider z/OS to be the flagship IBM System z Mainframe Operating System, arguably z/VM is the industry standard for optimal resource virtualization for numerous Operating System deployments.

IBM Doc Buddy: System z Mobile Problem Diagnosis

Having worked with the IBM Mainframe over the last several decades or more, I have always found a need for quick access to error messages, for obvious reasons.  In the 1980s, I would have a paper copy of the "most common" MVS messages I was likely to encounter.  In the 1990s, the adoption of optical media and the introduction of BookManager allowed the transport of many more messages, for numerous products, on CD-ROM.  With the advent of higher speed Broadband, Wi-Fi and Mobile networks, I graduated to accessing BookManager on-line and eventually to using the Mobile edition of LookAt.  So, isn't it time for an IBM documentation app?

In August 2016, IBM introduced Doc Buddy, a no-charge mobile application that enables retrieval of z Systems message documentation and provides the following values:

  • Enables looking up message documentation without Internet connections after the initial download
  • Improves your information experience
  • Accelerates the time you spend in resolving problems
  • Includes links to the relevant product Support Portals and supports calling a contact from the app

IBM Doc Buddy provides the message documentation for products including z/OS, z/VM, TPF, DB2, CICS, IMS, ISPF, Tivoli OMEGAMON XE for Messaging for z/OS, IBM Service Management Unite, IBM Operations Analytics for z Systems, InfoSphere, et al.

Obviously, to make this app work locally, you need to download the relevant manuals to your Mobile device, which might generate storage capacity considerations.  However, once downloaded, this is a great tool for quick access to error messages, especially at those times when you can get a mobile signal to take a call, but have no or limited access to mobile data or Wi-Fi services.

I have used this app on both iOS and Android and it works great.  At the time I downloaded this app, there were fewer than 100 downloads on both the Apple and Google platforms.  Therefore, if you ever need to access System z error messages, give this app a go, as IBM have dropped support for LookAt.  It's an awful lot easier than accessing paper manuals or firing up your PC to access a CD-ROM!

zAPI: System z Deployment Into The API Economy

Having been in the IT industry for 35+ years, I have always fully embraced and learned new technologies, to find strategic solutions for business challenges.  Obviously, starting in 1980, my heritage is IBM Mainframe, supplemented by UNIX, Wintel and Linux along the way.  Each and every platform has its merits, and during this 35+ year period, I have attended many conferences, for all platforms.  What I have noticed during this period is the attendance of many IBM Mainframe CIO, CTO or Chief Architect individuals at non-IBM Mainframe conferences, but very few, if any, equivalent Distributed Systems personnel at IBM Mainframe conferences.

I’m always surprised and disappointed to hear about organizations talking about decommissioning the IBM Mainframe platform, with tenuous reasons, based on Distributed Systems FUD messaging, as opposed to their own business requirements.  Thankfully these scenarios are decreasing over the years.  Presumably if an organization decides to migrate from one Distributed Systems platform to another or perhaps the Cloud, they do at least attend the relevant platform conferences to make an informed decision.

Over the last 25 years or so, IBM themselves have competed across differing divisions and options, whether UNIX (AIX), System z or, in recent years, Linux on z Systems, most notably with the LinuxONE launch at LinuxCon 2015.  One would hope that the world's key IT decision makers might attend LinuxCon with an open mind and learn more about the System z Mainframe?

A ridiculous notion might be that one server platform technology can satisfy a 21st Century organization's IT infrastructure for their mission critical services.  Clearly that has not been the case since the advent of Client Server, and today's emerging Digital business requires an infrastructure of multiple layers, where the underlying server technology is somewhat arbitrary, and arguably a commodity resource.  Conversely, the underlying data and associated applications differentiate one business from another, delivering business value and competitive edge.

Let’s take some time to consider this IT architecture design, which very quickly dismisses any notion that one server technology delivers all business requirements:

Such an architecture diagram does not impose any technology decisions.  Conversely, it explores the "data journey" from access or creation, via Systems of Engagement (SoE), to eventual storage within Systems of Record (SoR) data repositories (I.E. Database).  Some might say it was forever thus, with the exception of the Multi-Channel SDKs & APIs layer, where the savvy organizations will embrace DevOps, Hybrid Cloud and connectivity (I.E. API, SDK) solutions, seamlessly integrating modern agile applications with that most valuable business asset, Systems of Record (SoR) data.

Today's Application Developer doesn't need to concern themselves with the platform used for their DevOps application processes, the Transaction Server or indeed the Database Server.  Sure, several decades ago, maybe even a decade ago, application code was deeply associated with, if not confined to, a specific CPU server architecture.  Clearly that is no longer the case.  Any organization that still thinks in this legacy manner is behind the times, and this is unfortunate.  Associating such outdated thinking with the System z Mainframe is arguably careless, and not a reason for dismissing an incumbent System z platform, or for not considering a System z platform in the future.

Arguably the greatest strengths of today’s System z IBM Mainframe, currently packaged as the z13 or LinuxONE, are as a Database Server (E.g. DB2), Transaction Server (E.g. CICS, WebSphere Application Server) and Security Server (E.g. ACF2, RACF, Top Secret).  From a LinuxONE viewpoint, it’s just another server, capable of processing all of the latest strategic Open Source and Commercial Off The Shelf (COTS) Cloud, Database and Application solutions, while benefitting from the unparalleled System z Quality of Service (QoS) attributes.

However, for those organizations already deploying a System z Mainframe, its greatest perceived issue is TCO.  Without doubt the convoluted and intricate Workload Licence Charges (WLC) are unnecessarily complicated and perceived as being very expensive.  Optimizing these costs requires a modicum of expertise, safeguarding that the best contractual conditions are negotiated.  However, I encounter the same complexities with Distributed Systems platforms, where software license costs can spiral out of control for significant CPU capacity deployments.  Whatever platform is deployed, System z Mainframe or Distributed System, unless the business has the requisite skills in place, technical and commercial, to safeguard the lowest cost possible, commercial ISV suppliers will take advantage of such an oversight.

I'm not advocating any server technology, System z Mainframe, Distributed System or Cloud, as each resource has its merits, depending on the business requirement.  However, today's 21st Century organization must enable new business channels by leveraging, and arguably monetizing, their Systems of Record (SoR) enterprise data.

Today, organizations need to consider an API Economy, where they expose their internal digital business assets or services in the form of Web API services to external 3rd party partners and consumers, with an overall objective of unlocking increased business value via the creation of new assets.  Such an API Economy will require integration of Transaction and Data resources, specifically:

  • Centrally manage the consumption of enterprise wide business logic, for both Systems of Record (SoR) & Systems of Engagement (SoE)
  • Extend business (E.g. Product, Brand) reach from Systems of Record (SoR), incorporating Systems of Engagement (SoE)

Previously I wrote about How to Connect Mobile Workloads to System z, detailing the conceptual steps required to expose existing SoR data assets with SoE transaction services, via z/OS Connect.  For a fully integrated end-to-end solution, we must also consider the Application Programming Interfaces (API), being the digital glue that seamlessly links applications, services and systems together.
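
For illustration only, a consuming application might then reach such an exposed SoR asset with nothing more than a standard HTTPS request; the minimal Python sketch below assumes a hypothetical z/OS Connect REST endpoint, path, credentials and JSON payload.

```python
import requests

# Hypothetical z/OS Connect endpoint exposing a CICS account inquiry as a REST API;
# the host, path, credentials and certificate bundle are placeholders.
ZOS_CONNECT_URL = "https://zosconnect.example.com:9443/accounts/v1/12345"

response = requests.get(
    ZOS_CONNECT_URL,
    auth=("mobileapp", "secret"),   # basic authentication, SAF/RACF backed in practice
    verify="ca-bundle.pem",         # validate the server certificate
    timeout=10,
)
response.raise_for_status()

account = response.json()           # JSON payload mapped from the SoR data structure
print(account["balance"])           # hypothetical field name
```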

IBM API Connect is a solution that manages the API lifecycle for both On-Premises and Cloud environments.  IBM API Connect delivers capabilities to Create, Run, Manage & Secure API resources and Microservices.  It also enables you to rapidly deploy and simplify API administration, across the organization.

API Connect can be deployed On-Premises via Linux on z Systems, in the cloud (E.g. Bluemix), as well as all other popular Distributed Systems.  Once again, the main message is that the chosen server is arbitrary, System z Mainframe, Distributed System or Cloud.  The server should be considered as a commodity resource, leveraging from existing business logic (I.E. SoE) and data (I.E. SoR), while evolving existing Application Lifecycle Management (E.g. Agile, API Economy, DevOps) is the key.

My final observation concerns the Mainframe Baby Boomer (E.g. Born ~1960) versus the Millennial (E.g. Born ~1995) technical personnel resource.  Without doubt, there are significant differences in their approach to application programming, but only one resource, namely the Baby Boomer, knows the business really well.  I think these folks have the ability to learn another 21st Century programming language, as well as COBOL, but perhaps their best attribute is an analytical role, especially for the integration of SoE and SoR layers.  Working very closely with Millennial technical resources, delivering the new Application (I.E. App, API) resources, the Mainframe Baby Boomer still has something valuable to offer in their final employment years: delivering value from an analytical viewpoint, while transferring their skills and knowledge to their successors, namely the Millennials.

In conclusion, dismissing any server technology for Fear, Uncertainty or Doubt (FUD) reasons, is an unproductive and ridiculous notion.  More importantly, what might your business lose in opportunity, spending several years or more, migrating from one platform to another, while your competitors are embracing the Digital Age with an API Economy approach, delivering more value from their existing business SoE (transactions) and SoR (data) assets?

Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications transactions identified as originating from a recognized Public Cloud offering, including but not limited to, Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the Rolling 4 Hour Average (R4HA) for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  SCRT then determines the billable MSU peak for a given MLC software program on a CPC using these adjusted MSU values.  As per MWP, this will only be of benefit if the Public Cloud originated transactions generate a spike in the current R4HA.
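
A simplified sketch of that hourly adjustment follows; the LPAR names and MSU figures are invented examples purely to illustrate the arithmetic, not SCRT output or the precise SCRT algorithm.

```python
# Illustrative only: hourly adjustment of Sub-Capacity MSU values for zWPC,
# mirroring the MWP approach described above.
CLOUD_DISCOUNT = 0.60  # 60% of the Public Cloud R4HA MSU value is deducted

def adjusted_msu(r4ha_msu_by_lpar, cloud_msu_by_lpar):
    """Return the adjusted, aggregated MSU value for one reporting hour."""
    total = 0.0
    for lpar, r4ha in r4ha_msu_by_lpar.items():
        cloud = cloud_msu_by_lpar.get(lpar, 0.0)
        total += max(r4ha - CLOUD_DISCOUNT * cloud, 0.0)
    return total

# One reporting hour: overall R4HA per eligible LPAR and the Public Cloud share of each
hourly = adjusted_msu(
    r4ha_msu_by_lpar={"PROD1": 400.0, "PROD2": 250.0},
    cloud_msu_by_lpar={"PROD1": 100.0, "PROD2": 20.0},
)
print(hourly)  # (400 - 60) + (250 - 12) = 578 MSU for this hour; the monthly peak of such values drives billing
```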

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a WorkLoad Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction level attribute in WLM classification, allowing for identification of mobile transactions and their associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud will supplement Mobile transactions.

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, being analogous to MWP, where reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real-time, proactively implementing MSU optimization activities seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!

All Flash & Substance – Is The System z Microsecond The New Millisecond?

Is 2016 the year of the All Flash disk array?  Seemingly from a System z perspective, 2016 has seen improvement in the All Flash disk array offerings from the major disk suppliers, namely EMC, HDS and IBM.  From a usability perspective, managing latency might be the overall challenge, where these ultra-fast SSD systems are delivering I/O performance response times measured in the ~250-500 Microsecond (μs) range, potentially consigning the traditional Millisecond (ms) measurement to history!

Whatever the speeds and feeds might be, as of 2016, the benchmark for a System z All Flash Disk Array is seemingly measured @ <500 Microsecond (μs) response time, supporting ~n PB of capacity and delivering ~nnn GB/s throughput for mixed read/write workloads.  Of course, strong encryption, typically full disk Data @ Rest Encryption (D@RE) based, and full seamless data replication interoperability are also mandatory.

Historically we evolved from Data Processing to Information Technology, not only automating the capture and processing of data, but gradually evolving our processes, using this data for business advantage.  In recent years, the information explosion has dictated that each and every business must be a cognitive business, using intelligent analytics to gain insight and faster decision-making from the business data collected.

Currently the Internet of Things (IoT) supplements the medium-term Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, being the processes and associated solutions required by cognitive businesses to make timely and informed decisions, capturing deeper customer insight, ultimately delivering competitive advantage.  Therefore the 21st century business generates a significant requirement for storage capacity and performance to fully realize the benefits of this truly business aligned cognitive approach.

The largest global organizations from all verticals leverage from the power and true 24*7*365 availability and reliability of the System z Mainframe to power enormous relational databases, processing millions of customer transactions on an hourly basis.  These always-on, mission critical business environments demand the performance, reliability, TCO and System z platform integration delivered by the associated DASD (3390) subsystem.

Each and every System z user will have their Independent Hardware Vendor (IHV) of choice for delivering disk storage, in alphabetical order, EMC (E.g. VMAX AFA/All Flash Array), HDS (HAF/Hitachi Accelerated Flash) or IBM (E.g. DS8888).  The choice of disk storage was forever thus, reviewing the market place and choosing the best option for your business.  What might require reflection is how the DASD I/O subsystem is managed and the associated interaction with said IHV supplier.  Systems Management solutions such as Easy Disk Analyze Mainframe (EADM) and IntelliMagic Vision (Disk Magic) will certainly simplify the analysis and presentation of DASD subsystem performance data.

However, the emphasis of the actual System z DASD subsystem for an All Flash array might move from being an internal consideration to a direct and timely communication with the IHV supplier.  Put very simply, in an environment where Mission Critical systems rely upon ultra fast processing of massive amounts of data, any flash memory issues, whether capacity or defect related, will need IHV interaction ASAP, arguably "Before The Event".  As with the System z Server itself, where we're used to On/Off Capacity on Demand (OOCoD) processes, maybe we need to consider a similar approach with our All Flash System z DASD arrays.  For the avoidance of doubt, as opposed to waiting for an issue to impact our business, maybe we need to work smarter with our IHV, to safeguard that sufficient flash memory is available, to proactively resolve capacity or defect issues…

Aligning this with our traditional 3390 DASD I/O subsystem analysis, which might have been on a daily basis, from the rich RMF/CMF data resource, we must fully automate this process to minimize or eliminate the Mean Time To Resolution (MTTR)!  The ultimate benefit will be the delivery of meaningful messages that incorporate our 3rd party IHV supplier, who potentially with Remote Support Facility (RSF) type processes, deploys the “Golden Screwdriver” to seamlessly safeguard the performance profiles of our Mission Critical business applications, leveraging from the latest All Flash disk array.

In conclusion, as always, technology can deliver business benefits, with substance, and this includes All Flash disk arrays.  As always, what might need to evolve are the associated Systems Management processes.  Therefore asking yet another potential rhetorical question, what is more important, the System z Server or timely data access?  The diplomatic answer is that they’re equally important and if so, let’s safeguard the availability of All Flash memory for our DASD subsystems, with the requisite levels of meaningful proactive reporting and IHV supplier interaction.

Blockchain: A New Application Development Paradigm – What About System z?

Since the inception of Data Processing and the advent of the IBM Mainframe there has been a progressive movement to deliver the de facto “System Of Record (SOR)”, typically classified as a centralised database and related applications.  The key or common denominator for this “Golden Record” is somewhat arbitrary, but more often than not, for most businesses, it will be customer or product identity related.  The benefit of identifying and establishing an SOR is the reuse of this data, for a multitude of different business usage scenarios.

From an application programming viewpoint, historically there was a structured approach when delivering new business function, whether with bespoke programs or Commercial Off the Shelf (COTS) software packages.  More recently data analytics has accelerated this approach, where new business opportunities can be identified from data trends, with near real-time processing, while DevOps frameworks allow for rapid application delivery and implementation.  However, what if there was a new approach with a different type of database and as a consequence, a new approach to application programming?

From a simplistic viewpoint, Blockchain architecture is analogous to traditional database processing, although the interaction with said Blockchain database is vastly different, changing from a centralised to a decentralised focus.  Therefore, for application developers, Blockchain is a paradigm shifting architecture for how software applications will be architected and coded.  Recognition of this new and rapidly emerging computing paradigm is of vital importance, because it's the cornerstone for the creation of decentralised applications, a logical and natural evolution from distributed computing architectural constructs.

If we take some time to step back from the Information Technology world and consider the possibilities when comparing a centralised versus decentralised approach, the realm of possibility exists for a truly global interconnectivity approach, which isn’t limited to a specific discrete focus (E.g. Governance, Market, Business Sector, et al).  In theory, decentralised applications might deliver a dynamic and highly collaborative business approach…

A Blockchain is a pseudo linear container space (block) to store data for "controlled public usage".  In theory, with the right credentials, this data can be accessed by any user!  The Blockchain container is secured with the originator's key, so only the key holder or an authorised program can unlock the container data.  This is the fundamental difference between a database and a Blockchain.  For a Blockchain, the header record can be considered "eligible for Public usage".

The data stored within a Blockchain might be considered as a "token", the most obvious implementation being Bitcoin.  Generically, Blockchain might be considered as an alternative and flexible data transfer system that no private or public authority, and especially no malicious third party, can tamper with, because of the encryption process.  Put really simply, the data header has "Public" visibility, but data access requires "Private" authenticated access.

From a high-level viewpoint, Blockchain can be considered as an architectural approach, connecting an infinite number of peer computers, collaborating with a generic process for releasing or recording data, based upon cryptographic transactions.

One must draw one’s own conclusions as to whether this Centralised to Distributed to Decentralised data and application programming approach is the way forward for their business.

Decentralised Consensus is the inverse of a centralised approach, where one central database was accessed to validate transaction processing.  A decentralised scheme transfers authority and trust to a decentralised virtual network, enabling processing nodes to continuously access or record transactions within a public block, creating a unique chain for modification operations, hence the Blockchain terminology.  Each successive data block contains a unique fingerprint (hash) of the previous block.  The basic premise of cryptographic processing applies, where hash codes are used to secure transaction origination authentication, eliminating the requirement for centralised processing.  Duplicate transaction processing is eliminated because of Blockchain and the associated cryptographic processing.
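
To illustrate the hash chaining principle only, and not any specific Blockchain implementation, a minimal Python sketch:

```python
import hashlib
import json
import time

# A minimal hash chain: each block carries the hash of its predecessor,
# so altering any earlier block breaks the linkage of every later one.
def make_block(data, previous_hash):
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block({"note": "genesis"}, previous_hash="0" * 64)
block1 = make_block({"payment": 100, "currency": "GBP"}, genesis["hash"])
block2 = make_block({"payment": 250, "currency": "GBP"}, block1["hash"])

def verify(chain):
    """Walk the chain and check each block references its predecessor's fingerprint."""
    return all(curr["previous_hash"] == prev["hash"] for prev, curr in zip(chain, chain[1:]))

print(verify([genesis, block1, block2]))  # True; tampering with block1 would break block2's link
```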

This separation of consensus (data access) from the actual application itself is the fundamental building block for a decentralised application programming approach.

Smart Contracts are the building blocks for decentralised applications.  A smart contract is a small self-contained program that you entrust with a value unit (token) and associated rules.  The simple philosophy of a smart contract is to programmatically facilitate transactional contractual governance between two or more parties via the Blockchain.  This eliminates the requirement for an arbitrary 3rd party authority for governance, when two or more parties can agree an exchange between themselves.  Even today, this type of approach is not unusual between organizations, typically based upon a data (file) interchange standard (E.g. Banking).

Put simply, smart contracts eliminate the requirements of 3rd party intermediaries for transaction processing.  Ideally, the collaborating parties define and agree the required policy, embedded inside the business transaction, enabling a self-managed process between nodes (computers) that represent the reciprocal interests of the associated users and owners.
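
As a toy illustration of that idea, and nothing more, the following sketch embeds a simple rule alongside a value unit; real smart contracts execute within a Blockchain runtime rather than as standalone code, and the party names and amount here are invented.

```python
# Toy illustration of a smart contract: a value unit (token) plus an embedded rule,
# settled only when the agreed condition between the two parties is met.
class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer = buyer
        self.seller = seller
        self.amount = amount            # the entrusted value unit (token)
        self.delivery_confirmed = False
        self.settled = False

    def confirm_delivery(self, party):
        # Only the buyer can confirm receipt of the goods or service
        if party == self.buyer:
            self.delivery_confirmed = True

    def settle(self):
        # The embedded rule: funds release only after confirmed delivery
        if self.delivery_confirmed and not self.settled:
            self.settled = True
            return f"{self.amount} released to {self.seller}"
        return "conditions not met"

contract = EscrowContract("buyer-node", "seller-node", 500)
contract.confirm_delivery("buyer-node")
print(contract.settle())  # 500 released to seller-node, with no 3rd party intermediary
```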

Trusted Computing combines the architectural foundations of Blockchain, decentralised consensus and smart contracts, enabling the spread of resources and transactions with a trusted “peer-to-peer” relationship, in theory enabling trust between numerous nodes (computers).

Previously, institutions and central organizations were necessary as trusted authorities.  Deploying a Blockchain approach, these historical centralised functions can be simplified via smart contracts, governed by decentralised consensus within a Blockchain.

Proof of Work is an important concept for identifying the unequivocal authenticator of transactions, allowing authorised access to participate in the Blockchain system.  Proof of work is a fundamental building block because, once created, it cannot be modified, being secured by cryptographic hashes that ensure its authenticity.  This also prevents users from changing Blockchain records without reprocessing the "proof of work".
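
A minimal illustration of the proof-of-work principle, assuming a simple leading-zeros hash target; the payload and difficulty are arbitrary examples.

```python
import hashlib

# Illustrative proof of work: find a nonce such that the hash of the block contents
# meets a difficulty target (a required number of leading zero hex digits).
def proof_of_work(payload: str, difficulty: int = 4) -> int:
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{payload}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce        # the work is the search; verification is a single hash
        nonce += 1

nonce = proof_of_work("block-contents")
print(nonce)  # changing the block contents means redoing this search from scratch
```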

It therefore follows, proof of work will be expensive to maintain, with likely future scalability and security issues, depending on the data user (miner) requirements and incentives, which in all likelihood, will reduce over time.  As we all know, most data access is high when data has been recently created, rapidly decreasing to low or even null after a limited period of time.

Proof of Stake is a more elegant and alternative approach, determining which user can update the consensus, while preventing unwanted forking of the underlying Blockchain, being a more cost efficient approach, while being more difficult and expensive to compromise.

Once again, if we consider the benefits of Blockchain from a business processing viewpoint, there is a clear and present opportunity to eliminate manual or semi-automated processes, both internal and external to the business.  This could expedite the completion of processes that previously required days or even weeks to complete, while reducing the potential for human error.  A simple example might be a car purchase, based upon 3rd party finance.  Such a process typically includes 3rd party data requirements, for vehicle provenance, credit scoring, identity proof, et al.  If the business world looks at the big picture, they can simplify and automate their processes, by collaborating with existing and, more likely, yet to be identified partners.  The benefits are patently obvious…

From a System z viewpoint, recent technological developments leverage from existing IBM resources, including the LinuxONE, Bluemix and Watson offerings:

  • LinuxONE: The System z and LinuxONE platforms are best placed to drive Blockchain innovation, arguably via the Open Mainframe and Hyperledger projects.  IBM supports testing and development of the open Blockchain fabric code for developers on their LinuxONE Community Cloud.
  • Bluemix: Via the IBM Blockchain services available on Bluemix, developers can access fully integrated DevOps tools for creating, deploying, running and monitoring Blockchain applications on the IBM Cloud.
  • Watson: Leveraging from the Watson IoT Platform, IBM will enable information from devices such as RFID-based locations, barcode-scan events or device-reported data, to be used within the IBM Blockchain. Devices will be able to communicate to Blockchain based ledgers to update or validate smart contracts.

From a business benefits viewpoint, the IBM System z platform is ideally placed for Blockchain deployment, being a highly secure EAL5+ certified platform.  Hardware accelerators deliver high speed secure encryption and hashing, supplemented by tamper-proof security Crypto Express modules for key management.  Numerous memory resident partitions can also be created rapidly to keep ledgers separate and secure.  As per usual, the System z platform has the fastest commercial processor, a highly scalable I/O system to handle massive numbers of transactions, ample memory for Blockchain operations and an optimised secure network for optimised Blockchain peer communications.

Returning full circle to where this article started, the System z Mainframe is arguably the de facto System Of Record platform for the world's traditional Fortune 500 or Global 2000 businesses.  These well established businesses have in all likelihood spent several decades or more establishing this centralised application programming and database usage model.  The realm of opportunity exists to make this priceless data asset available to numerous businesses, both large and small, via Blockchain architectures.  If we consider just one simple example, a highly globalised and significant Banking institution could facilitate the creation of a new specialised and optimised "challenger banking" operation, for a particular location or business sector, leveraging from their own internal System Of Record data and, perhaps, vital data from another source.  One could have the hypothetical debate as to whether a well-established bank is best placed for such a new offering, but with intelligent collaboration, delivering a valuable service to a new market, where such a service has not been previously possible, doesn't everybody win?

Perhaps with Blockchain, truly open and collaborative cooperation is possible, both from a business and technology viewpoint.  For example, why wouldn’t one of the new Fortune 500 companies such as a Social Media company with billions of users, look to a traditional Fortune 500 company deploying an IBM System z Mainframe, to expand their revenue portfolio from being advertising driven, to include service provision, whatever that might be.  Rightly or wrongly, if such a Social Media company is a user’s preferred portal for accessing a plethora of other company resources (E.g. Facebook Login), why wouldn’t this user want to fully process some other business transaction (E.g. Financial) via said platform?  However unlikely, maybe Blockchain can truly simplify and expedite Globalisation, for the benefit of users and businesses alike…

The IBM Mainframe: A Several Year Hardware Refresh Cycle?

Typically a new generation of IBM Mainframe server is released every three years or so, along with a number of function and performance upgrades.  In 2003, IBM released their Mainframe Charter that included a statement:

IBM lowered MSU values incorporated in the z990 microcode by approximately 10 percent, resulting in IBM software savings for IBM zSeries software products with MSU-based pricing.  These reduced MSUs do not indicate a change in machine performance. Superior performance and technology within the z990 has allowed IBM to provide improved software prices for key IBM zSeries operating system and middleware software products.

This terminology was named by some as the "Technology Dividend", where, put simply, when upgrading IBM Mainframe servers, users would benefit from a ~10%+ software price versus performance benefit.  However, the z10 server model was the last IBM Mainframe series that benefitted from this hardware CPU chip related performance benefit.  Subsequent IBM Mainframe models have compensated for this slowing of hardware performance increase with the AWLC and AEWLC pricing models.  Therefore, unless your business has an absolute need for the "latest and greatest" IBM Mainframe server hardware, the realm of possibility exists that your business can extend the useful and cost efficient lifetime of your IBM Mainframe asset beyond the typical three year period…

As we all know, with every IT platform, there is a strong correlation between server hardware and associated Operating System.  Arguably the IBM Mainframe server has the best compatibility attribute, where there are many server hardware and Operating System interoperability scenarios.  A recent Statement Of Direction (SOD) for z/OS states:

Going forward, IBM intends to make new z/OS and z/OSMF releases available approximately every two years. Such a schedule would be intended to provide you with sufficient time to plan for new releases and to leverage them for the most business value. In addition, beginning with z/OS Version 2, IBM plans to provide five years of z/OS support, with three years of optional, fee-based extended service (5+3) as part of the new release cadence. Beginning with z/OSMF Version 2, IBM also plans to provide five years of z/OSMF support. However, similar to z/OSMF Version 1, optional extended service is not planned to be available for z/OSMF Version 2.

In addition, in z/OS V2.1, IBM plans to further leverage enhancements in the current IBM mainframe servers and storage control units. z/OS V2.1 is planned to IPL only on System z9 and later servers. Also, z/OS Version 2 is planned to require 3990 Model 3 (3990-3), 3990 Model 6 (3990-6), and later storage control units.

In an attempt to simplify this scenario: in theory an IBM Mainframe customer could benefit from 5 years of z/OS Version 2 support, with an IBM z9 or newer server.  In addition, this support could be extended for a further 3 years, for an extended service fee.  Therefore, from a software support perspective, there are no tangible cost considerations for extending the asset life of an IBM Mainframe from a 3 to a 5 year cycle.

We must then consider the End of Marketing (EOM), also known as Withdrawal From Marketing (WDFM) and End Of Service (EOS) life cycles for the IBM Mainframe Server (Hardware).  Once again, when compared to other non-Mainframe platforms, the IBM Mainframe Server demonstrates an arguably unparalleled support cycle, where in the last 20 years or more, an average of 4.2 years sales and service, supplemented by an additional average of 7.1 years additional service applies.  Once again, as per z/OS Operating System support, the realm of possibility exists for extending the typical 3 year hardware refresh cycle to 5 years or longer.

When considering IBM Mainframe server hardware provision and support, there is one subtle difference that is not necessarily obvious, especially for those organizations that refresh their IBM Mainframe server every 3 years or so.  Clearly, and stating the obvious, only IBM or a highly certified IBM System z Business Partner can supply a latest generation IBM System z server or field upgrade option.  Conversely, there is a higher number of certified organizations that can provide IBM Mainframe hardware support services, allowing for a competitive and healthy 3rd party market for these services.  Additionally, these companies maintain inventories of equipment and have access to Microcode and Firmware upgrades that offer a possibility for performing field upgrades of EOM/WDFM servers.  One such company with longevity and a good track record of providing these value-added IBM Mainframe services from The United Kingdom is Blue Chip Customer Engineering.  As per any other competitive market place, arguably each and every IBM Mainframe user might consider obtaining a comparative hardware support services quotation for their business, whether they're using the current latest and greatest IBM System z server model, or a slightly older (E.g. 4-8+ Years) model.

In conclusion, there are always options for the cost savvy business to reduce costs.  In the IBM Mainframe environment, soft capping via standard IBM Defined Capacity (DC) or Group Capacity Limit (GCL) function is an option, intelligent soft capping via a 3rd party product such as zDynaCap might be an option, or leveraging from the latest Absolute Capping IBM feature also applies.  Moreover, exploring the 3rd party hardware support services market might prove to be a very simple and commercial exercise that could decrease IBM Mainframe TCO, while extending asset life accordingly.

The IBM Mainframe: A Closed & Difficult To Commission Platform?

All too often in life we are led to believe that some things are just too difficult to achieve.  Sometimes we believe them, but hopefully, more often than not, the human spirit wins and we try to achieve the allegedly impossible.  Some might call it reverse psychology; I learned this myself as a 17 year old, suffering a serious leg injury after being knocked off my motorcycle, where the surgeon said "you will never walk without a limp or play sport again".  Luckily, I thought differently and worked very hard to prove the surgeon wrong.  The surgeon wasn't wrong; he was just far more experienced than myself and this was his way of telling me to do the hard work…

In the last several decades or so, there have been many instances where people have stated that the IBM Mainframe is just too difficult to operate, too expensive to even consider and, in general, just the preserve of an aging workforce, which will inevitably become extinct, just like the dinosaurs.  Of course, such a viewpoint isn't necessarily balanced and pragmatic, and the IBM Mainframe community, suppliers and customers alike, have safeguarded the longevity and strategic importance of this platform.

Having worked with the IBM Mainframe platform for ~35 years, one of the most inspiring and can-do scenarios I have encountered was articulated at the recent SHARE Winter 2016 conference.  In a session named "I Just Bought an IBM z890 – Now What?", Connor Krukosky, a student from Cecil Community College, articulates how he commissioned a used z890 in his home environment for $340.60!  The several hundred dollars cost is impressive, but the most impressive aspect of this story is the can-do attitude of Connor and the community spirit of those who assisted him.

In a timeframe where very young students can learn programming with low cost platforms such as the Raspberry Pi and The BBC micro:bit, isn’t it great that we can see a young adult student find a seemingly obsolete Mainframe platform via an on-line auction site and then find a way of commissioning that platform once again?

As always, where there is a will there is a way, and if you look closely enough at this scenario, even if you don’t know anything about the IBM Mainframe platform, you might just learn that even an IBM Mainframe first released in 2004 can be considered as an “open system” and with a “can do attitude”, can be implemented with little or no experience.

As for Connor Krukosky, good luck young man and great job!  I hope you find a great job in your chosen field and if it’s working with the System z platform, we welcome you to our open and proud community.