Tailored Fit Pricing for IBM Z: A Viable R4HA Alternative?

In a previous blog entry, I discussed the pros and cons of IBM Z Solution Consumption License Charges (SCLC): A Viable R4HA Alternative.  On 14 May 2019, IBM announced Tailored Fit Pricing for IBM Z, introducing two comprehensive alternatives to the Rolling 4 Hour Average (R4HA) based pricing model, for both new and existing workloads, with a General Availability (GA) date of 21 June 2019.

To digress a little, for those of us in the Northern Hemisphere, June 21 is generally considered the Summer Solstice, although the date can vary by a day either side, namely June 20-22.  You can then further complicate things by confusing Midsummer’s Day with the Summer Solstice and Astronomical versus Meteorological seasons, but whatever, it’s a significant timeframe, with many traditions throughout Europe.  Once again, Midsummer’s Day can be any date between June 19 and June 24.  Having considered my previous review of SCLC and now the Tailored Fit Pricing announcement, I was reminded of a quotation from A Midsummer Night’s Dream by William Shakespeare, “so quick bright things come to confusion”…

The primary driver for Tailored Fit Pricing for IBM Z is to help mitigate unpredictable costs whilst continuing to deliver optimal business outcomes in the world of Digital Transformation & Hybrid Cloud.  Depending on the type of workload activity in your organisation, a tailored pricing model may be far more competitive when compared with the pay-as-you-go schemes that have been typical of many x86 based cloud implementations.  Combining technology with cost competitive commercial models delivered through Tailored Fit Pricing strongly challenges the mindset that IT growth must be done on a public cloud in order to make economic sense.  Put another way, this is the IBM Marketing stance to compete with the ever-growing presence of the 3 major Public Cloud providers, namely Amazon Web Services (AWS), Microsoft Azure and Google Cloud, totalling ~60% of Public Cloud customer spend.

In essence, a significant portion of the Tailored Fit Pricing for IBM Z announcement is a brand renaming activity, where the Container Pricing for IBM Z name changes to Tailored Fit Pricing for IBM Z.  The IBM Application Development and Test Solution and the IBM New Application Solution that were previously introduced under the Container Pricing for IBM Z name are now offered under the Tailored Fit Pricing for IBM Z name.  Tailored Fit Pricing for IBM Z also introduces two new pricing solutions for IBM Z software running on the z/OS platform.  The Enterprise Consumption and Enterprise Capacity Solutions are both tailored to your environment and offer flexible deployment options:

  • Enterprise Consumption Solution: a tailored usage-based pricing model where compute power is measured on a per MSU basis.  MSU consumption is aggregated hourly, providing a measurement system better aligned with actual system utilization when compared with the R4HA.  Software charges are based on total annual MSU usage, assisting users with seasonal workload pattern variations.  A total MSU used charging mechanism is designed to remove MSU capping, optimizing SLA and response time metrics accordingly (see the sketch after this list).
  • Enterprise Capacity Solution: a tailored full-capacity licensing model, offering the maximum level of cost predictability.  Charges are based on the overall size of the physical hardware environment.  Charges are calculated based on the estimated mix of workloads running, while providing the flexibility to vary actual usage across workloads. Charges include increased capacity for development and test environments and reduced pricing for all types of workload growth.  An overall size charging mechanism is designed to remove MSU capping, optimizing SLA and response time metrics accordingly.
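
As promised above, here is a minimal Python sketch, using entirely hypothetical hourly MSU figures, contrasting a peak R4HA value (which drives traditional MLC billing) with the hourly aggregated consumption total that underpins the Enterprise Consumption Solution.  It is a conceptual illustration of the arithmetic only, not the actual SCRT calculation.

```python
# Minimal sketch: contrast R4HA peak billing with consumption-based
# aggregation, using hypothetical hourly MSU measurements.

hourly_msu = [40, 45, 180, 175, 170, 60, 50, 45]  # illustrative hourly samples

# R4HA model: billing is driven by the peak rolling 4-hour average.
r4ha_values = [
    sum(hourly_msu[i:i + 4]) / 4
    for i in range(len(hourly_msu) - 3)
]
r4ha_peak = max(r4ha_values)

# Enterprise Consumption model: every MSU consumed counts, aggregated hourly,
# contributing to a total annual usage figure.
total_msu_consumed = sum(hourly_msu)

print(f"R4HA peak (drives traditional MLC billing): {r4ha_peak:.1f} MSU")
print(f"Total MSU consumed (drives Enterprise Consumption): {total_msu_consumed} MSU")
```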

The high-level benefits associated with the Enterprise Consumption and Enterprise Capacity solutions can be summarized as:

  • Licensing models that eradicate cost control capping activities, enabling clients to fully exploit the CPU capacity installed
  • Increased CPU capacity for Development and Test (DevTest) environments, enabling clients to dramatically increase DevTest activities, without cost consideration
  • Optimized and potentially lower pricing for all types of workload growth, without requiring additional IBM approvals, or additional tagging and tracking

Enterprise Solution License Charges (ESLC) are a new type of Monthly License Charge (MLC) pricing methodology for Enterprise Solutions, tailored for each individual and specific client environment and related requirements.  It was forever thus, whatever the pricing mechanism: the ubiquitous z/OS, CICS, Db2, MQ, IMS and WAS software products are the major considerations for MLC pricing mechanisms.  The key prerequisites for Tailored Fit Pricing for IBM Z are IBM z14 Models M01-M05 or z14 Model ZR1, running z/OS 2.2 or higher.

For new Mission Critical workloads and existing or new Development and Test (DevTest) workloads, Tailored Fit Pricing for IBM Z is clearly a great fit.  The restriction to z14 hardware is a little disappointing, where Solution Consumption License Charges (SCLC) included support for the z13 and z13s servers.  I’m guessing that IBM are relying upon a significant z14 field upgrade programme in the next few years, largely based upon the Pervasive Encryption (PE) functionality.  However, for those customers that have run the IBM Z platform for decades and might have invested in cost optimization activities, including but not limited to capping, the jump to these new Enterprise Solution License Charges (ESLC) might take a while…

We could review this isolated announcement to the nth degree, but I’m not sure how productive that might be.  For sure, there is always devil in the detail, but sometimes we need to consider the big picture…

As a baby boomer myself, I see my role as passing on my knowledge to the next generations, although still wanting and striving to learn each and every day.  At this time of year, when the weather is better and the roads drier, I drive my classic car a lot more and I enjoy the ability to tune the engine with my ears, hands, eyes and a strobe; getting my hands dirty!  I wonder whether the future of the IBM Z platform ecosystem is somewhat analogous to that of the combustion engine.  Several decades ago, electronics and Engine Management Systems became commonplace for combustion engines and now the ubiquitous laptop is plugged into the engine bay, to retrieve codes to diagnose and in theory repair faults.  For the consumer, arguably a good thing from a vehicle reliability viewpoint, but from a mechanical engineer viewpoint, have these folks become deskilled?  If you truly want your modern vehicle fixed, you will probably need a baby boomer to do this, one that doesn’t rely on a laptop, but their experience.  Although a sweeping generalisation, as there are always exceptions to any rule, the same applies to the IBM Z environment, where it was forever thus: compute power (MSU/MIPS) optimization relies upon a tune, tune, tune approach.

Whether R4HA or Full Capacity based, software cost charges will only be truly optimized if the system and ultimately the application code is tuned.  A potential downside of not paying close attention to MSU usage, especially when considering these Enterprise Solution License Charges, is an isolated activity to “fix” IBM Z software costs forevermore, based upon a high MSU baseline.  Just as combustion engine management systems simplify fault or diagnostic data collection, they don’t necessarily highlight that the vehicle owner left their cargo carrier on the vehicle roof, harming fuel efficiency.  A crude analogy for sure, but experience counts for a lot.  We have all probably encountered the Old Engineer & The Hammer story before and ultimately it’s incumbent upon us all to safeguard that we don’t enable a rapid “death of expertise”.  Once the skills are lost, they’re lost.  Whether iStrobe from Compuware, TurboTune from Critical Path Software Inc. or the myriad of other System Monitor options, engage the experienced engineer and safeguard MSU optimization.  At this point, deploy the latest IBM Z pricing mechanism, namely Tailored Fit Pricing for IBM Z, and you will have truly optimized software costs…

IBM Z Solution Consumption License Charges (SCLC): A Viable R4HA Alternative?

In the same timeframe as the recent IBM z14 and LinuxONE Enhanced Driver Maintenance (GA2) hardware announcements, there were modifications to the Container Pricing for IBM Z mechanism, namely Solution Consumption License Charges (SCLC) and the Application Development and Test Solution.  Neither of these new pricing models are dependent on the IBM z14 GA2 hardware announcement, but do require the latest IBM z13, IBM z13s, IBM z14 or IBM z14 ZR1 servers and z/OS V2.2 and upwards for collocated workloads and z/OS V2.1 and upwards for separate LPAR workloads.

For many years, IBM themselves have attempted to introduce new sub-capacity software pricing models to encourage new workloads to the IBM Z server and associated z/OS operating system.  Some iterations include z Systems New Application License Charges (zNALC), Integrated Workload Pricing (IWP) and z Systems Collocated Application Pricing (zCAP), naming but a few.  The latest iteration appears to be Container Pricing for IBM Z, announced in July 2017, with three options, namely the aforementioned Application Development and Test Solution, the New Application Solution and the Payments Pricing Solution.  This recent October 2018 announcement adapts the New Application Solution option, classifying it as the Solution Consumption License Charges (SCLC) mechanism.  For the purposes of this blog, we will concentrate on the SCLC mechanism, although the potential benefits of the Application Development and Test Solution for non-Production workloads should not be underestimated…

From a big picture viewpoint, z/OS, CICS, Db2, IMS and MQ are the most expensive IBM Z software products and of course, IBM Mainframe users have designed their environments to reduce software costs accordingly, initially with sub-capacity and then Workload Licence Charging (WLC) and the associated Rolling 4 Hour Average (R4HA).  Arguably CPU MSU management is a specialized capacity and performance management discipline in itself, with several 3rd party ISV options for optimized soft-capping (I.E. AutoSoftCapping, iCap, zDynaCap/Dynamic Capacity Intelligence).  IBM thinks that this MSU management discipline has thwarted new workloads being added to the IBM Z ecosystem, unless there was a mandatory requirement for CICS, Db2, IMS or MQ.  Hence this recent approach of adding new and qualified workloads, outside of the traditional R4HA mechanism.  These things take time and with a few tweaks and repairs, perhaps the realm of possibility exists that Solution Consumption License Charges (SCLC) are a viable and eminently usable option?

SCLC offers a new pricing metric when calculating MLC software costs for qualified Container Pricing workloads.  SCLC is based on actual MSU consumption, as opposed to the traditional R4HA WLC metric.  SCLC delivers a pure and consistent metered usage model, where the MSU resource used is charged at the same flat rate, regardless of hourly workload peaks, delivering pricing predictability.  Therefore, SCLC directly reflects the total workload cost, based upon actual consumption, on a predictable “pay for what you use” basis.  This is particularly beneficial for volatile workloads, which can significantly impact WLC costs associated with the R4HA.  There are two variations of SCLC for qualified and IBM verified New Applications (NewApp):

  • The SCLC pay-as-you-go option offers a low priced, per-MSU model for software programs within the NewApp Solution, with no minimum financial commitment.
  • The SCLC-committed MSU option offers a saving of 20% over the pay-as-you-go price points, with a monthly minimum commitment of 25,000 MSUs.

SCLC costs are calculated and charged per MSU on an hourly basis, aggregated over an entire (SCRT) month.  For example, if a NewApp solution utilized 50 MSU in hour #1, 100 MSU in hour #2 and 50 MSU in hour #3, the total chargeable MSU for the 3-hour period would be 200 MSU.  Hourly periods continue to be calculated this way over the entire month, providing a true, usage-based cost model.  We previously reviewed Container Pricing in a previous blog entry from August 2017.  At first glance, the opportunity for a predictable workload cost seems evident, but what about the monthly MSU commitment of 25,000 MSU?
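
As a simple illustration, the following Python sketch applies the hourly aggregation arithmetic described above, extending the hypothetical 3-hour example over a nominal month; the flat price per MSU is an invented figure, not an IBM price point.

```python
# Minimal sketch of the SCLC charging arithmetic: hourly MSU consumption is
# aggregated over the SCRT month and every MSU used is chargeable.

def sclc_chargeable_msu(hourly_msu):
    """Sum hourly MSU consumption; all MSU used is chargeable."""
    return sum(hourly_msu)

three_hours = [50, 100, 50]
print(sclc_chargeable_msu(three_hours))   # 200 MSU, as per the example above

# Extending the same pattern over a nominal 720-hour (30-day) month, with a
# hypothetical flat rate per MSU (not an IBM price point):
assumed_price_per_msu = 5.0
month_hourly_msu = three_hours * 240      # placeholder data for 720 hours
monthly_msu = sclc_chargeable_msu(month_hourly_msu)
print(f"{monthly_msu} MSU x {assumed_price_per_msu} = "
      f"{monthly_msu * assumed_price_per_msu:,.2f}")
```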

Let’s try and break this down at the simplest level, using the SCLC hourly MSU base metric.  In a fixed 24-hour day and an arbitrary 30-day month, there would be 720 single MSU hours.  To qualify for the 25,000 MSU commitment, the hourly workload would need to average ~35 MSU (~300 MIPS) in size.  For the medium and large sized business, generating a 35 MSU workload isn’t a consideration, but it probably is for the smaller IBM Mainframe user.  The monthly commitment also becomes somewhat of a challenge, as a calendar month is 28/29 days once per year, 30 days four times per year and 31 days seven times per year.  This doesn’t really impact the R4HA, but for a pay per MSU usage model, the number of MSU hours per month does matter.  One must draw one’s own conclusions, but it’s clearly easier to exceed the 25,000 MSU threshold in a 31-day month, when compared with a 30, 29 or 28 day month!  From a dispassionate viewpoint, I can’t see any reason why the 20% discount can’t be applied when the 25,000 MSU threshold is exceeded, without a financial commitment from the customer.  This would be a truly win-win situation for the customer and IBM, as the customer doesn’t have to concern themselves about exceeding the arbitrary 25,000 MSU threshold and IBM have delivered a usable and attractive pricing mechanism for the desired New Application workload.
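
The arithmetic behind those observations can be sanity checked with a few lines of Python, showing the average hourly workload needed to reach the 25,000 MSU commitment for each calendar month length:

```python
# Average hourly MSU workload required to reach the 25,000 MSU monthly
# commitment, by calendar month length.

COMMITMENT_MSU = 25_000

for days in (28, 29, 30, 31):
    hours = days * 24
    print(f"{days}-day month: {hours} MSU hours, "
          f"average of ~{COMMITMENT_MSU / hours:.1f} MSU per hour required")
```

The output confirms the ~35 MSU figure for a 30-day month, falling to ~33.6 MSU in a 31-day month and rising to ~37.2 MSU in a 28-day February.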

The definition of a New Application workload is forever thus, based upon a qualified and verified workload by IBM, assigned a Solution ID for SCRT classification purposes, integrating CICS, Db2, MQ, IMS or z/OS software.  Therefore existing workloads, potentially classified as legacy, will not qualify for this New Application status, but any application re-engineering activities should consider this lower price per MSU approach.  New technologies such as blockchain could easily transform a legacy application and benefit from New Application pricing, while the implementation of DevOps could easily transform non-Production workloads into benefiting from the Application Development and Test Solution Container Pricing mechanism.

In conclusion, MSU management is a very important discipline for any IBM Z user and any lower cost MSU that can be eliminated from the R4HA metric delivers improved TCO.  As always, the actual IBM Z Mainframe user themselves are ideally placed to interact and collaborate with IBM and perhaps tweak these Container Pricing models to make them eminently viable for all parties concerned, strengthening the IBM Z ecosystem and value proposition accordingly.

Are You Ready For z Systems Workload Pricing for Cloud (zWPC) for z/OS?

Recently IBM announced the z Systems Workload Pricing for Cloud (zWPC) for z/OS pricing mechanism, which can minimize the impact of new Public Cloud workload transactions on Sub-Capacity license charges.  Such benefits will be delivered where higher Public Cloud workload transaction volumes may cause a spike in machine utilization.  Of course, if this looks familiar and you have that feeling of déjà vu, this is a very similar mechanism to Mobile Workload Pricing (MWP)…

Put simply, zWPC applies to any organization that has implemented Sub-Capacity pricing via the basic AWLC or AEWLC pricing mechanisms, for the usual MLC software suspects, namely z/OS, CICS, DB2, IMS, MQ and WebSphere Application Server (WAS).  An eligible transaction is one classified as Public Cloud originated, connecting to a z/OS hosted transactional service and/or data source via a REST or SOAP web service.  Public Cloud workloads are defined as transactions processed by named Public Cloud applications, identified as originating from a recognized Public Cloud offering, including but not limited to Amazon Web Services (AWS), Microsoft Azure, IBM Bluemix, et al.

As per MWP, SCRT calculates the R4HA for Public Cloud transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore SCRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.  As per MWP, this will only be of benefit, if the Public Cloud originated transactions generate a spike in the current R4HA.
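
For clarity, here is a minimal Python sketch of that adjustment, using hypothetical per-LPAR, per-hour R4HA MSU figures; it illustrates the described arithmetic only and is no substitute for the actual SCRT processing.

```python
# Sketch of the zWPC adjustment: 60% of Public Cloud originated MSU is
# subtracted per LPAR per hour, aggregated across Sub-Capacity eligible
# LPARs, before the billable monthly peak is determined.

CLOUD_RELIEF = 0.60

# {hour: [(total_r4ha_msu, cloud_r4ha_msu), ...]} per eligible LPAR
hourly_lpar_msu = {
    1: [(300, 50), (200, 0)],
    2: [(450, 200), (210, 30)],   # Public Cloud transaction spike in this hour
    3: [(310, 40), (190, 10)],
}

# Subtract 60% of the Public Cloud MSU per LPAR, then aggregate across LPARs
adjusted_hourly = {
    hour: sum(total - CLOUD_RELIEF * cloud for total, cloud in lpars)
    for hour, lpars in hourly_lpar_msu.items()
}

billable_peak = max(adjusted_hourly.values())
unadjusted_peak = max(sum(total for total, _ in lpars)
                      for lpars in hourly_lpar_msu.values())
print(f"Unadjusted peak: {unadjusted_peak} MSU")
print(f"zWPC adjusted billable peak: {billable_peak:.0f} MSU")
```

As the sketch shows, the benefit only materialises when Public Cloud transactions contribute to the peak hour, echoing the point above.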

One of the major challenges for implementing MWP was identifying those transactions eligible for consideration.  Very quickly IBM identified this challenge and offered a Workload Manager (WLM) based solution, to simplify reporting for all concerned.  This WLM SPE (OA47042) introduced a new transaction level attribute in WLM classification, allowing for the identification of mobile transactions and associated processor consumption.  These Reporting Attributes were classified as NONE, MOBILE, CATEGORYA and CATEGORYB.  Obviously IBM made allowances for future workload classifications, hence it would seem Public Cloud will supplement Mobile transactions.

In a previous z/OS Workload Manager (WLM): Balancing Cost & Performance blog post, we considered the merits of WLM for optimizing z/OS software costs, while maintaining optimal performance.  One must draw one’s own conclusions, but there seemed to be a strong case for WLM reporting to be included in the z/OS MLC Cost Manager toolkit.  The introduction of zWPC, being analogous to MWP, where reporting can be simplified with supplied and supported WLM function, indicates that intelligent and proactive WLM reporting makes sense.  Certainly for 3rd party Soft-Capping solutions, the ability to identify MWP and zWPC eligible transactions in real-time, proactively implementing MSU optimization activities seems mandatory.

The Workload X-Ray (WLXR) solution from zIT Consulting delivers this WLM reporting function, seamlessly integrating with their zDynaCap and zPrice Manager MSU optimization solutions.  Of course, there is always the possibility to create your own bespoke reports to extract the relevant information from SMF records and subsystem diagnostic data, for input to the SCRT process.  However, such a home-grown process will only work on a monthly reporting basis and not integrate with any Soft-Capping MSU management, which will ultimately control z/OS MLC costs.

In conclusion, from a big picture viewpoint, in the last 2 years or so, IBM have introduced several new Sub-Capacity pricing mechanisms to help System z Mainframe users optimize z/OS MLC costs, namely Mobile Workload Pricing (MWP), Country Multiplex Pricing (CMP) and now z Systems Workload Pricing for Cloud (zWPC).  In theory, at least one of these new pricing mechanisms should deliver benefit to the committed System z user, deploying this server for strategic and Mission Critical workloads.  With the undoubted strategic importance associated with Analytics, Blockchain, Cloud, DevOps, Mobile, Social, et al, the landscape for System z workloads is rapidly evolving and potentially impacting those sacrosanct legacy Mission Critical workloads.  Seemingly the realm of possibility exists that Cloud and Mobile originated transactions will dominate access to System z Mainframe System Of Record (SOR) data repositories, which generates a requirement to optimize associated MLC costs accordingly.  Of course, for some System z users, such Cloud and Mobile access might not be on today’s to-do list, but inevitably it’s on the horizon, and so why not implement the instrumentation ability ASAP!

IMS: The First Commercial Database Management Subsystem

If we could put a man on the Moon, could we also create a computer program to track the millions of rocket parts it takes? In 1966, the National Aeronautics and Space Administration (NASA) contractor North American Aviation (AKA Rockwell International) asked IBM that question. In response, IBM launched the world’s first commercial database management system in 1968, called the Information Control System and Data Language/Interface (ICS/DL/I). In 1969, it was renamed Information Management System (IMS).

The IMS architecture has always comprised two functions.  Firstly, the database system, supporting a hierarchical, tree-like structure data model (AKA IMS/DB).  Secondly, the transaction processing software for handling complex, high-volume transactions, such as order entry, inventory management, payroll and claims processing, airline or hotel reservations, financial applications, and other transaction-oriented applications (AKA IMS/DC or IMS TM).

A unique feature of IMS is its queued system architecture, a process that receives all transactions as they arrive and holds them until they can be processed.  This allows for intelligent commercial application processing; for example, when an airline agent enters a transaction, the automated transaction manager takes care of updating IMS, so another ticket agent doesn’t sell the same seat.
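
The principle can be illustrated with a conceptual Python sketch, using a hypothetical seat-selling example; this captures the spirit of the queued model, not IMS internals.

```python
# Conceptual sketch only: a queued transaction manager, in the spirit of the
# IMS model described above.  Transactions are enqueued on arrival and
# processed serially, so two agents cannot sell the same seat.

from queue import Queue

seats = {"14A": None}           # seat -> passenger/agent (hypothetical data)
transactions = Queue()

transactions.put(("sell", "14A", "Agent1"))
transactions.put(("sell", "14A", "Agent2"))  # arrives while first is queued

while not transactions.empty():
    action, seat, agent = transactions.get()
    if action == "sell":
        if seats[seat] is None:
            seats[seat] = agent
            print(f"{agent}: seat {seat} sold")
        else:
            print(f"{agent}: seat {seat} already sold, transaction rejected")
```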

Some might say that the “business world relies on IMS” as 75+% of top Fortune 1000 companies use IMS to process more than 50 billion transactions a day, managing 15+ Million Gigabytes of mission critical business data.

From my own viewpoint, I have always enjoyed working with IMS and its arguably trail blazing functions, including but not limited to; Checkpoint Restart, Fast Path, Write Ahead Data Set (WADS), Batch Message Processing (BMP), Database Recovery Control (DBRC), et al. Whether System z or Distributed Platform product solutions or not, IMS has introduced many functions that have enhanced and optimized application processing throughout the decades. Is IMS still relevant today?

Industry analysts claim that IMS is the lowest cost transaction and hierarchical database management system for mission critical OLTP. With a TPS (Transactions Per Second) benchmark topping 117,000, IMS delivers industrial strength capabilities for managing and distributing data. IMS delivers mission critical levels of availability, performance, security and scalability. Expansive integration capabilities enable mobile and cloud applications based on IMS assets, enhanced analytics, new application development, SOA exploitation, and more.

In 2013 Gartner stated “by 2016, 40 percent of mobile application development projects will leverage cloud back-end services, causing development leaders to lose control of the pace and path of cloud adoption within their enterprises”. In this timeframe Gartner also stated “hybrid apps, which offer a balance between HTML5-based web apps and native apps, will be used in more than 50 percent of mobile apps by 2016”. Additionally, “While mobile becomes a requirement for everything, there is no single device that will meet all needs. By the end of 2013, mobile phones will overtake PCs as the most common web access device worldwide and by 2016, PC shipments will be less than 50 percent of combined PC and tablet shipments”.

As the original and ground breaking “System Of Record”, combined with industry leading OLTP performance, why wouldn’t a CIO in 2016 consider IMS as the foundation for big data and even cloud based mission critical business applications?  With easy and rapid application development via solutions such as RDz and mobile application integration via z/OS Connect, accessing IMS assets has never been easier.  Whatever the industry vertical, IMS has facilitated “rocket science and the man on the moon race” since day #1 in the late 1960’s, while leveraging from the unparalleled System z platform for the best scalability and performance attributes in a single footprint.  A modicum of lateral thinking should consider IMS as a Service, as well as IaaS and XaaS, for resolving today’s challenges of mobile applications generating an unparalleled number of transactions and associated big data requiring analytics to process rapidly evolving business requirements…

How to Connect Mobile Workloads to System z

Despite potential security concerns, primarily data encryption and multiple-factor authentication related, mobile transactions continue to increase their share of the market, accounting for up to half of online transactions.  Mobile payments now account for 30%+ of all global online transactions as of Q3 2015, continuing the upward trend experienced over the last several years.  Although there are global differences in adoption, all locations are experiencing rapid growth in mobile transaction usage.  Furthermore, as a general rule of thumb, seemingly ~66% of mobile transactions originate from a smartphone, a ~2:1 ratio when compared with tablet devices.  Therefore it seems highly probable that smartphone originated mobile transactions will become the de facto standard for online transactions…

For System z users, the majority of their TCO continues to be IBM MLC software related and seemingly the realm of possibility exists for retail operations to reduce IBM MLC TCO as a result of modernizing their business for this mobile transaction phenomenon. Recognizing the security, scalability and transaction ability of the System z platform, why wouldn’t it be the ideal platform for mobile transactions? Furthermore, deploying mobile workloads that can take advantage of modern low cost System z pricing metrics, namely System z Collocated Application Pricing (zCAP) and Mobile Workload Pricing (MWP) for z/OS, could substantially reduce IBM MLC TCO. In theory, existing legacy applications might become somewhat static in nature, as mobile transactions replace existing traditional transaction mechanisms. Therefore the cost per business transaction reduces, potentially significantly.

So, just how easy is it to connect mobile transactions to the System z platform?

z/OS Connect is a software function engineered to leverage from the Liberty Profile for z/OS, acting as an enabler of connectivity between the mobile environment (client) and the System z platform (host). Put another way, z/OS Connect exposes System z assets for mobile and cloud workloads. Quite simply z/OS Connect delivers JSON (JavaScript Object Notation) and REST (REpresentational State Transfer) functionality to leverage from existing z/OS subsystems (E.g. CICS, IMS, Batch, et al). These traditional System z transaction systems (E.g. CICS, IMS) often integrated with DB2, are repositories for vast amounts of business transactions and data. There is no incremental cost for z/OS Connect usage, being packaged with WebSphere Application Server (WAS), CICS and IMS software products.

z/OS Connect provides a discovery function allowing developers to query services that have been configured for a z/OS Connect instance. A single z/OS Connect REST call returns a list of all configured services and another REST call will return the details of a given service. Importantly, developers only need to know the REST API service and associated JSON requirements to achieve this mobile device to System z interoperability; they do not need to know the underlying CICS or IMS subsystem. z/OS Connect incorporates a data conversion function that maps JSON to the host (I.E. CICS or IMS) data format requirement. Put really simply, when a request is received, z/OS Connect converts the data for CICS or IMS subsystem processing and when a response is produced, z/OS Connect converts the data back to JSON.
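
As an illustration, the sketch below shows how a developer might exercise the discovery function from Python; the host, credentials and service name are hypothetical, and the /zosConnect/services paths should be verified against your specific z/OS Connect release.

```python
# Illustrative discovery calls: one REST request lists all configured
# services, another returns the details of a single service.

import requests

BASE = "https://mainframe.example.com:9443/zosConnect"  # hypothetical host
AUTH = ("mobiledev", "secret")                          # SAF-backed user, checked via the ESM

# List all services configured for this z/OS Connect instance
services = requests.get(f"{BASE}/services", auth=AUTH).json()
print(services)

# Retrieve the details of one configured service (hypothetical name)
detail = requests.get(f"{BASE}/services/getInventory", auth=AUTH).json()
print(detail)
```

Note that the developer works purely in REST and JSON terms; nothing in these calls betrays whether CICS or IMS sits behind the service.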

From a security viewpoint, standard or bespoke code can be used for control before and after a request is processed, known as an interceptor.  For Security, the calling user identity can be checked against defined roles, determining if they have authority to use z/OS Connect or the configured service.  On z/OS the security interface is SAF, supplemented by an External Security Manager (ESM), namely ACF2, RACF or Top Secret.  For Audit, request information can be logged via SMF for later analysis.  Information about each request is logged, including timestamp, bytes processed, response time and USERID.
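
A conceptual Python sketch of this interceptor pattern follows; it mimics the before (security) and after (audit) flow described above, but is in no way the actual z/OS Connect interceptor interface, and all names are invented.

```python
# Conceptual sketch only, not the z/OS Connect interceptor SPI: check
# authority before a request is processed, log audit data (timestamp,
# bytes, response time, USERID) afterwards.

import time

AUTHORIZED_ROLES = {"zosconnect_user"}      # hypothetical role set

def authorize(userid, roles):
    if not roles & AUTHORIZED_ROLES:
        raise PermissionError(f"{userid} not authorized for z/OS Connect")

def audit(userid, bytes_processed, started):
    # In a real system this record would be written via SMF for later analysis
    print(f"AUDIT {time.ctime(started)} user={userid} "
          f"bytes={bytes_processed} rt={time.time() - started:.3f}s")

def handle_request(userid, roles, payload):
    authorize(userid, roles)                # "before" interceptor
    started = time.time()
    response = payload.upper()              # stand-in for CICS/IMS processing
    audit(userid, len(payload), started)    # "after" interceptor
    return response

print(handle_request("MOBUSR1", {"zosconnect_user"}, '{"seat":"14A"}'))
```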

To summarize, z/OS Connect is designed to simplify the integration of mobile systems and z/OS assets. Delivering a consistent front-end interface for mobile systems via REST and JSON, z/OS Connect seamlessly integrates with WAS, CICS and IMS subsystems for data processing. In theory, a developer could code a mobile workload application, with no knowledge of the System z platform.

In conclusion, it seems we have to accept the adoption of the smartphone device for processing an ever increasing amount of online transactions. The realm of possibility exists that online transactions (click) will continue to displace traditional and legacy (brick) transactions. Therefore as businesses evolve to accommodate mobile transactions, they should strive to reduce their IBM MLC TCO accordingly, delivering JSON and REST applications that can leverage from optimal cost z/OS MLC software, primarily via the zCAP and MWP pricing mechanisms. z/OS Connect is one such option that simplifies the timely delivery of mobile workload applications.

System z MLC Pricing Increases: Look After The Pennies…

Recently IBM announced ~4% price increases in z/OS Monthly License Charges (MLC) for selected Operating System and Middleware software programs and associated features.  Specifically, price increases will apply to the VWLC, AWLC, EWLC, AEWLC, PSLC, FWLC and TWLC pricing metrics.  Notably, SDSF price increases will be ~20%, with Advanced Function Printing (AFP) product price increases of ~13-24%.  In a global economy where inflation rates for the USA and Western Europe are close to 0%, one must draw one’s own conclusions accordingly.  Let’s not forget that product version changes typically have an associated price increase.  From a contractual viewpoint, IBM only have to provide 90 days advance notice for such price changes; in this instance, IBM provided 150+ days advance notice.

Price increases are inevitable and as always, it’s better to be proactive as opposed to reactive to such changes. As always, the old proverbs always make good sense and in this instance, “look after the Pennies and the Pounds will look after themselves”! This periodic IBM price increase is inevitable, but is not the underlying issue for controlling System z software costs. For many years, since 1994 to be precise, when IBM introduced Parallel Sysplex License Charges (PSLC), the need for IBM Mainframe users to minimize MSU usage has been of high if not critical importance. Nothing has changed in this 20+ year period and even though IBM might have introduced Sub-Capacity and specialty engines to minimize chargeable MSU usage, has each and every System z user optimized their MSU usage? Ideally this would not be a rhetorical question, rather being a “Golden Rule”, where despite organic CPU capacity increases of ~10% per annum, a System z environment could maintain near static IBM MLC software costs.

I have written several blog entries and presented on this subject matter over the years.

The simple bottom line is that System z MLC software accounts for ~20-35% of the overall System z TCO, typically being the #1 expenditure item.  For that reason alone, it’s incumbent upon each and every System z user to safeguard they have the technical and commercial skills in place to manage this cost item, not as an afterthought, but inbuilt into each and every System z process, from application design, through to that often neglected afterthought, application tuning.

Many System z organizations might try to differentiate between a nuance of System and Application tuning, but such a “not my problem” type attitude is not acceptable and will be imposing a significant financial burden on each and every organization.

A dispassionate and pragmatic approach is required for optimizing System z CPU usage.  In this timeframe, let’s examine the ~20% SDSF price increase.  IBM will quite rightly state that in conjunction with their z/OS 2.2 release, there are significant SDSF product function advancements, including zIIP offload, REXX interoperability and increased information delivery.  However, are such function improvements over and above the norm, or expected Business As Usual (BAU) product improvements, which should be included in the Service & Support (S&S) or Monthly License Charges (MLC) paid for the software?

In October 2013 I wrote a blog entry; Mainframe ISV Software: Is Continuous Product Improvement Always Evident? The underlying message was that an ISV should deliver the best product they can, for each and every release, without necessarily increasing software costs. In this particular instance, the product was an SDSF equivalent, namely (E)JES, which many years ago delivered all of the function incorporated in SDSF for z/OS 2.2, but for a fraction of the cost…

As of 1 November 2015, IBM will start billing cycles for Country Multiplex Pricing (CMP), which requires the October 2015 version of SCRT, namely V23R10. A Multiplex is defined as a collection of all System z servers in one country, measured as one System z server for software sub-capacity reporting. Sub-Capacity program utilization peaks across the Multiplex will be measured, as opposed to separate peaks by System z servers. CMP also provides the flexibility to move and run workloads anywhere with the elimination of Sysplex aggregation pricing rules.

Migrating to CMP is focussed on CPU capacity growth and flexibility going forward.  Therefore System z users should not expect price reductions for their existing workloads upon CMP deployment.  Indeed there are CMP deployment considerations.  A CMP MSU baseline (base) needs to be established, where this MSU Base and associated MLC Base Factor is established for each sub-capacity MLC product and each applicable feature code.  These MSU and MLC bases represent the previous 3 month averages reported by SCRT before commencing CMP.  Quite simply, to gain the most from CMP, the System z user must safeguard that their R4HA for each and every MLC product is optimized before setting the CMP baseline, otherwise CMP related cost savings going forward are likely to be nil.
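
The baseline arithmetic itself is simple, as the hypothetical Python sketch below shows; the figures are invented, but they illustrate why an inflated R4HA in the 3 months before CMP commencement locks in an inflated MSU Base.

```python
# Minimal sketch of the CMP baseline arithmetic: the MSU Base per
# sub-capacity MLC product is the average of the previous 3 monthly
# SCRT-reported peaks.  All figures are hypothetical.

scrt_last_3_months = {        # product -> reported MSU peaks
    "z/OS": [1200, 1250, 1150],
    "CICS": [900, 950, 880],
    "DB2":  [700, 720, 690],
}

msu_base = {prod: sum(peaks) / len(peaks)
            for prod, peaks in scrt_last_3_months.items()}

for prod, base in msu_base.items():
    print(f"{prod}: CMP MSU Base = {base:.0f}")
# A poorly tuned R4HA in these 3 months inflates the base and erodes
# any CMP savings going forward.
```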

From a very high-level management viewpoint, we must observe that IBM are a commercial organization, and although IBM provide mechanisms for controlling cost going forward, only the System z user can optimize System z MLC cost for their organization. Arguably with CMP, Soft-Capping isn’t a consideration, it’s mandatory.

Put very simply, each and every System z user can safeguard that they look after the Pennies (Cents) and the Pounds (Euros, Dollars) will look after themselves by paying careful attention to System z MLC software costs. Setting a baseline of System z MLC costs is mandatory, whether for the first time, or to set a new baseline for CMP deployment. Maintaining or lowering this System z MLC cost baseline should or arguably must be the objective going forward, even when considering 10% organic CPU growth, each and every year. System z decision-makers and managers must commit to such an objective and safeguard the provision of adequately skilled personnel to optimize such a considerable TCO cost line item (I.E. MLC @ ~20-35% of System z TCO). In an ecosystem with technical resources including DBA, Systems Programmer, Capacity Planner, Application Personnel, Performance Tuning, et al, why wouldn’t there be a specialist Software Cost Manager?

Let’s consider how even an inexperienced System z user can maintain a baseline of System z MLC costs, even with organic CPU capacity growth of 10% per annum (see the sketch after this list):

  • System z Server Upgrade: Higher specification CPU chips or Technology Transition Offering (TTO) pricing metrics deliver 10%+ cost per MSU benefits.
  • System z Specialty Engines: Over time, more and more application workload can be offloaded to zIIP processors, with no sub-capacity MLC software charges.
  • System z Software Version Upgrades: Major subsystems such as CICS, DB2, IMS, MQSeries and WebSphere deliver opportunity to lower cost per MSU; safeguard such function exploitation.
  • Application Tuning: Whether SQL, COBOL, Java, et al, or the overall I/O subsystem, safeguard that latest programming techniques and I/O subsystem functions are exploited.
  • New Application Deployment: As and when possible, deploy new or convert existing workloads to benefit from the optimal MLC pricing metric; previously zNALC, nowadays zCAP.
  • Technical & Commercial Skills Currency: Safeguard personnel have the latest System z software pricing knowledge, ideally from an independent 3rd party such as Watson & Walker.
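
As promised, a simple compounding sketch in Python, using entirely hypothetical figures, shows that ~10% annual MSU growth offset by ~10% annual cost per MSU optimization keeps the MLC bill essentially flat (1.10 × 0.90 ≈ 0.99):

```python
# Hypothetical figures: 10% annual MSU growth offset by 10% annual
# cost per MSU optimization keeps the MLC bill essentially flat.

msu, price_per_msu = 1_000, 750.0        # year 0: 1,000 MSU at ~$750 per MSU
for year in range(1, 6):
    msu *= 1.10                          # organic CPU capacity growth
    price_per_msu *= 0.90                # TTO, zIIP offload, tuning, zCAP, et al
    print(f"Year {year}: {msu:,.0f} MSU @ ${price_per_msu:,.0f} per MSU"
          f" = ${msu * price_per_msu:,.0f} MLC cost")
```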

In conclusion, as householders we have the opportunity to optimize our cost expenditure, choosing and switching between various major cost items such as financial, utility and vehicle products.  As System z users, we don’t have that option, as only IBM provide System z servers and the associated base architecture, namely the most expensive MLC software products, z/OS, CICS, DB2, IMS and WebSphere/MQ.  However, just as we manage our domestic budgets, reducing power usage, optimizing vehicle TCO and getting more bang for our buck from various financial products, we can and must deliver this same due diligence for our System z MLC TCO.  With industry averages of ~$500-$1000 per MSU for z/OS MLC software and associated annual expenditure measured in many millions, why wouldn’t any System z user look to deliver 10%+ cost per MSU optimization, year-on-year for their organization?

Clearly the cost of doing nothing in this instance, is significant, measured in magnitudes of millions, each and every year. Hence for System z MLC TCO optimization, looking after the Pennies is more than worthwhile, while the associated benefit of the Pounds, Euros or Dollars looking after themselves is arguably priceless.

Are You Ready For z/OS Mobile Workload Pricing (MWP)?

Recently IBM announced Mobile Workload Pricing (MWP) for z/OS which can minimize the impact of mobile workloads on Sub-Capacity license charges, delivering optimized pricing for System z environments extending their workloads to incorporate mobile devices.

MWP only applies to Mainframe customers deploying a zEC12 or zBC12 in their enterprise, as per the AWLC or AEWLC (AKA Advanced/Entry Workload License Charges) metric; MWP is also extended if a zEC12 or zBC12 enterprise is deploying a z196 or z114 via the AWLC or AEWLC metric.

The primary consideration for MWP is determining how a Mainframe customer can comply with the tracking requirements for mobile workloads.  On the plus side, MWP does not require an isolation of mobile workload transactions in separate LPARs, using enhanced reporting for software pricing.  This is a major step forward when compared with Integrated Workload Pricing (IWP), which ideally requires large LPAR container structures, minimizing costs for WebSphere workloads, applying to the CICS, IMS and WebSphere MLC software products.  Conversely, MWP includes DB2 in the list of eligible software products for cost reduction.

If a Mainframe customer is eligible for MWP pricing, they will then need to utilize the Mobile Workload Reporting Tool (MWRT), which is analogous to the original Sub-Capacity Reporting Tool (SCRT).  This is an either/or situation: the Mainframe customer only submits MWRT reports to IBM if they’re MWP eligible, or the status quo remains, where non-MWP Mainframe customers continue to submit SCRT reports.

The Mainframe customer must track and report General Purpose (GP) CPU time for mobile transactions, reporting those values in a pre-defined format to IBM each month to benefit from MWP.  MWRT utilizes reported mobile transaction data to adjust the Rolling 4 Hour Average (R4HA) Sub-Capacity software eligible MSUs, with LPAR granularity.  Optimizing mobile transactions impact for peak LPAR MSU values delivers benefit when higher mobile transaction volumes generate MSU resource usage peaks (Workload Spikes).

MWRT calculates the R4HA for mobile transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The software program values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  Therefore MWRT determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.

Most committed zSeries Mainframe customers will be deploying CICS, DB2 and WebSphere software, while IT trends dictate that mobile device usage (I.E. Smartphone, Tablet, et al) is increasing.  Most z/OS applications that require such mobile access have therefore evolved accordingly over time.  Hence it seems to be one of those “No Brainer” type scenarios, where the Mainframe user should plan to benefit from MWP, either as they upgrade to the latest zSeries technology, namely zEC12 or zBC12, or immediately if already deploying a zEC12 or zBC12 server.

The only minor consideration is a requirement for the zEC12 or zBC12 customer to engage their local IBM account team, to determine what data they need to report on mobile transactions for MWP consideration.  This one-off task will deliver optimized WLC pricing forever more.

Of course IBM are encouraging customers to consider the Mainframe for new applications, driven by mobile transaction requirements.  Equally, there is no reason why longer term Mainframe customers can’t benefit from MWP, delivering reduced MLC costs, a major consideration of Mainframe TCO.

COBOL – A Viable Programming Language?

For the last twenty years or so I have encountered many scenarios where Mainframe users consider migration to a Distributed Systems (E.g. Wintel, UNIX/Linux, et al) platform, where more often than not the primary reasons seem to be “green screen” and/or “COBOL is a declining legacy language” based.

Going back to basics, COBOL is a Common Business Oriented Language, although the naysayers might say COBOL is a Completely Obsolete Business Oriented Language; we will perhaps try to be more dispassionate in this discussion…

Industry Analysts have stated that there are ~220 Billion lines of COBOL code and ~100,000 programmers and that COBOL applications process ~80% of business transactions daily, and that there are ~200 times more COBOL transactions processed daily, when compared with Google searches!  A lot of numbers and statistics, but seemingly COBOL is still widely used and accepted.  Even from a new development viewpoint, ~5 Billion lines of COBOL code per annum (~15% of Annual Global Development) is stated, suggesting that COBOL is not in any way obsolete or legacy, so why is COBOL perceived by some in a dubious manner?

Maybe because COBOL was introduced in 1959 and primarily it is deployed on the Mainframe, and so anything that is 50+ years old and has an association with the Mainframe just has to be dubious, doesn’t it?  Of course not, as this arguably “pioneering” or at least one of the first “widely deployed” programming languages allowed many global and significant businesses grow, in tandem with the IBM Mainframe platform, automating and streamlining business processes, increasing productivity and so on.  So depending on your viewpoint, COBOL was either in the right place at the right time, stimulating the Data Processing (DP) and Information Technology (IT) revolution, or COBOL just got lucky, it was “Hobson’s Choice”…

Although there have been several iterations of COBOL standards (I.E. COBOL-68, COBOL-74, COBOL-85), primarily associated with the American National Standards Institute (ANSI) and more latterly COBOL 2002 (ISO), a COBOL program that was written and compiled on an IBM Mainframe several decades ago will most likely still run on the latest generation IBM Mainframe.  Put another way, its backwards compatibility has been significant, and although there were some migration considerations associated with the Language Environment (LE), the original COBOL Application Development investment has generated a readily usable Return On Investment (ROI) over and over again.  How true is this for other programming languages and computing platforms?  For the avoidance of doubt, a COBOL program that was written for 16-bit can still run today on a 64-bit platform, and with a modicum of evolution, fully exploit the latest functionality and 64-bit performance, with minimal fuss.  Meanwhile, how many revolutionary or significant upgrades have been required for Commercial Off The Shelf (COTS) software and associated bespoke application development code, to upgrade non-Mainframe platforms from 16-32-64-bit?

So, is COBOL a viable programming language of the future?  One must draw one’s own conclusions, but we can look to recent functional enhancements and statements of direction from an IBM Mainframe viewpoint.

In recent years IBM have actually increased the number of COBOL R&D personnel by ~100%, while increasing allocated investment, commitment and interest accordingly.  This observation, more than any other, suggests that at least from an IBM Mainframe viewpoint, COBOL is an important function.

From a technical function viewpoint, the realm of possibility exists with COBOL, interacting with all 21st century programming and function techniques, dismissing the notion that COBOL can only be considered as a traditional/legacy option for CICS-Batch applications and associated “green screen” environments, for example:

  • Support for CICS integrated translator
  • Support for latest SQL data types in syntax via DB2
  • Support for Java interoperability via object-oriented COBOL syntax
  • Support access for WebSphere enterprise beans
  • Support for Java SDK
  • Support for XML high speed parsing and validation (UTF-8, UTF-16 & various EBCDIC codepages)

From a strategic statement of direction viewpoint, IBM have declared the following major notable activities:

  • Performance and resource utilization optimization, reducing TCO accordingly
  • Improved middleware (I.E. CICS, DB2, IMS, WebSphere) programmability and problem determination
  • Improved capabilities (E.g. XML, Java, et al) for modernizing & creating business critical applications
  • Improved programmer (E.g. Usability and Problem Determination) productivity
  • Source and load (I.E. recompile not required) compatibility (E.g. old programs can call new and vice versa)

Even for those occasions where the IBM Mainframe platform might be decommissioned, COBOL can still be processed on alternative platforms via code migration techniques such as Micro Focus, where such functions and services can be Cloud based.  However, once again, isn’t the IBM Mainframe the ultimate “Cloud” platform, which has arguably been the case “forever thus”?

One must draw one’s own conclusions as to why the Mainframe platform and/or COBOL applications are often considered for replacement via migration, when the Mainframe platform is both strategic and cost efficient.  As with any technology decision, there is no “one size fits all” solution, but perhaps a little education can go a long way, and at least the acceptance that seemingly “legacy” technologies are strategic and viable.