IFL – A Cost Efficient zSeries Platform?

In September 2000, IBM introduced the Integrated Facility for Linux (IFL) processor, a specialty engine for, and some might say dedicated to, running the Linux Operating System.  At the time of this announcement, companion software named S/390 Virtual Image Facility for Linux was introduced to assist in the rapid deployment of IFL configurations, especially for non-Mainframe personnel.  However, this product was quickly discontinued in favour of the standard z/VM Operating System, which is not difficult to learn and can accommodate hundreds if not thousands of zLinux images.

Today, the IFL is still a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by z/VM virtualization and the Linux operating system; it cannot run other IBM operating systems.  The competitively priced IFL processor is a CPU capacity enabler, exclusively for Linux workloads.  Linux deployment (E.g. SUSE & Red Hat) on IFLs can reduce expenses in the areas of operational effort, energy, floor space and especially software.

The IFL provides the following functions and benefits:

  • The IBM Enterprise Linux Server is a dedicated System z Linux server, comprised of only IFL processors
  • No additional IBM software charges for the traditional (E.g. z/OS, CICS, DB2, WebSphere, et al) environment
  • Performance improvement for Linux workloads with each successive generation of IFL and System z technology
  • Linux workload on the IFL does not result in increased IBM software charges for traditional System z operating systems and middleware
  • Same functionality as a General Purpose processor on a System z server
  • HiperSockets can be used for communication between Linux images, or Linux and other operating system images on the same System z system
  • z/VM virtualization and most IBM Linux middleware products, plus most vendor software products are priced per processor (core) according to the System z IBM International Program License Agreement (IPLA).  IPLA products have a one-time-charge (OTC) and an annual (optional) maintenance charge, called Subscription & Support
  • Supported by the current z/VM virtualization and IBM Wave for z/VM software versions
  • Always a full capacity processor, independent of the capacity of the other processors in the server
  • Orderable as a System z hardware feature. The number of orderable IFL features varies by the server model and configuration
  • Designed to operate asynchronously with other General Purpose processors
  • Managed by PR/SM in a logical partition with dedicated or shared processors.  The implementation of an IFL requires a Logical Partition (LPAR) definition, following the normal LPAR activation procedure; an LPAR defined with IFLs cannot also contain General Purpose processors

There will always be the debate as to which processor and associated server type (E.g. x86, POWER, SPARC) is the most cost efficient, but there is no doubt that the ability to accommodate hundreds if not thousands of zLinux instances in one environmentally (E.g. Power, Cooling, Floor Space, et al) friendly zServer footprint, with software pricing per core, is worthy of consideration.

Adoption of zLinux has been steady, especially in the emerging territories, where it’s not unusual for zSeries deployments to be totally zLinux (I.E. IBM Enterprise Linux Server) based.  Moreover, the majority of large and traditional IBM Mainframe users (I.E. z/OS) have installed at least one IFL, if only to evaluate the z/VM and zLinux offering.  Many have since deployed the IFL and associated zLinux solution for business requirements.

Therefore, if one of the major cost benefit features of the IFL is optimized software costs, can the IFL processor be considered for other workloads, originating from the traditional zSeries (I.E. z/OS) environments?

Proximal Systems Corporation (PSC) is a company with a solution that transparently offloads data processing from IBM Mainframes to Distributed Systems, with an objective of reducing software cost, while maintaining or improving performance.  The company name is derived from the concept of bringing disparate computing systems into close proximity, functionally speaking, providing totally seamless and transparent interoperability.  The result is a unified computing complex within which various tasks can be easily migrated between systems to their most cost efficient operating environment, while still being able to interoperate as if they were all hosted together on the same system.

The PSC Proxy Coupling Technology allows a CPU orientated task to be offloaded from one system to another by means of an associated proxy task, which has an identical interface to the task being offloaded, but delegates the majority of the processing to an offloaded task on another system.  The primary objectives of this function are the cost savings and/or performance improvements that might be delivered by migrating tasks to systems that are able to execute those tasks more efficiently.
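
To illustrate the concept, here is a minimal Python sketch of the proxy pattern being described; the class and transport names are illustrative assumptions, not PSC’s actual implementation:

```python
# Conceptual sketch of the Proxy Coupling idea: the proxy presents the same
# interface as the original task, but delegates the work to another system.

class SortTask:
    """The interface the host workload already expects (E.g. a sort step)."""
    def run(self, records):
        return sorted(records)

class ProxySortTask(SortTask):
    """Identical interface, but the CPU intensive work is delegated to an
    offload system (E.g. zLinux on an IFL, reached via HiperSockets)."""
    def __init__(self, offload_channel):
        self.offload_channel = offload_channel  # transport to the remote system

    def run(self, records):
        self.offload_channel.send(records)     # ship input to the offload system
        return self.offload_channel.receive()  # return results as if local

class LoopbackChannel:
    """Stand-in transport that 'offloads' to a local sort, for demo purposes."""
    def send(self, records):
        self._result = sorted(records)
    def receive(self):
        return self._result

proxy = ProxySortTask(LoopbackChannel())
assert proxy.run([3, 1, 2]) == [1, 2, 3]  # same interface, same answer
```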

The fact that the proxy task maintains the same interface as the application being replaced is crucial, as many past Mainframe migration projects have failed due to insurmountable interoperability problems between the Mainframe and Distributed Systems servers (E.g. Windows, Linux, UNIX, et al).  Proxy Coupling Technology offers a solution to this long-standing challenge.  In theory, this allows for the transparent offload of a traditional z/OS workload (E.g. Sort) from General Purpose (GP) processors to a less expensive (E.g. IFL) alternative…

In the first instance, the Proxy Coupling Technology offloads General Purpose CPU workload associated with the z/OS sort (I.E. CA Sort, DFSORT, Syncsort) function to another platform (E.g. IFL).  For IFL based implementations, HiperSockets are utilized to transfer data at memory speeds from the z/OS task to zLinux on the IFL, where the sort operation completes, while the resulting z/OS task and associated data are maintained, as per normal.  From an IFL viewpoint, the Ahlsort software performs the sort operation, being a sort solution that maintains compatibility with the majority of z/OS sort functions (I.E. Control Card Syntax).  Therefore, this is a transparent implementation, where the only consideration is how much CPU capacity is required for the offload function (E.g. IFL, x86).  The benefit is reduced z/OS MSU usage for the sort function, which can be quite significant, as most business data (E.g. Database Offloads, Customer Orientated, et al) is sorted on a daily if not more frequent basis.

Just as IBM introduced the zAAP on zIIP capability, which allowed some customers to more easily justify a specialty engine (I.E. zIIP) by combining workloads to exploit the full capability of that engine, in theory the same ethos applies to the Proxy Coupling Technology.  For the avoidance of doubt, workloads that can be processed on an IFL, such as z/OS sort tasks, can assist in delivering higher Return On Investment (ROI) levels for the IFL, for example:

  • Reduced z/OS WLC MSU usage (I.E. Sort function offload) and associated software cost savings
  • IFL processors run at Full Speed and do not add to traditional workload (I.E. z/OS) software costs
  • Utilize any spare IFL CPU resource, releasing General Purpose CPU resource for other work

In conclusion, the Proxy Coupling Technology offers a proposition similar to the IBM philosophy of reducing z/OS software costs via specialty engines.  To date, primarily only the zIIP and zAAP specialty engines were available to optimize CPU usage for z/OS workloads.  Offloading CPU cycles, and thus MSU workload, to the IFL makes sense, utilizing a cost efficient and indeed full power CPU engine; for cost reasons, the majority of z/OS customers don’t deploy the “highest” derivative of General Purpose CPU engine available to them.  On the face of it, the realm of possibility exists for other workloads to benefit from z/OS to IFL CPU offload, following sort, which seems to make sense as the first workload to utilize this solution.

Apple Style Meets IBM Substance

It was the early 1980s when IBM first announced the Personal Computer (PC), a major breakthrough for delivering affordable and practical computing into the home.  One of the primary features of this computing evolution was the “open architecture” of the PC, built from off-the-shelf and commodity components.  Of course, we all know that around this time, DOS became MS-DOS via Bill Gates and Microsoft, where the rest, as they say, is history!

At this time the IBM Mainframe (1964) had nearly 2 decades of longevity and was already proven as a scalable, secure and reliable platform.  So here we are, some 3 decades later, where Apple and IBM have announced a Global Partnership to Transform Enterprise Mobility.

Whatever your opinion of Apple technology, in the last decade or so they have undoubtedly delivered slick design and style for mobile devices, namely the smartphone and tablet.  Therefore, whether the Enterprise accepts the premise or not, Bring Your Own Device (BYOD) is inevitable, where employees expect to use their personal devices in the workplace.

IBM have continued to be a dominant force in the Enterprise market, whether with Mainframe technology or not, while establishing a credible presence in the Cloud market space.  As always the world of IT is constantly changing and even though IBM sold its PC business to Lenovo in 2004; some 10 years later, as part of the exclusive IBM MobileFirst for iOS agreement, IBM will sell iPhones and iPads with industry-specific solutions to business clients worldwide.

So what role if any will the IBM zSeries platform play in this Apple deal?  As always, the zSeries platform will deliver enterprise scalability and strength for Security, Database and Messaging integration, but beyond these features, I’m not so sure.  Of course, from a data presentation viewpoint, nothing changes, iOS integration and the ability to present Mainframe originated data remains forever thus for Apple and indeed all other mobile devices.  Similarly from a business transaction viewpoint, the zSeries platform participates in the delivery of mobile support, where from an IBM technology viewpoint, the Worklight solution is one example of an end-to-end integrated development studio software product.

Despite the obvious benefits for Apple, gaining access to the Enterprise via IBM technology and their customer base, and for IBM, delivering the market leading mobile technology into their customer base, what does this mean for the Enterprise?

Business as usual mostly, but Identity & Access Management (IAM) would appear to be a significant challenge.  Firstly, rightly or wrongly, most people don’t consider Apple software to have any security exposures, as the marketplace for iOS security solutions (E.g. Anti-Virus, Malware, zero day exploits, et al) is limited.  However, one might ponder why the Windows Operating System became such a target for the hacker.  Said hacker might be an opportunist, acting just because they can, or something more sinister, trying to gain government or business secrets.  So, if Apple smartphone and tablet devices become ubiquitous if not de facto in the Enterprise, how long will it be before security exposures for iOS and related apps become commonplace?

I’m open-minded about BYOD (or am I?).  My heart tells me, yes, let the workers use their own device in the workplace, but my head tells me, no way!  Generally for technology decisions, my head always wins.  In this instance, I don’t think my head has a chance; the overwhelming desire of company workers to use their own mobile device in the workplace, whether iOS, Android, Java ME, Windows Phone, Blackberry, et al, will win out.  If this is the case, this is perhaps where the maturity and reliability of the IBM zSeries Mainframe can assist.

Therefore, at least for Identity & Access Management (IAM), secure access via the zSeries server to the most valuable resource within an organization, the data itself, makes sense.  Whether this is via two factor or indeed multi-factor authentication remains to be seen.  However, I’m much more comfortable with an IAM solution that leverages a Mainframe External Security Manager (ESM), namely ACF2, RACF or TopSecret, as opposed to a universal log-in via a Social Media web site, such as Facebook.  Just because you can log into an Enterprise and arguably mission critical CRM application, such as Salesforce, via Facebook Authentication, doesn’t necessarily mean you should…

Are You Ready For z/OS Mobile Workload Pricing (MWP)?

Recently IBM announced Mobile Workload Pricing (MWP) for z/OS which can minimize the impact of mobile workloads on Sub-Capacity license charges, delivering optimized pricing for System z environments extending their workloads to incorporate mobile devices.

MWP only applies to Mainframe customers deploying a zEC12 or zBC12 in their enterprise, as per the AWLC or AEWLC (AKA Advanced/Entry Workload License Charges) metrics; MWP is also extended to z196 or z114 servers priced via AWLC or AEWLC, provided the enterprise also deploys a zEC12 or zBC12.

The primary consideration for MWP is determining how a Mainframe customer can comply with the tracking requirements for mobile workloads.  On the plus side, MWP does not require isolation of mobile workload transactions in separate LPARs, using enhanced reporting for software pricing instead.  This is a major step forward when compared with Integrated Workload Pricing (IWP), which ideally requires large LPAR container structures to minimize costs for WebSphere workloads, applying to the CICS, IMS and WebSphere MLC software products.  Conversely, MWP includes DB2 in the list of eligible software products for cost reduction.

If a Mainframe customer is eligible for MWP pricing they will then need to utilize the Mobile Workload Reporting Tool (MWRT), which is analogous to the original Sub-Capacity Reporting Tool (SCRT).  This is an either/or situation: the Mainframe customer only submits MWRT reports to IBM if they’re MWP eligible; otherwise the status quo remains, where non-MWP Mainframe customers continue to submit SCRT reports.

The Mainframe customer must track and report General Purpose (GP) CPU time for mobile transactions, reporting those values in a pre-defined format to IBM each month to benefit from MWP.  MWRT utilizes the reported mobile transaction data to adjust the Rolling 4 Hour Average (R4HA) Sub-Capacity software eligible MSUs, with LPAR granularity.  Optimizing the impact of mobile transactions on peak LPAR MSU values delivers benefit when higher mobile transaction volumes generate MSU resource usage peaks (Workload Spikes).

MWRT calculates the R4HA for mobile transaction GP MSU resource usage, subtracting 60% of those values from the traditional Sub-Capacity software eligible MSU metric, with LPAR granularity, for each and every reporting hour.  The adjusted values for the same hour are aggregated for all Sub-Capacity eligible LPARs, deriving an adjusted Sub-Capacity value for each reporting hour.  MWRT then determines the billable MSU peak for a given MLC software program on a CPC using the adjusted MSU values.
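
As a worked illustration of this arithmetic, the following Python sketch applies the 60% mobile reduction per LPAR for each hour, aggregates the adjusted values and derives the billable peak; the LPAR names and MSU figures are purely illustrative:

```python
# Sketch of the MWRT adjustment arithmetic described above; data is illustrative.

def adjusted_hourly_msu(total_r4ha, mobile_r4ha):
    """Subtract 60% of the mobile R4HA from the Sub-Capacity eligible MSU."""
    return max(total_r4ha - 0.6 * mobile_r4ha, 0.0)

# hour -> {LPAR: (total R4HA MSU, mobile R4HA MSU)} for LPARs running the product
hours = {
    "09:00": {"LPAR1": (500.0, 200.0), "LPAR2": (300.0, 50.0)},
    "10:00": {"LPAR1": (550.0, 300.0), "LPAR2": (310.0, 40.0)},
}

# Aggregate the adjusted values across all Sub-Capacity eligible LPARs per hour,
# then take the peak across hours as the billable MSU for the MLC product.
hourly_totals = {
    hour: sum(adjusted_hourly_msu(total, mobile) for total, mobile in lpars.values())
    for hour, lpars in hours.items()
}
print(max(hourly_totals.values()))  # 656.0 - the billable MSU peak
```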

Most committed zSeries Mainframe customers will be deploying CICS, DB2 and WebSphere software, while IT trends dictate that mobile device usage (E.g. Smartphone, Tablet, et al) is increasing, and most z/OS applications that require such mobile access have evolved accordingly over time.  Therefore it seems to be one of those “No Brainer” type scenarios, where the Mainframe user should plan to benefit from MWP, either as they upgrade to the latest zSeries technology, namely zEC12 or zBC12, or immediately if already deploying a zEC12 or zBC12 server.

The only minor consideration is a requirement for the zEC12 or zBC12 customer to engage their local IBM account team, to determine what data they need to report on mobile transactions for MWP consideration.  This one-off task will deliver optimized WLC pricing forever more.

Of course IBM are encouraging customers to consider the Mainframe for new applications, driven by mobile transaction requirements.  Equally, there is no reason why longer term Mainframe customers can’t benefit from MWP, reducing MLC costs, a major consideration of Mainframe TCO.

z/OS Soft Capping: Balancing Cost & Performance

Historically each and every LPAR was assigned a Relative Weight value, where a more meaningful description would be the initial processing weight. This relative weight value is used to determine which LPAR gains access to resources, where multiple LPARs are competing for the same resource. Being unit-less is one minor challenge of the relative weight value, meaning that it has no explicit CPU capacity or resource value. Typically installations would use a simple scale, most likely totalling 1000, and allocate weights accordingly (E.g. 600=60%, 300=30%, 100=10%, et al). Therefore during periods of resource contention, PR/SM would allocate resources to the requisite LPAR, based upon its relative weight.
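
A minimal sketch of that arithmetic, assuming a simple mapping of LPAR names to weights:

```python
# Relative weights are unit-less, so only the ratios matter; each LPAR's share
# of the shared physical CPs during contention is its weight over the total.

def weight_shares(weights):
    total = sum(weights.values())
    return {lpar: weight / total for lpar, weight in weights.items()}

print(weight_shares({"PROD": 600, "TEST": 300, "DEV": 100}))
# {'PROD': 0.6, 'TEST': 0.3, 'DEV': 0.1} - E.g. 600 of 1000 = 60%
```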

Using relative weight to classify all LPARs as equal, at least from a generic class viewpoint, does have some considerations; primarily differentiating between Production and Non-Production workloads. Restricting a workload to its relative weight share of resources is known as Hard Capping. This setting is typically used to restrict Non-Production (E.g. Test) environments to their allocated resource and is also useful for cost control (E.g. Outsourcers), knowing that the LPAR will never consume more than its allocated relative weight allowance.

Hard Capping behaviour changes dependent on the use of the HiperDispatch setting. When HiperDispatch is not active, capping is performed at the Logical CP level, where the goal is for each logical CP to receive its relative CP share, based on the relative weight setting. When HiperDispatch is active, vertical as opposed to horizontal CPU management applies; a Vertical High categorization dictates capping at 100% of the logical CP, whereas a Vertical Medium or Low setting allows for resource sharing based on a relative weight per CP basis.

The Intelligent Resource Director (IRD) function provides more advanced relative weight management, automating management of CPU resources and a subset of I/O resources. Workload Manager (WLM) manages physical CPU resource across z/OS images within an LPAR cluster based on service class goals. IRD is implemented as a collaboration between the WLM function and the PR/SM Logical Partitioning (LPAR) hypervisor:

  • Logical CP Management: dynamically allocating logical processors (E.g. Vary On-Line/Off-Line)
  • Relative Weight Management: dynamically redistributing CPU resource as per LPAR weights
  • CHPID Management: dynamically assigning logical channel paths between eligible LPARs

IRD optimizes resource usage, enabling WLM to deliver workload goals.

The use of relative weight in association with Hard Capping and/or IRD/WLM granularity has become somewhat limited for most Mainframe installations with the advent of Sub-Capacity pricing (I.E. MLC via SCRT/R4HA), primarily because there is no direct correlation to manage CPU resource at a meaningful level, namely the MSU (vis-à-vis CPU MIPS) metric.

Defined Capacity (DC) provides Sub-Capacity CEC pricing by allowing definition of LPAR capacity with a granularity of 1 MSU. In conjunction with the WLM function, the Defined Capacity of an LPAR dictates whether Soft Capping is invoked or not. At this juncture, we should consider how and when WLM measures CPU resource usage and if and when Soft Capping is activated and deactivated:

WLM is responsible for taking MSU utilization samples for each LPAR in 10-second intervals. Every 5 minutes, WLM documents the highest observed MSU sample value from the 10-second interval samples. This process always keeps track of the past 48 updates taken for each LPAR. When the 49th reading is taken, the 1st reading is deleted, and so on. These 48 values continually represent a total of 5 minutes * 48 readings = 240 minutes or the past 4 hours (I.E. R4HA). WLM stores the average of these 48 values in the WLM control block RCT.RCTLACS. Each time RMF (or BMC CMF equivalent) creates a Type 70 record, the SMF70LAC field represents the average of all 48 MSU values for the respective LPAR a particular Type 70 record represents. Hence, we have the “Rolling 4 Hour Average”. RMF gets the value populated in SMF70LAC from RCT.RCTLACS at the time the record is created.
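
The following minimal Python sketch models this sampling scheme; the control block and SMF field names in the comments come from the description above, while the class itself is purely illustrative:

```python
from collections import deque

class RollingFourHourAverage:
    """Every 5 minutes the highest 10-second MSU sample is recorded; the R4HA
    is the average of the last 48 such values (48 * 5 = 240 minutes)."""
    def __init__(self):
        self.values = deque(maxlen=48)  # the 49th reading pushes out the 1st

    def record_interval(self, ten_second_samples):
        """Document the highest observed MSU sample for a 5-minute interval."""
        self.values.append(max(ten_second_samples))

    def r4ha(self):
        """The average WLM stores in RCT.RCTLACS, surfaced via SMF70LAC."""
        return sum(self.values) / len(self.values) if self.values else 0.0

tracker = RollingFourHourAverage()
tracker.record_interval([45.0, 52.0, 48.0])  # one 5-minute interval's samples
print(tracker.r4ha())  # 52.0, until more intervals accumulate
```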

SCRT also uses the Type 70 field SMF70WLA to ensure that the values recorded in SMF70LAC do not exceed the maximum available MSU capacity assigned to an LPAR. If this ever happens (due to Soft Capping or otherwise) SCRT uses the value in SMF70WLA instead of SMF70LAC. Values in SMF70WLA represent the total capacity available to the LPAR.

We should also consider the two possibilities for MLC software payment (I.E. SCRT) based upon MSU resource usage. Quite simply, the MSU value passed for SCRT invoice consideration is the R4HA or the Defined Capacity, whichever is lower. Put another way, if the R4HA exceeds the Defined Capacity, Soft Capping applies to the LPAR.
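
Expressed as a sketch, assuming MSU values for a single LPAR:

```python
def billable_msu(r4ha, defined_capacity):
    """SCRT uses the lower of the R4HA and the Defined Capacity; an R4HA
    above the DC means the LPAR is being Soft Capped."""
    return min(r4ha, defined_capacity), r4ha > defined_capacity

print(billable_msu(450.0, 400.0))  # (400.0, True) - Soft Capping applies
```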

The primary disadvantage of Soft Capping is that the Defined Capacity setting is somewhat static; it is manually defined, perhaps adjusted several times a day for workloads with distinct characteristics (E.g. On-Line, Batch, et al), but dynamic DC management based upon inter-related LPAR behaviour is at best evolving. The primary considerations for Soft Capping are:

  • An LPAR can only be managed via Soft Capping or Hard Capping; not both
  • DC rules only apply to General Purpose CPs (Hard Capping for Specialty Engines is allowed)
  • An LPAR must be defined with shared CPs (dedicated CPs are not allowed)
  • All LPAR Sub-Capacity eligible products have the same MSU capacity (I.E. DC)

Soft Capping is relatively simple to implement and typically generates MLC software costs savings, with minimal impact.

Group Capacity Limit (GCL) provides an extension to the Defined Capacity (DC) Soft Capping function. GCL allows an MSU limit to be defined for the total usage of all LPARs in a group, with a granularity of 1 MSU. The primary considerations for GCL are:

  • Works with DC LPAR capacity settings
  • Target share does not exceed DC
  • Works with IRD
  • Multiple CEC groups are allowed; but an LPAR may only be defined to one group
  • An LPAR must be defined with shared CPs, with the WAIT COMPLETION = NO specification

It is possible to combine IRD weight management with the GCL function. Based on installation policy, IRD can modify the relative weight setting to redistribute capacity resource within an LPAR cluster.

However, IRD weight management is suspended when GCL is in effect, because LPAR resource entitlement within a capacity group can be (I.E. Pre zxC12) derived from the current weight. Hence the LPAR might be allocated an unacceptably low weight setting, generating a low GCL entitlement.

GCL also allows MSU to be shared between LPARs in a group, where one LPAR would be a donor and another a receiver. Therefore the customer classifies their LPARs accordingly and when a high-priority LPAR requires additional MSU resource, it will be allocated from a lower priority LPAR, if available. This provides a modicum of flexibility, but by definition, peak workloads are not predictable and typically require a significantly higher amount of MSU for a short time period. Typically this requirement will not be satisfied by the GCL function.

Soft Capping techniques, either at the individual (DC) or group (GCL) level deliver cost saving benefit, but a fine granularity of management is required to balance cost saving versus associated performance considerations. The primary challenges associated with Soft Capping are its interactions with workload characteristics and an inability to dynamically manage MSU allocation, in-line with the R4HA. Put another way, the R4HA is derived from 48*5 Minute samples, whereas DC and GCL settings are typically defined on an infrequent (E.g. Monthly or longer) basis.

As z/OS evolves, further in-built function is available to manage MSU capacity. zSeries Capacity Provisioning Manager (CPM) is designed to simplify the management of temporary capacity, defined capacity and group capacity. The scope of z/OS Capacity Provisioning is to address capacity requirements for relatively short term workload fluctuations for which On/Off Capacity on Demand or Soft Capping changes are applicable. CPM is not a replacement for the customer derived Capacity Management process. Capacity Provisioning should not be used for providing additional capacity to systems that have Hard Capping (initial capping or absolute capping) defined.

With the introduction of z/OS 2.1, CPM functionality incorporates Soft Capping support via the DC and GCL functions. CPM functions from a set of installation defined policies and parameters, where the CPM server receives three types of input:

  • Domain Configuration: defines the CPCs and z/OS systems to be managed
  • Policy: contains the information as to which work is eligible, under which conditions and during which timeframes, and the capacity increases for constrained workloads
  • Parameter: contains environment descriptors (E.g. UNIX Environment, Installation Options, et al)

From a customer viewpoint, policy definition allows them to define the provision of CPU resource:

  • Date & Time: When capacity provisioning is allowed
  • Workload: Which service class qualifies for provisioning?
  • CPU Resource: How much additional MSU capacity can be allocated?

CPM provides more function when compared with the Defined Capacity and Group Capacity Limit Soft Capping techniques, allowing time schedules to be defined, workloads to be categorized and MSU resource to be allocated in a dynamic and granular manner.

A modicum of complexity exists when considering the arguably most important factor for CPM policy definition, namely the Performance Index (PI):

  • Activation: PI of service class periods must exceed the activation threshold for a specified duration, before the work is considered as eligible.
  • Deactivation: PI of service class periods must fall below the deactivation threshold for a specified duration, before the work is considered as ineligible.
  • Null: If no workload condition is specified a scheduled activation/deactivation is performed; with full capacity as specified in the rule scope, unconditionally at the start and end times of the time condition.

For workload based provisioning it is a necessary condition that the current system Performance Index exceeds the specified customer policy PI metric. One must draw one’s own conclusions regarding PI criteria settings, but to date they’re largely based on arguably complex mathematical formulae, which is perhaps not practicable, especially from a simple management viewpoint.
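
A minimal sketch of the activation rule follows, where the threshold and duration values are illustrative policy parameters rather than product defaults:

```python
def provisioning_eligible(pi_history, threshold=1.8, duration=3):
    """Work becomes eligible once the service class PI has exceeded the
    activation threshold for the specified number of consecutive readings;
    deactivation would apply the same logic against a lower threshold."""
    recent = pi_history[-duration:]
    return len(recent) == duration and all(pi > threshold for pi in recent)

print(provisioning_eligible([1.2, 1.9, 2.1, 2.4]))  # True: last 3 readings > 1.8
```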

With the requisite hardware (I.E. zxC12+) and Operating System levels (I.E. z/OS 1.13+), CPM provides extra functionality for the customer to implement granular Soft Capping techniques to balance cost and performance. When compared with Defined Capacity and Group Capacity Limit techniques, CPM delivers increased granularity for managing capacity dynamically, based on customer derived policies, recognizing time slots, workloads and MSU resource increases accordingly.

From a big picture viewpoint, without doubt, we must recognize the fundamental role that WLM plays in Soft Capping. Quite simply, the 48*5 Minute MSU resource samples dictate whether a workload will be eligible for Soft Capping or not and from a cumulative viewpoint, these MSU samples dictate the R4HA metric. Based on this observation, efficient and functional Soft Capping must be workload based (I.E. WLM Service Class), be dynamic and operational on a 24*7 basis, because workload peaks are never predictable, while balancing MSU resource accordingly. Of course, simplicity of implementation and management, supplemented by meaningful reporting is mandatory.

Once again, observing the 48*5 Minute MSU resource samples from an R4HA viewpoint: if a workload was to increase MSU usage by an average of 50% for 1 hour (I.E. 12 samples), then decrease MSU usage by an average of 20% for 2.5 hours (I.E. 30 samples), from an average viewpoint the R4HA has remained static. Therefore an optimum Soft Capping technique needs to recognize WLM service class requirements, reacting in a timely manner, increasing and decreasing MSU usage, to safeguard workload performance for Time Critical workloads, while optimizing SCRT MLC cost.
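
A quick arithmetic check of this example, using an illustrative baseline of 100 MSU per sample:

```python
# +50% for 12 samples and -20% for 30 samples cancel out across the
# 48-sample window, because 12 * 0.5 == 30 * 0.2.

base = 100.0
samples = [base * 1.5] * 12 + [base * 0.8] * 30 + [base] * 6
print(sum(samples) / len(samples))  # 100.0 - the R4HA is unchanged
```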

zDynaCap delivers automated capacity balancing within CPCs, Capacity Groups or Groups of LPARs. Central to zDynaCap are the predefined balancing policies. Within these balancing policies, users define their MSU ranges of Groups and LPARs and also the priorities of the associated LPAR Workload. zDynaCap continually monitors overall usage and compares this to the available capacity and the user defined MSU balancing policies. For example, should a high priority workload on one LPAR not get enough capacity, while a low priority workload on another within the group gets too much capacity, available MSU capacity is distributed according to customer derived balancing policies. Only if there is no leftover capacity to be rescheduled within the defined Group, and if the high or medium priority workload will be slowed down, will zDynaCap add MSU.

With zDynaCap Capacity Balancing, available MSU capacity is balanced within LPAR groups, safeguarding that during peak time the mission critical workload is processed as per business expectations (E.g. SLA/KPI) for the lowest possible MLC cost.

In conclusion, given the significance of IBM MLC software (E.g. z/OS, CICS, DB2, IMS, WebSphere MQ, et al) costs, arguably every Mainframe environment should deploy a capping technique for cost optimization. Hard Capping might work for some, but in all likelihood, Soft Capping is the primary choice for most Mainframe environments. For sure, IBM have delivered several Soft Capping techniques, with varying levels of function and granularity, namely Defined Capacity, Group Capacity Limit (GCL) and the zSeries Capacity Provisioning Manager (CPM). It was forever thus; the ISV community exists because it specializes, architects and delivers specialized solutions, and zDynaCap is such a solution, recognizing the fundamental rules of IBM Mainframe Soft Capping, namely the underlying WLM and R4HA foundation.

zIIP Into The Future: Mainframe Specialty Engines Evolution

Sometimes we might lose sight that change can be evolutionary as opposed to revolutionary and this certainly applies to IBM Mainframe specialty engines, for example:

  • 1997: Internal Coupling Facility (ICF)
  • 2000: Integrated Facility for Linux (IFL)
  • 2004: System z Application Assist Processor (zAAP)
  • 2006: System z Integrated Information Processor (zIIP)

To assist with lower IBM software pricing, arguably the ICF offering became the de facto standard for a Mainframe user to be considered “actively coupled”, deploying two or more eligible IBM Mainframes, physically attached via coupling links to a common Coupling Facility (I.E. ICF).

The Integrated Facility for Linux (IFL) is a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by the z/VM virtualization software and the Linux operating system.  Most customers have at least dabbled with this technology, while some are using it extensively, primarily for distributed server consolidation.

Somehow the zAAP specialty engine has become the “black sheep” of the family, where the current zEC12 and zBC12 are planned to be the last System z servers to offer support for zAAP specialty engine processors.

The zAAP specialty engine is primarily targeted at web-based applications and SOA-based technologies, namely Java and XML.  As of z/OS V1.11, functionality was delivered enabling zAAP eligible workloads to run on zIIP engines, allowing both zIIP and zAAP-eligible workloads to be processed on the zIIP.  This capability was ideal for customers with insufficient zAAP or zIIP eligible workload to justify a specialty engine, whereas the combined eligible workloads increase the ROI metrics for zIIP deployment.

So for z/OS type workloads, we must “zIIP Into The Future”…

Sometimes we need to look at the big picture, where the IBM organization is comprised of many business units, including the Mainframe business unit.  The Mainframe business unit itself contains many groups, including, but not limited to, the Hardware and Software groups.

As we all know, z/OS software TCO is significant and so this translates into higher revenues for the IBM Mainframe software group; but what about the IBM Mainframe hardware group?  Perhaps the specialty engines, primarily in the form of the zIIP, will generate a revenue stream for this business unit.  Along with the introduction of the zBC12 & zEC12 servers, IBM increased the zIIP to General Purpose (CP) engine ratio to 2:1, meaning you can have up to 2 zIIP specialty engines for each associated CP engine.  Previously the maximum ratio allowed was 1:1 (Specialty:CP).
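
Expressed as a trivial sketch of the configuration rule:

```python
def max_ziips(cp_engines, ratio=2):
    """From zBC12/zEC12 onwards, up to 2 zIIPs may be ordered per CP engine
    (previously the maximum ratio was 1:1)."""
    return cp_engines * ratio

print(max_ziips(4))  # a 4-way CP configuration allows up to 8 zIIPs
```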

What workloads are zIIP eligible?  Over time and since 2006 the amount of workload that is zIIP eligible has increased, primarily due to software development and upgrade efforts of IBM and the 3rd party ISV community:

  • DB2 for z/OS exploits the zIIP capability for portions of eligible data serving, pureXML and utility workloads
  • Other 3rd party DBMS solutions, including ADABAS & IDMS offload workload to zIIP
  • Most Systems Management tools (E.g. OMEGAMON, MAINVIEW, RMF, SYSVIEW, et al)
  • z/OS XML System Services for eligible XML validating and non-validating workloads
  • Other z/OS functions including z/OS Communications Server, Global Mirror, CIM Server, et al

What are the benefits of deploying a zIIP specialty engine?

  • Lower acquisition and maintenance costs, when compared with general CP
  • zIIP engines run at full rated CP speed
  • Offload work (CPU) from General Purpose (CP) engines
  • No additional charge for Sub-Capacity eligible IBM software (I.E. WLC), as zIIP usage does not count towards the R4HA

So, one must draw one’s own conclusions, but seemingly the deployment of zIIP engines is a “no brainer”!

Hmmm, once again, evolution is a good thing; the zIIP engine has an 8 year history and its predecessor zAAP, a 10 year history.  This ~10 year period has allowed user experiences and IBM function developments to deliver a more stable and rounded offering and, as previously stated, a product for the IBM Mainframe Hardware group to focus upon.

From a customer viewpoint, zIIP deployment requires a Capacity Planning evolution, which should be reasonably straightforward.  The big difference is the CP to zIIP offload consideration and some of the lessons learned include:

  • Software costs – Multiple-Processors; CP to zIIP Offload Rate; zIIP utilization
  • Hardware costs – Installed Books (total MSU/MIPS capacity); Additional LPAR(s)
  • Peak CPU utilization – Safeguard that zIIP exploitation reduces peak CPU usage
  • CPU per Transaction – Slight increase in CPU (not necessarily elapsed time) as workload switches from CP to zIIP
  • zIIP utilization – Early experiences indicate ~50% zIIP engine busy is a good number

In conclusion, zIIP deployment has been gradual and evolutionary, but many factors indicate that zIIP is here to stay and is the future.  Seemingly from an IBM viewpoint, there is benefit for the Mainframe Hardware Group, in terms of the eradication of the zAAP engine, the increase in the CP:zIIP ratio to 2:1 and the associated customer benefits of Sub-Capacity software pricing.  From a customer viewpoint, ignoring these pointers might not be wise, as z/OS software costs are significant and CPU resource requirements keep increasing.  Adding extra zIIP CPU capacity reduces hardware and associated software costs and so this is the “no brainer” observation that can’t be ignored for much longer…

The IBM Mainframe – 50 Years & Counting

On 7 April 1964 IBM announced the System/360, which is now recognized as the first IBM Mainframe computer system.  IBM Board Chairman Thomas J. Watson Jr. called the event the most important product announcement in the company’s history.  At a press conference at the IBM Poughkeepsie facilities, Mr. Watson said:

“System/360 represents a sharp departure from concepts of the past in designing and building computers. It is the product of an international effort in IBM’s laboratories and plants and is the first time IBM has redesigned the basic internal architecture of its computers in a decade. The result will be more computer productivity at lower cost than ever before. This is the beginning of a new generation, not only of computers, but of their application in business, science and government.”

More than 100,000 businessmen in 165 American cities attended meetings that day at which System/360 was announced.  50 years later, I wonder whether there are 100,000 people that work with the IBM Mainframe in the USA, or maybe even globally…

During this 50 year evolution, the IBM Mainframe has seen opinion polarize, sometimes from the same person:

  • In March 1991, Stewart Alsop stated “I predict that the last mainframe will be unplugged on March 15, 1996.”
  • In February 2002, Stewart Alsop stated “It’s clear that corporate customers still like to have centrally controlled, very predictable, reliable computing systems, exactly the kind of systems that IBM specializes in.”

Obviously the IBM Mainframe server is still here and, just as in 1964, it has continued to evolve; in the early 1990s it became just another server on the distributed network, embracing router connectivity, POSIX compliance and so on…

As we all know, the IBM Mainframe has always evolved, continues to evolve and in theory, and often in real-life, can run any workload.

Let’s reprise some of the notable IBM Mainframe models and associated functions since April 1964:

Family Name             | Announced      | Notable Function Introduction
S/360                   | April 1964     | 24-bit addressing (32-bit architecture)
S/360                   | August 1965    | Virtual storage
S/360                   | January 1968   | High speed cache
S/370                   | June 1970      | Disk & printer support
S/370                   | August 1972    | Virtual storage & multi-processor support
S/370 XA                | June 1983      | Extended storage, 24-bit/31-bit addressing
S/390 ESA               | September 1990 | ESA & OS/390 operating systems
zSeries (zArchitecture) | October 2000   | z operating systems, 24/31/64-bit addressing supported concurrently
zSeries z9 EC           | July 2005      | zIIP specialty engine
zSeries z10 EC          | February 2008  | High capacity/performance (quad core CPU chip)
z196 (zEnterprise)      | July 2010     | 96-way core design & distributed systems integration (zBX)
zEC12                   | August 2012    | Integrated platform for cloud computing, integrated OLTP & data warehousing

It’s interesting to note that the purchase price of an IBM Mainframe is about the same when comparing 1964 to 2014, let’s say ~$100,000.  Of course, you can’t compare the feeds and speeds of these machines; they’re exponentially different.  However, just as the S/360 in 1964 played a pivotal part in shaping data processing for that decade, subsequent evolutions of the IBM Mainframe follow in that tradition, lowering the cost of IT and simplifying business management.

I’m sure a lot of us have enjoyed our time working with the IBM Mainframe server and long may that be the case, for future generations of IT professionals.

Mainframe Server Planning: Vendor Interaction

In the last few weeks I have encountered a couple of scenarios regarding Mainframe Server upgrades that have surprised me somewhat.  The first was at the annual UK GSE conference during November 2013, where one of the largest UK Mainframe customers stated “we had problems regarding the capacity sizing of the IBM Mainframe server installed and our vendor was not very helpful in resolving this challenge with us”.  The second was a European customer with 2 aging servers deployed, z9 BC, and they had asked their IBM Mainframe server vendor to provide an upgrade quotation.  The server vendor duly replied, providing a like-for-like upgrade quotation, 2 new zBC12 servers, which at first glance seemed to be a valid configuration.

The one thing in common for these 2 vastly different Mainframe customers, the first very large, the second quite small, is that inadvertently they didn’t necessarily engage their respective vendors with the best set of questions or indeed terms of reference; while the vendors might say “ask me no questions and I’ll tell you no lies”…

For the 2nd scenario, I was asked to quickly review the configuration provided.  My first observation was to consolidate both workloads on 1 server.  The customer confirmed there was no business reason to have 2 servers; it was historic, and there wasn’t even a SYSPLEX between the 2 z9 BC servers.  The historic reason for the 2 z9 BC servers was the number of General Purpose (GP) engines supported.  My second observation was that software licensing could be simplified and optimized with aggregated MSU and use of the AEWLC pricing model.  So within ~1 hour, the customer had identified significant potential to dramatically reduce costs.

We then suggested an analysis of their configuration with 2 software products, PerfTechPro for z/OS and zDynaCap.  They already had the SMF data, so using the simulation abilities of these products, the customer quickly confirmed they could consolidate their workloads onto 1 zBC12, deploy zIIP processors to offload ~15% CPU usage from GP, and control MSU allocation with zDynaCap, saving another ~10% of CPU.  For this customer, a small investment in software products reduced their server upgrade costs by ~€400,000 in year 1, with similar software savings, each and every year forever more.  Although they didn’t have the skills in-house from a Mainframe Capacity Planning and software licensing viewpoint, this customer did eventually ask the right questions, and the rest as they say is history!

No man or indeed Mainframe customer is an island, so don’t be afraid to ask questions of your vendors or business partners!

From a cost viewpoint, both long-term (TCO) and day 1 (TCA), the requirement to deploy the optimum Mainframe server configuration from a capacity viewpoint cannot be underestimated, both in terms of hardware costs, but more importantly, associated software costs.  It therefore follows that Mainframe Capacity Planning and Mainframe Software Licensing knowledge is imperative, but I’m not so sure there are that many Mainframe customers that have clearly defined job roles for such disciplines.

To generalize, always a dangerous thing, typically the larger Mainframe customer does have skilled and seasoned personnel for the Capacity Planning discipline, while the smaller Mainframe user might rely on a generic Systems Programmer or maybe even rely on their vendor to size their Mainframe servers.  From a Mainframe software licensing viewpoint, there seems to be no general rule-of-thumb, as sometimes the smaller customer has significant knowledge and experience, whereas the larger Mainframe customer might not.  Bottom Line: If the Mainframe customer doesn’t allocate the optimum capacity and associated software licensing metrics for their installation, problems will arise, probably for several years or more!

Are there any simple solutions or processes that can assist Mainframe customers?

The first and most simple observation is to engage your vendor and safeguard that they generate the final Mainframe server configuration that is used for Purchase Order activities.  For sure, the customer will have their capacity plan and perhaps a “draft” server configuration, but even in these instances, the vendor should QA this data, refining the bill of materials (E.g. Hardware) accordingly.  Therefore an iterative process occurs between customer and vendor, but the vendor is the one that confirms the agreed configuration is fit for purpose.  In the unlikely event there are challenges in the future, the customer can work with their vendor to find a solution, as opposed to the example stated above where the vendor left their customer somewhat isolated.

The second observation is to leverage the tools and processes that are available, both generally available and internal to vendor pre-sales personnel.  Seemingly everybody likes something for nothing and so the ability to deploy “free” tools will appeal to most.

For Mainframe Capacity Planning, in addition to the standard in-house processes, whether bespoke (E.g. SAS, MXG, MICS based) or a packaged product, there are other additional tools available, primarily from IBM:

zPCR (Processor Capacity Reference) is a generally available Windows PC based tool, designed to provide capacity planning insight for IBM System z processors running various z/OS, z/VM, z/VSE, Linux, zAware, and CFCC workload environments on partitioned hardware.  Capacity results are based on IBM’s most recently published LSPR data for z/OS.  Capacity is presented relative to a user-selected Reference-CPU, which may be assigned any capacity scaling-factor and metric.

zCP3000 (Performance Analysis and Capacity Planning) is an IBM internal tool, Windows PC based, designed for performance analysis and capacity planning simulations for IBM System z processors, running various SCP and workload environments.  It can also be used to graphically analyse logically partitioned processors and DASD configurations.  Input normally comes from the customer’s system logs via a separate tool (I.E. z/OS SMF via CP2KEXTR, VM Monitor via CP3KVMXT, VSE CPUMON via VSE2EDF).

zPSG (Processor Selection Guide) is an IBM internal tool, Windows PC based, designed to provide sizing approximations for IBM System z processors intended to host a new application, implemented using popular, commercially available software products (E.g. WebSphere, DB2, ODM, Linux Apache Server).

zSoftCap (Software Migration Capacity Planning Aid) is a generally available Windows PC based tool, designed to assess the effect on IBM System z processor capacity, when planning to upgrade to a more current operating system version and/or major subsystems versions (E.g. Batch, CICS, DB2, IMS, Web and System).  zSoftCap assumes that the hardware configuration remains constant while the software version or release changes.  The capacity implication of an upgrade for the software components can be assessed independently or in any combination.

zBNA (System z Batch Network Analysis) is a generally available Windows PC based tool, designed to understand the batch window, for example:

  • Perform “what if” analysis and estimate the CPU upgrade effect on batch window
  • Identify job time sequences based on a graphical view
  • Filter jobs by attributes like CPU time / intensity, job class, service class, et al
  • Review the resource consumption of all the batch jobs
  • Drill down to the individual steps to see the resource usage
  • Identify candidate jobs for running on different processors
  • Identify jobs with speed of engine concerns (top tasks %)

BWATOOL (Batch Workload Analysis Tool) is an IBM internal tool, Windows PC based, designed to analyse SMF type 30 and 70 data, producing a report showing how long batch jobs run on the currently installed processor.  Both CPU time and elapsed time are reported. Similar results can then be projected for any IBM System z processor model. Basic questions that can be answered by BWATOOL include:

  • What jobs are good candidates for running on any given processor?
  • How much would jobs benefit from running on a faster processor?
  • For jobs within a critical path (batch window), what overall change in elapsed time might occur with a new processor?

zMCAT (Migration Capacity Analysis Tool) is an IBM internal tool, Windows PC based, designed to compare the performance of production workloads before and after migration of the system image to a new processor, even if the number of engines on the processor has changed.  Workloads for which performance is to be analysed must be carefully chosen, because the power comparison may vary considerably due to differing use of system services, I/O rate, instruction mix, storage reference patterns, et al.  This is why customer experiences may differ from the internal throughput ratio (ITRR) expectations based on LSPR benchmark data.

zTPM (Tivoli Performance Modeler) is an IBM internal tool, Windows PC based, designed to let you build a model of a z/OS based IBM System z processor, and then run various “what if” scenarios.  zTPM uses simulation techniques to let you model the impact of changes on individual workload performance.  zTPM uses RMF or CMF reports as input.  Based on these reports, zTPM can create summary charts showing LPAR as well as workload utilization.  An automated Build function lets you build a model that represents the system for any reporting interval.  Once the model is built, you can make changes to see the impact on workload performance.  zTPM is also available as an IBM software product offering.

Therefore there are numerous tools available from IBM to assist their customers in determining optimum Mainframe server capacity requirements.  Some of these tools are generally available without engaging the IBM account team, but others are internal to IBM, and for that reason alone, Mainframe customers must engage their IBM Mainframe account team to participate in their capacity planning activities.  Additionally, as the only supplier of Mainframe servers, IBM have a wealth of knowledge and indeed a responsibility, and generally a willingness, to assist their customers deploy the right Mainframe server configuration from day 1.

As a customer, don’t be afraid to engage external 3rd parties to perform a sanity check of your thinking and activities, clearly including IBM, as they will be fulfilling your IBM Mainframe server order.  However, consider engaging other capacity/performance and software licensing specialists, as their experience incorporates many customers, as opposed to an insular view.  Moreover, such 3rd parties probably utilize their own software tools or products to assist in this most important of disciplines.

In conclusion, as always, the worst question is the one not asked, and for this most fundamental of processes, not collaborating with your vendor and the wider community might leave you as an individual exposed and isolated, and your company exposed to the consequences of an undersized or oversized Mainframe server configuration…

21st Century Mainframe Capacity Planning Requirements

With nearly 5 decades of longevity, the IBM Mainframe has changed beyond recognition in terms of CPU capacity and performance capability.  The Capacity Planning discipline for the IBM Mainframe server became more advanced and proactive in the early 1990s, perhaps coinciding with the introduction of Parallel Sysplex structures associated with the MVS/ESA operating system.  Therefore the requirement to measure and model the impact of workload movement between LPAR and CPC structures became important, if not mandatory.

The fundamental building-block for Mainframe CPU usage analysis is SMF Type 7n records (I.E. RMF or CMF), where this data was typically processed by MXG, MICS and maybe CIMS (acquired by IBM), generally using SAS for reporting purposes.  Other tools, including but not limited to, BEST/1 (acquired by BMC) and PERFMAN (acquired by ASG) also offered capacity planning and performance management solutions.  Therefore, for 20+ years the fundamental Mainframe CPU usage data and associated tools have remained largely the same.  However, maybe the IBM Mainframe server has changed, both in terms of underlying CPU chip technology and customer workload deployment…

I often hear capacity planners state something along the lines of “I can report on the past with 100% accuracy, but predicting the future might prove to be a little more difficult”!  Once again, going back to the early 1990’s, the IBM Mainframe had a typical if not generic workload profile deployment, namely On-Line Transaction Processing (E.g. CICS, IMS DC) and related Database Management Subsystems (E.g. DB2, IMS DB) with Batch Processing.  This somewhat limited workload profile simplified the Capacity Planning process, applying estimates of growth based on current usage.  However, when the Mainframe became more pervasive, taking on new workloads, how was the capacity planner supposed to estimate CPU requirements for their new business application workload?

IBM introduced the Large Systems Performance Reference (LSPR) methodology, designed to provide relative processor capacity data for IBM System/370, System/390 and z/Architecture processors.  All LSPR data is based on a set of measured benchmarks and analysis, covering a variety of System Control Program (SCP) and workload environments.  LSPR data is intended to be used to estimate the capacity expectation for a production workload when considering a move to a new processor.  Although LSPR data is provided on an “as is” basis, with no warranty, it at least provides the Mainframe Capacity Planner with some insight into their CPU sizing challenge.  For many years, LSPR provided the only other data source, as well as RMF (CMF) for Mainframe CPU sizing.  However, is there a more accurate data source, perhaps based on real-life customer data?

With the introduction of the IBM System z10 server (February 2008), a new function CPU MF (CPU Measurement Facility) was incorporated.  Let’s not forget, z10 is now an n-2 technology, having been superseded by the z196/z114 and the latest zBC12/zEC12 generation of servers.  So each and every committed Mainframe customer should be positioned to benefit from the CPU MF function.

CPU MF provides optional hardware assisted collections of information about logical CPU activity executed over a specified interval in selected Logical Partitions (LPARs).  The CPU MF counters function is intended to be run on a constant basis to collect long-term performance data (I.E. SMF Record 113), in a similar manner to how you collect other performance data.  Therefore this data source can be deployed to further refine the accuracy of Mainframe CPU capacity planning projections.  Let’s not forget:

“The primary on-going requirement for Mainframe Capacity Planning is to minimize any over or under capacity provision from forecast predictions, used for Mainframe server acquisition purposes”

Mainframe chip technology has also changed in complexity, especially with the latest iterations of CPU chips associated with the z10 server (E.g. POWER 6) onwards, incorporating many layers of cache memory.  Workload capacity performance will be quite sensitive to how deep into the memory hierarchy the processor must go to retrieve the workload’s instructions and data for execution.  Best performance occurs when the instructions and data are found in the cache(s) nearest the processor so that little time is spent waiting prior to execution; as instructions and data must be retrieved from farther out in the hierarchy, the processor spends more time waiting for their arrival.

As workloads are moved between processors with different memory hierarchy designs, performance will vary as the average time to retrieve instructions and data from within the memory hierarchy will vary.  Additionally, once on a processor this component will continue to vary significantly as the location of a workload’s instructions and data within the memory hierarchy is affected by many factors including; locality of reference, IO rate, competition from other resources (E.g. Applications, LPARs, et al), and so on…

The most performance sensitive area of the memory hierarchy is the activity to the memory nest, namely, the distribution of activity to the shared caches and memory.  IBM introduced new terminology, namely Relative Nest Intensity (RNI), indicating the level of activity to this part of the memory hierarchy.  Using data from CPU MF, the RNI of the workload running in an LPAR may be calculated.  The higher the RNI, the deeper into the memory hierarchy the processor must go to retrieve the instructions and data for that workload.
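
As a sketch only, the calculation has this general shape; the weighting coefficients differ for each processor generation and are published by IBM with the CPU MF documentation, so the values below are placeholders, not the real formula:

```python
def relative_nest_intensity(l3_pct, l4_local_pct, l4_remote_pct, memory_pct,
                            weights=(1.0, 2.0, 4.0, 8.0)):
    """Weighted measure of how deep into the memory nest a workload reaches:
    the deeper the source (L3 -> L4 local -> L4 remote -> memory), the higher
    the weight, hence the higher the RNI. Inputs are the percentages of
    sourcing from each level, derived from CPU MF (SMF 113) counters."""
    w3, w4l, w4r, wm = weights
    return (w3 * l3_pct + w4l * l4_local_pct +
            w4r * l4_remote_pct + wm * memory_pct) / 100.0

print(relative_nest_intensity(20.0, 5.0, 2.0, 1.0))  # placeholder inputs
```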

Therefore the Mainframe Capacity Planner does have various data sources available to forecast how an existing or new workload might perform on an upgraded processor (CPC), further refining their CPU capacity requirement forecast.  As always, the final stage in a Mainframe Capacity Planning process is to input the forecast data into the IBM Processor Capacity Reference (zPCR) tool, to determine the exact model and associated resource configuration options for their unique business workload mix.

To summarize, does your Mainframe Capacity Planning process incorporate all of these CPU sizing data sources, in an easy-to-use and cost efficient manner?

Founded by former IBM staffers and capacity planning and performance management industry veterans William Shelden, PhD, and William Hart, PerfTechPro is designed to deliver sophisticated, affordable, easy-to-use solutions for IT management professionals looking for fast, insightful help without high-cost, complex and time-consuming purchasing and licensing requirements.

PerfTechPro for z/OS is a Capacity Planning and Performance Measurement tool specifically designed for the cost conscious and savvy 21st Century data centre.  PerfTechPro for z/OS is the next evolution in Mainframe Capacity Planning tools, having been architected from ground zero using the latest techniques.  PerfTechPro for z/OS provides sophisticated capacity and performance management capabilities, affordable by any sized data centre:

  • Clean, intuitive, easy-to-use interface and graphical representations, for example:
    • Consolidated instance lists guide users to make informed selections
    • Descriptive dialog boxes detail your configuration
    • Anticipates, pre-loads data to speed retrieval, reporting and analysis
    • Automated data management
  • Forecasting and modelling
  • Non-proprietary database, enabling data use outside of PerfTechPro
  • Capable of automated collection, analysis and reporting of SMF 113 records produced by the IBM CPU Measurement Facility (CPU MF)
  • Supports measurement, management of zAAP & zIIP Specialty Engines
  • Automated analysis and management of all key capacity and performance metrics, for example:
    • GPP Utilization of All LPARs
    • MIPS Usage by CPU
    • DASD Response Times
    • Address Spaces Dispatched and Waiting 

PerfTechPro for z/OS also simplifies the data management process associated with Mainframe Capacity Planning.  Using a streamlined process on the z/OS host, PerfTechPro extracts and formats the data required from various SMF sources (E.g. SMF Type 7n, Type 113); delivering an optimized Performance Data Base (PDB) for use by the Windows based GUI.  This optimized file safeguards fast processing during the reporting and forecasting activities, while simplifying any data aggregation processes (E.g. Weekly, Monthly, et al).  Moreover, PerfTechPro allows this data to be stored in non-proprietary (E.g. Microsoft Access, SQL Server, MySQL, Oracle) and multiple database structures, as and if required.

PerfTechPro for z/OS is a simple-to-use and cost-efficient solution, allowing customers to quickly save time and money from their Capacity Planning and Performance Measurement solution.  Ultimately the bottom line objective for PerfTechPro for z/OS is to provide a best-of-breed solution for a very competitive cost. PerfTechPro for z/OS delivers business value by:

  • Ensuring enterprise zSeries Mainframe server resources are being used efficiently
  • Maximizing opportunities for cost-savings
  • Anticipating & responding to increased demand on resources
  • Reducing costs by exploiting periods of lower resource demand
  • Discerning underlying causes of performance and capacity issues
  • Eliminating time-consuming manual tracking, recording and analysis
  • Implementing disciplined management of valuable business resources

In conclusion, the Mainframe Capacity Planning process continues to evolve, forever striving to reduce any discrepancies in CPU requirements forecasting, which of course, have a high associated cost consideration.  Integrating CPU MF (SMF Type 113) must be a mandatory requirement, safeguarding that CPU Sizing, Forecasting, Modelling and Correlation Analysis activities are optimized.  Additionally, the actual process of Mainframe Capacity Planning is an activity that requires great skill and considerable associated responsibility.  A modern day solution such as PerfTechPro for z/OS is worthy of consideration, having been designed by a team with a heritage in delivering Mainframe Capacity Planning solutions, architecting function compatible with modern day functionality, while considering the latest technology zSeries CPU chip design considerations.