IBM System z PartnerWorld Solution Development Evolution

Currently there are in excess of 2,300 companies delivering solutions for IBM System z listed in the IBM Global Solutions Directory. Considering the number of global System z customers, currently estimated at ~4,000, this is quite a good ratio! It’s also evidence of the ecosystem’s significant ability to deliver innovation and support to this customer base. Perhaps we should consider how these System z solution delivery businesses develop and maintain their software, hardware and service offerings…

Obviously, to develop, support and enhance an IBM Mainframe software or hardware product, access to an IBM Mainframe is a mandatory requirement. In the 1980s, procuring an IBM Mainframe was an expensive undertaking, hence the number of IBM Mainframe IHV (Hardware) and ISV (Software) partners was limited. Therefore we should not overlook the evolution that has taken place in the last 25 years or so, delivering the significant, diverse, innovative and global System z ecosystem in place today.

In the early 1990s the IBM Advanced Workstations Systems Division (AWS) worked on delivering complete compatibility with existing IBM Mainframe operating systems and software, delivering this function in the S/390 Processor Card. Later iterations of this S/390 Processor Card offered plug compatibility with RISC and PC server architectures, packaged as R/390 and P/390 servers respectively. In essence these R/390 and P/390 server solutions delivered “A Mainframe In A Box”. Put another way, the entire IBM Mainframe infrastructure, including CPU, Memory, I/O Subsystem, Consoles, Disk, Tape, Networking Interfaces, et al, was contained within a single PC or RISC based server footprint. Some of the software modules we might be familiar with for delivering this functionality are AWSDISK, AWSPRINT and AWSTAPE, where the respective function is denoted by the module name.
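
As a point of interest, the AWSTAPE format lives on today, being commonly documented by the open source emulator community: each tape block is preceded by a small header describing block lengths and flags. The following minimal Java sketch is based on that commonly documented layout, which should be verified against your emulator’s documentation; it simply walks the block headers of an AWSTAPE file:

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Minimal sketch: walk the block headers of an AWSTAPE format file.
    // Assumed layout (per commonly documented AWSTAPE descriptions):
    // 6 byte header = current block length (2 bytes, little-endian),
    // previous block length (2 bytes, little-endian) and 2 flag bytes,
    // where bit 0x40 in the first flag byte denotes a tape mark.
    public class AwsTapeWalker {
        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                byte[] header = new byte[6];
                int block = 0;
                while (in.read(header) == 6) {
                    int currLen = (header[0] & 0xFF) | ((header[1] & 0xFF) << 8);
                    boolean tapeMark = (header[4] & 0x40) != 0;
                    System.out.printf("Block %d: %d bytes%s%n", ++block, currLen,
                        tapeMark ? " (tape mark)" : "");
                    in.skipBytes(currLen); // skip the data, we only inspect headers
                }
            }
        }
    }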

Therefore with the R/390 and P/390, subsequently followed by the S/390 Integrated Server and then the MP3000, low cost access to IBM Mainframe servers was possible. However, let’s not forget that in conjunction with hardware compatibility, low cost access to existing IBM Mainframe operating systems and software was also required. This software access was delivered by the Application Developers Controlled Distributions (ADCD), incorporating a package of the majority of IBM Operating System and supporting subsystem program products. Therefore, once a business proved its intention to develop a software or hardware solution for the IBM Mainframe, it gained very low cost access to said IBM Mainframe software. Without doubt, the innovation of the S/390 Processor Card and Application Developers Controlled Distributions (ADCD) resources has allowed the System z community to benefit from the related ecosystem in place today.

This IBM Mainframe emulation capability provided the opportunity for other 3rd party suppliers to deliver x86 servers that supported the IBM PartnerWorld for Developers (PWD) ADCD initiative; for example, FLEX-ES from Fundamental Software.

Currently, IBM deliver the System z Personal Development Tool (zPDT) for ADCD access, while many ISVs and IHVs now actually deploy an official IBM System z server, for example the zBC12, as the cost of Mainframe servers has reduced substantially in the last decade or so. Optionally, recognizing the virtualization capabilities of System z and higher speed network access, System z development can now be achieved remotely. The System z Remote Development Program (zRDP) for z/OS, z/VM and z/VSE provides qualified partners with remote access to generally available and supported operating systems and software products. Additionally, IBM has built a number of Innovation Centres globally (I.E. Africa & Middle East, Asia Pacific, Europe, Latin America, North America), facilitating the possibility for System z innovation with local resources.

An example of the diversity and innovation of the System z ecosystem is the SVA zHosting concept, allowing an IBM PartnerWorld for Developers (PWD) member and/or Independent Software Vendor (ISV) the ability to port existing or install new development environments into a local fully certified IBM System z Mainframe data centre, in this case, located in Germany.

In conclusion, as other IT technologies have evolved, IBM have provided a cost-efficient environment, encouraging and maintaining the IBM System z Mainframe ecosystem; firstly in the 1990s with full emulation on RISC and PC based servers, and latterly in the 21st Century with remote access. This low cost access to full System z capability safeguards that the System z ecosystem remains significant, current and diverse, while the realm of possibility for innovation exists.

Java: Is System z A Viable Server Platform?

As long ago as 1997, IBM integrated Java into their IBM Mainframe platform, in those days via the then flagship OS/390 Operating System. As with any new technology, perhaps the initial OS/390 Java integration offerings were not perfect, but nearly 20 years later, a lot has changed…

In 2000, IBM Java SDK 1.3.1 delivered z/OS and Linux on z support, quickly followed by 64-bit z/OS support in 2003 via SDK 1.4. In 2004 Java Virtual Machine (JVM) and JIT (Just-In-Time) compiler technology support was provided, while Java code has always exploited IBM specialty engines, initially via the zAAP and now via the zIIP, including the zAAP on zIIP capability. Put simply, IBM continues to invest aggressively in Java for System z, demonstrating a history of innovation and performance improvements, up to and including the latest z13 server.

So why should a 21st century business consider the System z platform for Java workloads?

Arguably the primary reason is a rapidly emerging requirement for the true 24*7*365 workload, which cannot accommodate a batch window; Java is ideally placed to serve both batch and OLTP workloads. Put another way, the need to process batch work has not gone away, whereas a requirement to process batch work concurrently with OLTP services has emerged. Of course, traditionally the typical System z enterprise might have two sets of IT staff for OLTP and batch workloads, typically in the IT Support and Application Management teams, whereas via Java and a workload centric approach, separate batch and OLTP support personnel are not necessarily required.

For the System z platform, Java support has always been incorporated into the core architectural building blocks, namely z/OS, CICS, DB2, IMS, WebSphere, Batch Runtime, et al. Therefore there are no functional reasons why new applications or indeed existing applications cannot be engineered using the pervasive Java programming language and deployed on the System z platform.
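
By way of example, the JZOS toolkit shipped with the IBM Java SDK for z/OS allows a Java batch application to process traditional MVS datasets via DD statements, much like a COBOL batch step would. A minimal sketch follows, assuming a DD named INPUT has been allocated in the job’s JCL; the DD name and record handling are illustrative only:

    import com.ibm.jzos.ZFile;

    // Minimal sketch: read fixed-length records from a DD-allocated dataset
    // using the JZOS ZFile API (IBM Java SDK for z/OS).
    public class BatchRecordReader {
        public static void main(String[] args) throws Exception {
            ZFile zfile = new ZFile("//DD:INPUT", "rb,type=record,noseek");
            try {
                byte[] record = new byte[zfile.getLrecl()];
                int count = 0;
                while (zfile.read(record) >= 0) {
                    count++; // process the record here, E.g. decode EBCDIC fields
                }
                System.out.println("Records processed: " + count);
            } finally {
                zfile.close();
            }
        }
    }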

Quite simply, Java is a critically important language for IBM System z. Java has become foundational for data serving and transaction serving, the traditional strengths of IBM System z. WebSphere applications written in Java and processing via System z benefit from a key advantage through co-location, delivering better response times, greater throughput and reduced system complexity when driving CICS, DB2 and IMS transactions.
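
This co-location advantage is tangible even at the connection level; a Java application executing on z/OS can use the IBM JDBC Type 2 driver to access a local DB2 subsystem via cross-memory services, avoiding a network hop altogether. A minimal sketch, where the DB2 location name DB2LOC is hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: JDBC Type 2 (cross-memory) connectivity to a
    // co-located DB2 for z/OS subsystem; no TCP/IP hop is involved.
    public class Type2Query {
        public static void main(String[] args) throws Exception {
            // A Type 2 URL names only the local DB2 location; DB2LOC is hypothetical
            try (Connection con = DriverManager.getConnection("jdbc:db2:DB2LOC");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }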

Java is also critical for enabling next generation workloads in the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) framework. Cloud and mobile applications can access z/OS data and transactions via z/OS Connect and other WebSphere solutions, all inherently Java based. Java on System z also provides a full set of cryptographic functions to implement secure solutions. A key strength of Java applications is the ability to immediately benefit from the latest hardware performance improvements via the Just-In-Time (JIT) compiler incorporated in the latest IBM Java SDK releases.
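
As a simple illustration of this access pattern, z/OS Connect exposes z/OS assets as REST/JSON services, consumable by any Java, cloud or mobile client using standard HTTP APIs. A minimal sketch follows, where the host name and service path are entirely hypothetical and a real deployment would add appropriate authentication:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: invoke a z/OS asset exposed as a REST/JSON service.
    // The endpoint below is hypothetical; a real deployment would also
    // supply authentication credentials.
    public class ZosConnectClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("https://zos.example.com:9443/zosConnect/services/accountInquiry");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // JSON payload from the z/OS service
                }
            }
        }
    }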

Let’s not forget, there are many other good reasons why Java might be considered as a viable application programming language:

  • Personnel Skills Availability: Java is typically ranked among the top 3 most widely used programming languages; therefore personnel availability is abundant and cost efficient.
  • Application Code Portability: Recognizing Java bytecode and associated JVM functionality, no matter what the platform (E.g. Wintel, x86 Linux, UNIX, z/OS, Linux on System z, et al), the Java application code should run unchanged.
  • Application Tooling Support: Application Development tools have evolved to the point of true platform independence, where Application Programmers just create their code; they don’t necessarily know, or sometimes care, where that code will execute. Let’s not forget the simplification of Java code for OLTP and batch workloads, reducing associated IT lifecycle support costs.
  • TCO Efficiencies: Simplified Application Development and deployment reduces associated cost, while reducing implementation time for mission-critical workloads. Java exploitation of the zAAP (zAAP on zIIP) safeguards low software costs and optimized processing times (I.E. specialty engines run at full speed, even on Sub-Capacity servers).

With the announcement of the zEC12 server, notable Java enhancements included:

  • Hardware Transactional Memory (HTM) – Better concurrency for multi-threaded applications (see the sketch after this list)
  • Run-Time Instrumentation (RI) – A new hardware facility designed for managed runtimes
  • 2 GB Page Frames – Improved performance targeting 64-bit heaps
  • Pageable 1 MB Large Pages (Flash Express) – Greater versatility in managing memory
  • New Software Hints/Directives – Data usage intent improves cache management; Branch pre-load improves branch prediction
  • New Trap Instructions – Reduce implicit bounds/null checks overhead
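
For context, HTM targets exactly the kind of contended critical section sketched below; with transactional execution, the JVM and JIT can attempt such synchronized regions speculatively in parallel, falling back to the lock only on a genuine data conflict. This minimal sketch illustrates the workload shape only, not the HTM mechanism itself:

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of a contended critical section: many threads updating
    // a shared map under one lock. Hardware Transactional Memory allows the
    // runtime to attempt such regions concurrently, falling back to the lock
    // only when two threads genuinely touch the same data.
    public class ContendedCounter {
        private final Map<String, Long> counts = new HashMap<>();

        public synchronized void increment(String key) {
            counts.merge(key, 1L, Long::sum);
        }

        public static void main(String[] args) throws InterruptedException {
            ContendedCounter c = new ContendedCounter();
            Thread[] workers = new Thread[8];
            for (int i = 0; i < workers.length; i++) {
                final String key = "key-" + i; // mostly disjoint keys: ideal for lock elision
                workers[i] = new Thread(() -> {
                    for (int n = 0; n < 100_000; n++) c.increment(key);
                });
                workers[i].start();
            }
            for (Thread t : workers) t.join();
            System.out.println(c.counts);
        }
    }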

In summary, System z users can expect up to a 60% throughput performance improvement for Java workloads, as measured with the zEC12 and the IBM Java 7 SR3 SDK.

IBM z13 and the IBM Java 8 SDK deliver improved Java performance, including the Single Instruction Multiple Data (SIMD) vector engine, Simultaneous Multi-Threading (SMT) and improved CP Assist for Cryptographic Function (CPACF), delivering up to a 2X improvement in throughput-per-core for security-enabled applications and up to a 50% improvement for other generic applications.

Other z13 Java functional and performance improvements include:

  • Secure Application Serving – Application serving with Secure Sockets Layer (SSL) will exploit the new Java 8 Clear Key CPACF and SIMD vector instructions for string manipulation; an additional 75% performance improvement for Java 8 on z13 with SMT versus Java 8 on zEC12.
  • Business Rules Processing – Business rules processing with Java 8 takes advantage of the SIMD vector instructions and SMT for zIIP specialty engines on z13 to achieve significant improvements in throughput-per-core; an additional 37% performance improvement from z13 SMT zIIPs with Java 8 versus Java 8 on zEC12.
  • Specific z/OS Java 8 Exploitation of z13 SIMD – Java 8 exploits the new z13 SIMD vector hardware instructions for Java libraries and functions, delivering improved performance, where specific idioms/operations were improved by between 2X and 60X. Performance benefits for real life Java applications will be dependent on how frequently these idioms/operations are used (see the sketch after this list).
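
It’s worth emphasizing that this SIMD exploitation is transparent; simple loops and library calls of the shape sketched below are candidates for JIT auto-vectorization on z13, with no source code change required. A minimal, illustrative example:

    // Minimal sketch: simple character array manipulation of the kind the
    // Java 8 JIT can auto-vectorize using z13 SIMD instructions. No source
    // change is required; the benefit arrives via the JIT compiler.
    public class SimdCandidate {
        static char[] toUpperAscii(char[] src) {
            char[] dst = new char[src.length];
            for (int i = 0; i < src.length; i++) {
                char ch = src[i];
                dst[i] = (ch >= 'a' && ch <= 'z') ? (char) (ch - 32) : ch;
            }
            return dst;
        }

        public static void main(String[] args) {
            char[] text = "system z simd illustration".toCharArray();
            System.out.println(new String(toUpperAscii(text)));
        }
    }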

In conclusion, the IBM commitment to Java on System z is clearly evident and the cost, performance and security proposition becomes compelling on the latest zEC12 and z13 Mainframe servers. The pervasive deployment of Java as a universal IT programming language dictates that programmer availability will never be an issue, while platform independence dictates that Java applications can be created and processed on any platform. Let’s not forget the strong single thread performance and I/O scalability of System z, significant differentiators when comparing Java performance on any IT platform.

Moreover, as always, perhaps the business dictates what platform is the most suitable for business applications. The evolution to a combined OLTP and batch workload for the 21st Century 24*7*365 mission critical business application ideally places Java as an eminently viable programming language. Therefore there is no requirement to reengineer any existing System z application, or to find an alternative platform for new business functions. As always, the System z Mainframe platform should never be overlooked…

CICS: The Best Enterprise Transaction Server & So Much More…

In one form or another, CICS has been available since the mid-1960s, just about as long as the IBM Mainframe server that recently celebrated its 50th anniversary, released in 1964.  From a deployment viewpoint, 90%+ of Fortune 500 companies deploy CICS, primarily for its robust and often unbeatable ability to deliver sub-second response times for numerous application transactions.  Whether a large or small IBM Mainframe user, CICS delivers an enterprise class solution for a myriad of business types and arguably at one time or another, nearly every committed IBM Mainframe installation has implemented CICS.

In the past few decades I have encountered many failed IBM Mainframe migration projects and more often than not, the primary reason for platform migration failure was the inability of the target platform to deliver consistent sub-second transaction response times for a plethora of mission critical business applications.  Similarly, it often follows that if a non-Mainframe environment has been heavily configured to handle the CICS transaction workload, it often fails with the subsequent batch processing, suffering from significantly elongated elapsed times.

CICS has its foundations as an enterprise class transaction server, capturing data input for subsequent storage and retrieval in Database Management Subsystems, but let’s not forget, CICS can do so much more…

Let’s not forget, in 2001 CICS Transaction Server (TS) 2.1 for z/OS introduced the foundation for web services support, and a capability for CICS transactions to be invoked via HTTP requests.  There have been numerous enhancements since, too many to mention, which have evolved CICS into a fully-rounded family of solutions, allowing for cradle to grave application design and delivery.  However, let’s just take some time to review what CICS TS Version 5 has delivered, and how this might benefit the 21st Century business.
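
To illustrate the HTTP invocation capability, consider a simple Java client driving a web-enabled CICS service; the host, port, path and payload below are hypothetical, and in a real region the path would map to CICS URIMAP/PIPELINE resource definitions:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: drive a web-enabled CICS transaction with an HTTP POST.
    // The endpoint and payload are hypothetical; in a real region the path
    // would be defined via CICS URIMAP/PIPELINE resources.
    public class CicsHttpClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://cics.example.com:3080/orders/create");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            byte[] payload = "{\"item\":\"ABC123\",\"quantity\":5}".getBytes("UTF-8");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(payload);
            }
            System.out.println("CICS service responded: HTTP " + conn.getResponseCode());
        }
    }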

Recognizing the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, CICS is integral to such business facing requirements, primarily from a Cloud interoperability viewpoint.

The CICS V5.2 Application Server has a capability to host multiple applications, and multiple versions of the same application, simultaneously, primarily due to the substantial increase in platform scalability.  Similarly, a heterogeneous code environment offers application development personnel a single environment to work seamlessly with Java and other legacy languages, such as COBOL, C/C++, PL/I, et al.

Combining this heterogeneous code environment with Cloud enablement allows for new application version deployment, without a requirement to disable or remove the previous version from Production processing.  Regardless of the underlying application source code base, end users can access an application without service interruption, as they transition to the new application version.  Similarly, user workloads can be seamlessly redirected to a previous working version of application code, should the new version exhibit errors.

The CICS V5.2 application server delivers a powerful hosting environment for all business applications, new or old.  Application Development teams can provision applications for design and testing within a “real life working” infrastructure, removing the application when testing is complete, or promoting said application ASAP, for mission critical business usage.  This delivers a significant business improvement in application availability, minimizing service downtime.  Therefore, applications stay as current and relevant as possible, reducing the risk of business service outages, delivering better and consistent end user experiences.

As per the IBM zSeries Mainframe heritage, a standard resilience feature of the CICS V5.2 Application Server is an inherent capability to perform, even in the event of a problem scenario.  Cloud enablement delivers a clustering capability, which handles both system and application level failure scenarios.  Seamless and timely problem resolution dictates less down time, delivering more business availability, instilling a high sense of confidence in end users and consumers alike.

Noting the Security aspect of the IBM CAMSS initiative and the ever present cybersecurity risk to us all, the CICS Application Server also delivers on this front.  Safeguarding that application enhancements can be brought online ASAP and securely, CICS V5.2 Application Server seamlessly integrates with various security software and languages, including the latest WebSphere Application Server (WAS) Liberty Profile, allowing for the portability of Java Enterprise Edition Web applications.  Enhanced security capabilities also include Java Naming and Directory Interface (JNDI), Bean Validation, JDBC Type 2 Data Sources and the Java Transaction API (JTA).  SSL support incorporated within the Liberty JVM server HTTP listener is extended to support key certificates stored in System Authorization Facility (SAF) key rings, delivering SSL server authentication.
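
For illustration, these are standard Java EE APIs; an application hosted in the Liberty JVM server can look up a container-managed data source via JNDI and coordinate updates via the JTA. A minimal sketch, where the JNDI name jdbc/AccountsDB and the ACCOUNTS table are hypothetical and would be defined in the Liberty server configuration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    // Minimal sketch: JNDI lookup of a container-managed DataSource plus a
    // JTA UserTransaction, as available to applications hosted in the
    // Liberty JVM server. The JNDI name jdbc/AccountsDB is hypothetical.
    public class LibertyTxSketch {
        public void debitAccount(String account, long pennies) throws Exception {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/AccountsDB");
            UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            tx.begin();
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "UPDATE ACCOUNTS SET BALANCE = BALANCE - ? WHERE ID = ?")) {
                ps.setLong(1, pennies);
                ps.setString(2, account);
                ps.executeUpdate();
                tx.commit();
            } catch (Exception e) {
                tx.rollback();
                throw e;
            }
        }
    }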

Like it or not, Cloud computing is a rapidly evolving technology, where the Cloud is integrating increasingly more applications and associated services, delivering cost savings and scalability accordingly.  Of course, for true enterprise class scalability and cost efficiency, arguably the IBM zSeries Mainframe is an ideal platform for Cloud technologies.  Therefore with some slightly modified thinking, organizations can deploy Cloud based solutions and benefit from application promotion benefits, especially with a technology such as CICS.

There is a great presentation named Five Compelling Reasons for Creating a CICS Cloud that provides robust working examples of how to increase application availability, with real-life application development scenarios.

In conclusion, CICS continues to evolve and not only is it the best Enterprise Class Transaction Server, the family of CICS products including its Application Server deliver a 21st Century Cloud computing compatible platform, for the most demanding of business requirements.  Whether considering the IBM defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative or the more traditional Reliability, Availability and Serviceability (RAS) attributes of the zSeries Mainframe server, the latest version of CICS facilitates:

  • Rapid Application Development: Agile methodologies for rapid development, irrespective of programming language (E.g. Java, COBOL, C/C++, PL/I, et al).
  • Seamless & Timely Application Deployment: More frequent application updates, minimizing downtime and associated cost, while leveraging from Cloud functionality, to deploy new applications, application enhancements or bug fixes, side-by-side with existing real-life Production workloads.

z13: A Digital Business Ready Solution?

As per the usual next generation zSeries Server release, IBM announced their latest evolution on 13 January 2015, namely the z13. IBM describe this platform as the most powerful and secure system ever built:

  • First system able to process 2.5 billion transactions per day, built for the mobile economy
  • Makes possible real-time encryption on all mobile transactions at scale
  • First mainframe system with embedded analytics providing real time transaction insights 17X faster than comparable competitive systems at a fraction of the cost

At first glance, feeds and speeds generally don’t enthuse the audience, but if we dig deeper and acknowledge other recent IBM developments incorporating Apple, Twitter and Data Analytics announcements, we perhaps can draw some better business-facing conclusions. IBM have a clearly defined Cloud, Analytics, Mobile, Social & Security (CAMSS) initiative, seemingly based upon the IDC 3rd platform defined as Social, Mobile, Analytics & Cloud (SMAC).

Industry analysts predict that by 2017, SMAC (CAMSS) expenditure will account for 25%+ of total enterprise software market revenue, doubling from ~12% in 2012. In simple terms, this new expenditure opportunity represents $100+ Billion in revenue. We can imagine that all major ISVs will be wanting their share of this market…

Whichever classification you choose, IBM CAMSS or IDC SMAC, IT infrastructures and associated investment currently are and certainly will be heavily influenced by this new world computing paradigm. Like it or not, an ability to perform a transaction anywhere (Mobile), keeping everything simple and networked (Social Media), real time prediction of future customer requirements (Analytics), all for an alleged fraction of the cost (Cloud), makes sense for the 21st Century business. Ignore this new technology evolution at your peril, as it will impact each and every area of the IT enterprise and associated resources, primarily software and supporting hardware.

Did you notice the difference between the IBM classification and IDC? IDC have not considered Security to be a factor worthy of inclusion in the SMAC acronym. In today’s world of cybersecurity, that might be somewhat of an oversight, but we must assume that IDC consider cybersecurity to be a consideration for all of the Analytics, Cloud, Mobile & Social aspects, which of course it is!

If we consider the relative merits of technology platforms from a security viewpoint, the IBM z13 delivers EAL5+ security certification, whereas other non-Mainframe platforms can only currently claim EAL4+ certification.

It is estimated that 55%+ of enterprise (mission critical) transactions are processed by the IBM Mainframe, but this is based on pre mobile workloads. It therefore makes commercial sense for IBM to safeguard that their flagship platform not only maintains the existing IBM Mainframe customer base, but also captures new and mobile centric workloads.

Having considered the business requirements for today’s IT business, let’s now classify the new features of the z13 platform:

  • Up to 40% more total system capacity compared to the zEC12.
  • Up to 10 terabytes (TB) of available Redundant Array of Independent Memory (RAIM) real memory per server.
  • Cryptographic performance improvements with new Crypto Express5S.
  • Economies of scale with simultaneous multithreading delivering more throughput for Linux and zIIP-eligible workloads.
  • Improved performance of complex mathematical models, perfect for analytics processing, with Single Instruction Multiple Data (SIMD).
  • IBM zAware cutting-edge pattern recognition analytics for fast insight into system health extended to Linux on z Systems.
  • A reduction in elapsed time for I/O-bound batch jobs with new FICON Express16S versus FICON Express8S.
  • Support for larger memory configurations planned to be supported on z/OS systems, which can be used to improve transaction response times, lower CPU costs, simplify capacity planning and ease deploying memory-intensive workloads. (The IBM z13 offers up to 10 TB memory.)
  • I/O service time improvement when writing data remotely using the new zHPF Extended Distance II.
  • Support for up to 256 coupling CHPIDs, which provides enhanced connectivity and scalability for a growing number of coupling channel types.
  • IBM Integrated Coupling Adapter (ICA SR), which offers greater short reach coupling connectivity than existing link technologies and enables greater overall coupling connectivity per IBM z13 than prior server generations.
  • Capability to extend z/OS workload management policies into the SAN fabric.
  • New rack-mounted Hardware Management Console (HMC), helping to save space in the data center.
  • Non-raised floor option, offering flexible possibilities for the data center.
  • Optional water cooling, providing the ability to cool systems with user-chilled water.
  • Optional high-voltage dc power, which can help IBM z Systems clients save on their power bills.
  • Optional top exit power and I/O cabling designed to provide increased flexibility.
  • New IBM z BladeCenter Extension (zBX) Model 004 in support of heterogeneous resources managed by IBM z Unified Resource Manager.

As we all know, Moore’s Law had to end sometime and this is true for System z CPU chips. The zEC12 CPU was often claimed to be the fastest commercial processor, with a 32nm core and a 5.5 GHz rating. The z13 chip runs a 22 nm core at 5 GHz, at first glance ~10% slower than the zEC12. However, the new z13 chip delivers a ~10% performance increase, due to advances in core design, with better branch prediction and pipelining in the core. Noteworthy is the slightly slower clock speed of the z13 chip, reducing heat output and probably signifying that ~5 GHz is the ceiling for CPU chips in the near future.

However, for the z13, the doubling of performance still applies for many other resources:

  • Cryptographic coprocessors performance (~2*)
  • Channel speed (~2*)
  • I/O bandwidth (~2*)
  • Memory/Cache performance (~2*)
  • Memory capacity (~3*)

Once again, classifying these technological advances in terms of mobile business, the z13 delivers real-time encryption of mobile transactions, protecting transaction data, delivering consistent response times for a quality customer experience. Overall, IBM claims the z13 delivers a potential for ~36% better response time, ~61% better throughput and ~17% lower cost per mobile transaction.

A major and subtle change introduced with the z13 is Simultaneous Multi-Threading (SMT). SMT allows 2 active instruction streams per core, each dynamically sharing the core’s execution resources. SMT will be available in IBM z13 for workloads running on the Integrated Facility for Linux (IFL) and the IBM z Integrated Information Processor (zIIP).

Each software Operating System/Hypervisor has the ability to intelligently drive SMT in a way that is best for its unique requirements. z/OS SMT management consistently drives the cores to high thread density, in an effort to reduce SMT variability and deliver repeatable performance across varying CPU utilization, thus providing more predictable SMT capacity. z/VM SMT management optimizes throughput by spreading a workload over the available cores until it demands the additional SMT capacity.

From a capacity planning and performance measurement viewpoint, just a slight note of caution: although the z13 CPU chip delivers increased CPU capacity, the raw speed is slower and there are considerations for SMT. A former IBM staffer, Bob Rogers, has written a great article on this SMT subject matter, which should be on your reading list!

In conclusion, the z13 announcement is another step forward for zSeries Mainframes. If you consider this announcement as just another next generation zSeries Mainframe announcement, you’re not treating your business or yourself with the respect they deserve. Instead, please consider this z13 announcement as an evolution from an enterprise solution delivery viewpoint. Primarily, consider the 21st century business keywords, in no particular order, of Analytics, Cloud, Mobile, Social & Security.

Big Data: Is the zSeries Mainframe A Viable Platform?

Noting that ~80% of global corporate data is still managed by IBM Mainframes, doesn’t it make sense that processing this mission critical data should remain local, whenever practicable and pragmatic?

Industry analysts estimate that 90%+ of existing IT budget expenditure is expended on the maintenance of existing applications and their supporting infrastructure. A significant factor is the siloed, duplicated and complex nature of these existing IT environments. Repeating this often unnecessary data duplication and processing for big data implementations will only exacerbate this significant TCO expenditure. Therefore it is of primary importance to consider big data from a strategic rather than a purely expedient tactical perspective. Put another way, if big data could be accessed and processed by the incumbent IBM Mainframe environment, why create another silo environment, requiring more servers, storage, software and associated maintenance expenditure?

It is estimated that each and every day another ~2.5 Exabytes (2.5 quintillion bytes) of data is created, meaning that ~90% of electronic data stored has been created in the last two years alone. This data comes from numerous sources, largely Internet and mobile telephony based, including social media sources, digital pictures and videos, financial transaction records and cell phone generated data, to name but a few.

Industry analysts estimate that only ~1% of global data is currently analysed, leaving massive scope for growth in this functional area, namely big data analytics. Obviously this scope dictates exponential and arguably uncontrolled growth in the deployment of big data analytics solutions, generating significant risk that big data projects will lack management oversight, spiralling out of control from a cost viewpoint.

It therefore follows that big data initiatives require careful and strategic planning, not only for short-term immediate requirements, but also for future big data projects that can already be perceived and forecasted. Moreover, there needs to be a strategic, scalable, cost efficient and secure infrastructure in place, managing the interrelationship and interdependencies between mission critical data stored on the IBM Mainframe and big data being created from Internet and mobile technologies.

Without such a diligent and structured management framework, IT infrastructure expenditure costs (TCO) will increase and efficiency will reduce, with the inevitable consequence of siloed environments, with duplication of resources, namely servers, software, storage, et al. As always, we must apply lessons learned from past experiences to avoid these inefficiencies.

Hadoop is seemingly the big data buzzword, being an open source software framework for storing and processing big data in a distributed environment on large clusters of commodity hardware. Ultimately Hadoop delivers two primary functions: massive data storage and fast, distributed parallel data processing.
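
For the record, the canonical Hadoop programming model is MapReduce; a minimal Java mapper for the ubiquitous word count example illustrates the style of distributed processing involved:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Minimal sketch: the mapper half of the canonical Hadoop word count.
    // Each cluster node runs this in parallel against its local data blocks;
    // a companion reducer would sum the emitted counts per word.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE); // emit (word, 1) for the reducer
            }
        }
    }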

In conclusion, the underlying question remains, can mission critical IBM Mainframe data be “coupled” with big data, typically originating from Internet and mobile platforms, to deliver an integrated single image view of customer and/or product data, for business benefit?

IBM offers an integrated solution, namely the zEnterprise Analytics System (I.E. 9700, 9710), comprising hardware (E.g. z196/zEC12 or z114/zBC12 Server plus DS8870 Disk) and software (E.g. Optimized z/OS software stack), combined with optional services. Primarily data analytics is delivered by the IBM DB2 Analytics Accelerator solution, incorporating Netezza 1000 product function, allowing for intelligent and rapid in-memory data analytics via the DB2 RDBMS. Therefore existing zSeries Mainframe customers can supplement their current IBM Mainframe infrastructure with the IBM DB2 Analytics Accelerator solution, while the realm of possibility exists for a zSeries Mainframe to be deployed for new workloads, via the zEnterprise Analytics System.
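
Notably, acceleration is transparent to the application; eligible DB2 queries are routed to the accelerator according to the CURRENT QUERY ACCELERATION special register, which can be set per connection. A minimal sketch, where the location and table names are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Minimal sketch: request accelerator routing for eligible queries via
    // the DB2 for z/OS CURRENT QUERY ACCELERATION special register.
    // The location (DB2LOC) and table (SALES_HISTORY) names are hypothetical.
    public class AcceleratedQuery {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:db2:DB2LOC");
                 Statement stmt = con.createStatement()) {
                // Eligible queries may now be offloaded; ineligible ones run in DB2
                stmt.execute("SET CURRENT QUERY ACCELERATION = ENABLE WITH FAILBACK");
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT REGION, SUM(SALES) FROM SALES_HISTORY GROUP BY REGION")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " " + rs.getBigDecimal(2));
                    }
                }
            }
        }
    }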

Resource and cost efficiencies are delivered by combining z/OS and Linux on zEnterprise solutions. Data transfer is reduced by keeping data analytics in the same environment as the mission critical source data (I.E. z/OS), using HiperSockets to process the data between the IBM z/OS and Linux on zEnterprise systems. Overall TCO efficiencies are delivered by optimizing lower cost Linux on zEnterprise systems resources, where for Sub Capacity z/OS customers, no software charges will be incurred for the associated CPU processing. This leverages existing zEnterprise infrastructure resources, including people and processes, to deploy and support expanding data analytics requirements.

zSeries Mainframe big data analytics solutions, whether via the packaged zEnterprise Analytics System or via the IBM DB2 Analytics Accelerator solution, deliver benefits including:

  • Optimized I/O Processing: Reducing the complexity and cost of data storage and associated processing by bringing data transformation and analytic processes to the data origin (I.E. zSeries Mainframe)
  • Enterprise Wide Data Availability: Safeguarding operational data accessibility to many users in a timely and cost efficient manner without impacting core business processes
  • Near Real Time Data Processing: Delivering near real time operational analytics with minimal latency and superior Quality of Service (QoS) attributes (I.E. RAS – Reliability, Availability, Serviceability)

Syncsort also provide their DMX-h ETL solution to integrate IBM Mainframe data with Hadoop technologies. Syncsort DMX-h ETL incorporates a library of Use Case Accelerators to implement common ETL tasks, including Mainframe data access, change data capture (CDC), joins, web log aggregations, et al. DMX-h implements a more traditional ETL approach, offloading big data batch workload from the Mainframe to Hadoop platforms, reducing Mainframe MIPS accordingly. Obviously ETL solutions have a long-term history, typically associated with Business Intelligence, Data Warehouse, et al. One must draw one’s own conclusions as to whether ETL solutions contribute to the complexity and cost of managing mission critical business data…

From a business viewpoint, big data analytics delivers benefits, including but not limited to:

  • Optimized & Faster Decision Making: Performing real time analysis of customer transaction and activity data, feedback (E.g. survey and experience) data, et al, can dramatically reduce customer attrition, maintaining existing customer loyalty, while applying these lessons learned for attracting new customers.
  • New Products & Services: Customer and associated market research have always provided valuable insight into driving innovation, but these traditional processes are time consuming and error prone. Rapidly analysing real life customer data from Internet and mobile sources delivers an opportunity to offer a new product and/or service, seemingly tailored to individual customer requirements.
  • Cost Reduction: Performed well, big data analytics can clearly deliver significant cost reduction for the business, reducing product/service development time, while retaining existing customers and attracting new customers. However, done badly, data analytics could be a significant drain on the IT expenditure budget.

As always, the zSeries Mainframe delivers an integrated, scalable, secure and cost efficient solution for big data initiatives, even Hadoop, typically perceived as a Distributed Systems solution. Without doubt, big data solutions will be implemented by each and every major global company in the short-term, while pragmatic and careful planning will reduce the associated IT implementation and administration cost. With a legacy of several decades or more delivering enterprise wide solutions, arguably seasoned IBM Mainframe personnel are ideally placed to participate in the design and delivery of big data analytics projects!

Revisiting The zSeries Mainframe Storage Hierarchy

Recommendation: The next time you perform a zSeries Mainframe server upgrade, consider adding Flash Express cards, for an extra 1.4-5.6 TB of memory speed storage. Similarly, the next time you perform a zSeries Mainframe DASD subsystem upgrade, consider adding as much SSD (flash memory) capability as you can afford and justify. Both upgrades will deliver significant performance and business benefits, arguably for minimal cost, when considered as a several year TCO investment.

Conceptually the zSeries Mainframe storage hierarchy has comprised the same layers for many decades, while performance and capacity attributes have dramatically increased over time. Although System/390 introduced the concept of Expanded Storage (I.E. Hiperspace, Data Space) in 1990 and there have been various implementations of SSD (E.g. StorageTek 4080), the ability to transparently implement significant capacity memory layers has only recently become possible.

Let’s not forget, the closer data is to that most precious and expensive of resources, namely CPU, the faster it will process. When revisiting the traditional storage hierarchy, we can now consider two new layers, namely Flash Express and Solid State Drive (SSD):

[Figure: zSeries Storage Hierarchy]

I have previously written about the Flash Express layer. Flash Express is a new memory layer within the zSeries Mainframe storage hierarchy, which can be considered as either a Solid State Drive (SSD) or Storage Class Memory (SCM) technology. Flash Express is integrated on PCI Express attached RAID 10 cards, packaged as mirrored card pairs, each pair delivering 1.4 TB of capacity. A maximum of 4 card pairs can be configured, delivering up to 5.6 TB of memory capacity, assigned to LPAR resources, just like main memory.

The simplest function to benefit from Flash Express memory would be SVC dump processing, substantially reducing dump capture time.

Flash Express can also be deployed to replace z/OS disk paging, substantially reducing the associated response time (I.E. ~5-20 μs vs. ~10 ms). The benefit for z/OS paging is not the replacement of memory paging, but replacing disk paging with Flash Express storage. Flash Express is suitable for workloads that can tolerate paging, but will not benefit workloads that cannot tolerate paging activity. The fundamental z/OS design for Flash Express memory will not completely remove virtual storage constraints created by a paging spike, although a modicum of scalability relief is expected due to the faster I/O associated with Flash Express memory.

In conjunction with Flash Express, there were advancements in the Real Storage Management (RSM) function, including pageable 1 MB Large Page support. Large Pages (1 MB) deliver increased performance by decreasing the number of Translation Lookaside Buffer (TLB) misses that an application incurs, reducing the time spent converting virtual addresses into physical addresses and reducing the real storage used to maintain DAT structures. The use of Large Pages typically delivers Internal Throughput Rate (ITR) performance benefits of ~1% for IMS, ~3% for DB2 and ~5% for Java workloads.

Although SSD (flash) storage might have been selectively deployed in the zSeries Mainframe Data Centre for the last 5 years or so, the ever increasing requirement for increased Quality of Service (QoS) in terms of data availability and ultra-fast transaction response times dictate the increased usage of SSD architectures. Entire DASD subsystems can be built upon SSD technologies, or more likely, hybrid subsystems, containing both SSD and traditional HDD technologies. This storage subsystem evolution allows organizations to gain significant competitive advantages, delivering new services for existing and more importantly, new customers alike.

Using SSD disk subsystems overcomes the limitations of traditional spinning hard disk drives. However, not every enterprise application needs this ultra-high performance; since flash storage still costs more than spinning drives for the same capacity, organizations must be mindful of expenditure and how much flash memory (SSD) they deploy; as always, flexibility is key.

Complete or hybrid SSD I/O subsystems deliver performance and economic advantages for your mission critical business environment:

  • Green Data Centre: ~25-60% energy reduction (flash memory vs. spinning disk)
  • Data Centre Space: ~20-40% smaller footprint (memory cards vs. Hard Disk Drives)
  • Optimal Performance: Consistent ~1-3 ms access (Hard Disk Drives @ ~10 ms)

The utopia is for a self-tuning disk subsystem, automatically redirecting I/O between SSD and HDD, based on file performance and overridden, as and when required, by storage policies. Whether EMC, HDS (HP OEM) or IBM, this self-tuning ability is evolving, while each disk vendor has their own implementation. However, whatever your choice of disk subsystem, the ability to incorporate SSD into your storage hierarchy, either full or partial is evident.

In conclusion, ~25 years ago, the zSeries Mainframe user benefitted from faster performance via System/390 Expanded Storage and disk subsystems with cache and DASD Fast Write memory buffers. The cost of such memory storage was a major consideration then, but with good I/O tuning disciplines, the savvy zSeries Mainframe user benefitted from these technology advancements. Flash Express and SSD offer the potential to deliver increased performance for a relatively low cost, and now is the time to embrace these technologies. Ignore the storage hierarchy at your peril; as I have previously documented, optimal I/O performance always delivers significant benefit.

IFL – A Cost Efficient zSeries Platform?

In September 2000, IBM introduced the Integrated Facility for Linux (IFL) processor, a specialty engine for, and some might say dedicated to, running the Linux Operating System.  At the time of this announcement, companion software named S/390 Virtual Image Facility for Linux was introduced to assist in the rapid deployment of IFL configurations, especially for non-Mainframe personnel.  However, this product was quickly discontinued, in favour of the standard z/VM Operating System, which is not difficult to learn and can accommodate hundreds if not thousands of zLinux images.

Today, the IFL is still a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by z/VM virtualization and the Linux operating system.  The IFL cannot run other IBM operating systems.  The competitively priced IFL processor is a CPU capacity enabler, exclusively for Linux workloads.  Linux deployment (I.E. SUSE & Red Hat) on IFL’s can reduce expenses in the areas of operational efforts, energy, floor space and especially software.

The IFL provides the following functions and benefits:

  • The IBM Enterprise Linux Server is a dedicated System z Linux server, comprised of only IFL processors
  • No additional IBM software charges for traditional (E.g. z/OS, CICS, DB2, WebSphere, et al) environment
  • Performance improvement for Linux workloads with each successive generation of IFL and System z technology
  • Linux workload on the IFL does not result in increased IBM software charges for traditional System z operating systems and middleware
  • Same functionality as a General Purpose processor on a System z server
  • HiperSockets can be used for communication between Linux images, or Linux and other operating system images on the same System z system
  • z/VM virtualization and most IBM Linux middleware products, plus most vendor software products are priced per processor (core) according to the System z IBM International Program License Agreement (IPLA).  IPLA products have a one-time-charge (OTC) and an annual (optional) maintenance charge, called Subscription & Support
  • Supported by the current z/VM virtualization and IBM Wave for z/VM software versions
  • Always a full capacity processor, independent of the capacity of the other processors in the server
  • Orderable as a System z hardware feature. The number of orderable IFL features varies by the server model and configuration
  • Designed to operate asynchronously with other General Purpose processors
  • Managed by PR/SM in a logical partition with dedicated or shared processors. The implementation of an IFL requires a Logical Partition (LPAR) definition, following the normal LPAR activation procedure, where an LPAR defined with an IFL cannot be shared with a general purpose processor.

There will always be the debate as to which processor and associated server type (E.g. x86, POWER, SPARC) is the most cost efficient, but there is no doubt that the ability to accommodate hundreds if not thousands of zLinux instances in one environmentally friendly (E.g. Power, Cooling, Floor Space, et al) zServer footprint, with software pricing per core, is worthy of consideration.

Adoption for zLinux has been steady and especially in the emerging territories where it’s not unusual for zSeries deployments to be totally zLinux (I.E. IBM Enterprise Linux Server) based.  Moreover, the majority of large and traditional IBM Mainframe users (I.E. z/OS) have installed at least one IFL, if only to evaluate the z/VM and zLinux offering.  Many have deployed the IFL and associated zLinux solution for business requirements.

Therefore, if one of the major cost benefit features of IFL is optimized software costs; can the IFL processor be considered for other workloads, originating from the traditional zSeries (I.E. z/OS) environments?

Proximal Systems Corporation (PSC) is a company with a solution that transparently offloads data processing from IBM Mainframes to Distributed Systems, with an objective of reducing software cost, while maintaining or improving performance.  The company name is derived from the concept of bringing disparate computing systems into close proximity, functionally speaking, providing totally seamless and transparent interoperability.  The result is a unified computing complex within which various tasks can be easily migrated between systems to their most cost efficient operating environment, while still being able to interoperate as if they were all hosted together on the same system.

The PSC Proxy Coupling Technology allows for a CPU orientated task to be offloaded from one system to another by means of an associated proxy task, which has an identical interface to the task being offloaded, but delegates the majority of the processing to an offloaded task on another system.  The primary objectives of this function are the cost savings and/or performance improvements that might be delivered by migrating tasks to systems that are able to execute those tasks more efficiently.

The fact that the proxy task maintains the same interface as the application being replaced is crucial; as many past Mainframe migration projects have failed due to insurmountable interoperability problems between the Mainframe and Distributed Systems servers (I.E. Windows, Linux, UNIX, et al).  Proxy Coupling Technology offers a solution to this long-standing challenge.  In theory, this allows for the transparent offload of a traditional z/OS workload (E.g. Sort) from General Purpose (GP) processors, to a less expensive (E.g. IFL) alternative…

In the first instance, the Proxy Coupling Technology offloads General Purpose CPU workload associated with the z/OS sort (I.E. CA Sort, DFSORT, Syncsort) function, to another platform (E.g. IFL).  For IFL based implementations, HiperSockets are utilized to transfer data at memory speeds from the z/OS task to zLinux on the IFL, where the sort operation completes, while the resulting z/OS task and associated data are maintained, as per normal.  From an IFL viewpoint, Ahlsort software performs the sort operation, being a sort solution that maintains compatibility with the majority of z/OS sort function (I.E. Control Card Syntax).  Therefore, this is a transparent implementation, where the only consideration is how much CPU capacity is required for the offload function (E.g. IFL, x86).  The benefit is reduced z/OS MSU usage for the sort function, which can be quite significant, as most business data (E.g. Database Offloads, Customer Orientated, et al) is sorted on a daily if not more frequent basis.

Just as IBM introduced the zAAP on zIIP capability, which allowed some customers to more easily justify a specialty engine (I.E. zIIP), combining workloads to exploit the full capability of the specialty engine; in theory the same ethos applies with the Proxy Coupling Technology.  For the avoidance of doubt, workloads that can be processed on an IFL, such as z/OS sort tasks, can assist in delivering higher Return On Investment (ROI) levels for the IFL, for example:

  • Reduced z/OS WLC MSU usage (I.E. Sort function offload) and associated software costs savings
  • IFL processors run at Full Speed and do not add to traditional workload (I.E. z/OS) software costs
  • Utilize any spare IFL CPU resource not used, releasing General Purpose CPU resource for other work

In conclusion, the Proxy Coupling Technology offers a proposition that is similar to the IBM philosophy of reducing z/OS software costs via specialty engines.  Seemingly to date, primarily only the zIIP and zAAP specialty engines have been available to optimize CPU usage for z/OS workloads.  Offloading CPU cycles and thus MSU workload to IFL makes sense, utilizing a cost efficient and indeed a full power CPU engine, where for cost reasons, maybe the majority of z/OS customers don’t deploy the “highest” derivative of General Purpose CPU engine available to them.  On the face of it, the realm of possibility exists for other workloads to benefit from z/OS to IFL CPU offload, following sort, which seems to make sense as the first workload to utilize this solution.

Apple Style Meets IBM Substance

It was the early 1980s when IBM first announced the Personal Computer (PC), a major breakthrough for delivering affordable and practical computing into the home.  One of the primary features of this computing evolution was the “open architecture” of the PC, built from off-the-shelf and commodity components.  Of course, we all know that around this time, DOS became MS-DOS via Bill Gates and Microsoft, where the rest, as they say, is history!

At this time the IBM Mainframe (1964) had nearly 2 decades of longevity and was already proving to be a scalable, secure and reliable platform.  So here we are, some 3 decades later, where Apple and IBM have announced a Global Partnership to Transform Enterprise Mobility.

Whatever your opinion of Apple technology, in the last decade or so they have undoubtedly delivered slick design and style for mobile devices, namely the smartphone and tablet.  Therefore whether the Enterprise accept the premise or not, Bring Your Own Device (BYOD) is inevitable, where employees expect to use their personal devices in the workplace.

IBM have continued to be a dominant force in the Enterprise market, whether with Mainframe technology or not, while establishing a credible presence in the Cloud market space.  As always the world of IT is constantly changing and even though IBM sold its PC business to Lenovo in 2004; some 10 years later, as part of the exclusive IBM MobileFirst for iOS agreement, IBM will sell iPhones and iPads with industry-specific solutions to business clients worldwide.

So what role if any will the IBM zSeries platform play in this Apple deal?  As always, the zSeries platform will deliver enterprise scalability and strength for Security, Database and Messaging integration, but beyond these features, I’m not so sure.  Of course, from a data presentation viewpoint, nothing changes, iOS integration and the ability to present Mainframe originated data remains forever thus for Apple and indeed all other mobile devices.  Similarly from a business transaction viewpoint, the zSeries platform participates in the delivery of mobile support, where from an IBM technology viewpoint, the Worklight solution is one example of an end-to-end integrated development studio software product.

Despite the obvious benefits for Apple, gaining access to the Enterprise via IBM technology and their customer base, and for IBM, delivering the market leading mobile technology into their customer base, what does this mean for the Enterprise?

Business as usual mostly, but Identity & Access Management (IAM) would appear to be a significant challenge.  Firstly, rightly or wrongly, most people don’t consider Apple software to have any security exposures, as the marketplace for iOS security solutions (E.g. Anti-Virus, Malware, zero day exploits, et al) is limited.  However, one might ponder why the Windows Operating System became such a target for the hacker.  Said hacker might be an opportunist, just because they can, or something more sinister, trying to gain government or business secrets.  So, if the Apple smartphone and tablet devices become ubiquitous if not de facto in the Enterprise, how long will it be before security exposures for iOS and related apps become commonplace?

I’m open-minded about BYOD (or am I)?  My heart tells me, yes, let the workers use their own device in the workplace, but my head tells me, no way!  Generally for technology decisions, my head always wins.  In this instance, I don’t think my head has a chance; overwhelming company worker desire to use their own mobile device in the workplace, whether iOS, Android, Java ME, Windows Phone, Blackberry, et al, will win out.  If this is the case, this is perhaps where the maturity and reliability of the IBM zSeries Mainframe can assist.

Therefore, at least for Identity & Access Management (IAM), secure access to the most valuable resource within an organization, the data itself via the zSeries server makes sense.  Whether this is via two if not several factor authentication remains to be seen.  However, I’m much more comfortable with an IAM solution that leverages from a Mainframe External Security Manager (ESM), namely ACF2, RACF or TopSecret, as opposed to a universal log-in via a Social Media web site, such as Facebook.  Just because you can log into an Enterprise and arguably mission critical CRM application, such as Salesforce via Facebook Authentication, doesn’t necessarily mean you should…

zIIP Into The Future: Mainframe Specialty Engines Evolution

Sometimes we might lose sight that change can be evolutionary as opposed to revolutionary and this certainly applies to IBM Mainframe specialty engines, for example:

  • 1997: Internal Coupling Facility (ICF)
  • 2000: Integrated Facility for Linux (IFL)
  • 2004: System z Application Assist Processor (zAAP)
  • 2006: System z Integrated Information Processor (zIIP)

To assist with lower IBM software pricing, arguably the ICF offering became the de facto standard for a Mainframe user to be considered “actively coupled”, deploying two or more eligible IBM Mainframes, physically attached via coupling links to a common Coupling Facility (I.E. ICF).

The Integrated Facility for Linux (IFL) is a processor dedicated to Linux workloads on IBM System z servers.  The IFL is supported by the z/VM virtualization software and the Linux operating system.  Most customers have at least dabbled into this technology, while some are using this technology extensively, primarily for distributed server consolidation.

Somehow the zAAP specialty engine has become the “black sheep” of the family where the current zEC12 and zBC12 are planned to be the last System z servers to offer support for zAAP specialty engine processors.

As of z/OS V1.11, functionality was delivered enabling zAAP eligible workloads to run on zIIP engines.  This function allowed both zIIP and zAAP-eligible workloads to process on the zIIP.  This capability was ideal for customers with insufficient zAAP or zIIP eligible workload to justify a specialty engine, whereas the combined eligible workloads increased the ROI metrics for zIIP deployment.  The zAAP specialty engine is primarily targeted for web-based applications and SOA-based technologies, namely Java and XML.

So for z/OS type workloads, we must “zIIP Into The Future”…

Sometimes we need to look at the big picture, where the IBM organization is comprised of many business units, including the Mainframe business unit.  The Mainframe business unit itself contains many groups, including, but not limited to, the Hardware and Software groups.

As we all know, z/OS software TCO is significant and so this translates into higher revenues for the IBM Mainframe software group; but what about the IBM Mainframe hardware group?  Perhaps the specialty engines, primarily in the form of the zIIP, will generate a revenue stream for this business unit.  Along with the introduction of the zBC12 & zEC12 servers, IBM increased the zIIP to General Purpose (CP) engines ratio to 2:1; meaning you can have 2 zIIP specialty engines with the same capacity as an associated CP engine.  Previously the maximum ratio allowed was 1:1 (Specialty:CP).

What workloads are zIIP eligible?  Over time and since 2006 the amount of workload that is zIIP eligible has increased, primarily due to software development and upgrade efforts of IBM and the 3rd party ISV community:

  • DB2 for z/OS exploits the zIIP capability for portions of eligible data serving (E.g. DRDA workloads arriving via DDF; see the sketch after this list), pureXML and utility workloads
  • Other 3rd party DBMS solutions, including ADABAS & IDMS offload workload to zIIP
  • Most Systems Management tools (E.g. OMEGAMON, MAINVIEW, RMF, SYSVIEW, et al)
  • z/OS XML System Services for eligible XML validating and non-validating workloads
  • Other z/OS functions including z/OS Communications Server, Global Mirror, CIM Server, et al
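
As a practical illustration of the DB2 case, remote SQL arriving through DDF (I.E. DRDA, as used by the JDBC Type 4 driver) is a major source of zIIP eligible work; the distributed Java application itself requires no change to benefit. A minimal sketch, where the host, port, location and credentials are hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Minimal sketch: a JDBC Type 4 (DRDA) connection into DB2 for z/OS via
    // DDF; a significant portion of such distributed SQL processing is zIIP
    // eligible. Host, port, location and credentials are hypothetical.
    public class DrdaClient {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:db2://zos.example.com:446/DB2LOC";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT NAME FROM SYSIBM.SYSTABLES WHERE CREATOR = ?")) {
                ps.setString(1, "SYSIBM");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }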

What are the benefits of deploying a zIIP specialty engine?

  • Lower acquisition and maintenance costs, when compared with general CP
  • zIIP engines run at full rated CP speed
  • Offload work (CPU) from General Purpose (CP) engines
  • No cost for Sub-Capacity eligible IBM software (I.E. WLC)

So, one must draw one’s own conclusions, but seemingly the deployment of zIIP engines is a “no brainer”!

Hmmm, once again, evolution is a good thing and the zIIP engine has an 8 year history and its predecessor zAAP, a 10 year history.  This ~10 year period has allowed for user experiences and IBM function developments to evolve a more stable and rounded offering and as previously stated, a product for the IBM Mainframe Hardware group to focus upon.

From a customer viewpoint, zIIP deployment requires a Capacity Planning evolution, which should be reasonably straightforward.  The big difference is the CP to zIIP offload consideration and some of the lessons learned include (a worked example follows this list):

  • Software costs – Multiple-Processors; CP to zIIP Offload Rate; zIIP utilization
  • Hardware costs – Installed Books (total MSU/MIPS capacity); Additional LPAR(s)
  • Peak CPU utilization – Safeguard that zIIP exploitation reduces peak CPU usage
  • CPU per Transaction – Slight increase in CPU (not necessarily elapsed time) as workload switches from CP to zIIP
  • zIIP utilization – Early experiences indicate ~50% zIIP engine busy is a good number
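
To make the offload arithmetic concrete, consider a simple back-of-envelope model; the figures below are entirely hypothetical and any real sizing exercise should be based on measured SMF data:

    // Minimal sketch: back-of-envelope zIIP offload arithmetic with entirely
    // hypothetical numbers; real sizing should be based on measured SMF data.
    public class ZiipOffloadModel {
        public static void main(String[] args) {
            double peakCpMips = 1000.0;   // hypothetical peak General Purpose usage
            double eligiblePct = 0.40;    // hypothetical zIIP eligible proportion
            double offloadRate = 0.90;    // hypothetical share actually dispatched on zIIP

            double offloadedMips = peakCpMips * eligiblePct * offloadRate;
            double remainingCpMips = peakCpMips - offloadedMips;

            System.out.printf("Offloaded to zIIP: %.0f MIPS%n", offloadedMips);
            System.out.printf("Remaining on CP:   %.0f MIPS (drives WLC software cost)%n",
                remainingCpMips);
        }
    }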

In conclusion, zIIP deployment has been gradual and evolutionary, but many factors indicate that zIIP is here to stay and it is the future.  Seemingly from an IBM viewpoint, there is benefit for the Mainframe Hardware Group in terms of the eradication of the zAAP engine, the increase in the zIIP:CP ratio to 2:1 and the associated customer benefits of Sub-Capacity software pricing.  From a customer viewpoint, ignoring these pointers might not be wise, as z/OS software costs are significant and CPU resource requirements keep increasing.  Adding extra zIIP CPU capacity reduces hardware and associated software costs and so this is the “no brainer” observation that can’t be ignored for much longer…