Enterprise Executive - 2017: Issue 2
ITOA And Mainframe: Why It Matters In 2017–18
Steven Menges 2017-04-12 02:27:11
Organizations are adopting the principles and tools of ITOA (IT operations analytics) at such a pace that one recent industry report projects the ITOA market will grow from $2.2 billion to $9.8 billion by 2020. While many of the early adopters are not yet enterprises with mainframes, that is changing, so you need to understand what ITOA is, why it is relevant, how it is adopted, how it is used (and by whom) and why "ITOA" and "mainframe" are becoming almost synonymous. One could argue that 2017-18 is the period during which your company or organization will either lead, compete or be left behind when it comes to leveraging ITOA. The time to lead in this area is this year; I can personally attest that many dozens of your peers are already engaged in ITOA projects powered by machine data.

What Is ITOA

ITOA [ahy-toh-ah] noun (an acronym for IT operations analytics):
1. The process of collecting, managing and analyzing high-volume IT data to recognize patterns and anomalies.
2. The approach being adopted by leading enterprises to reduce mean time to repair (MTTR), ensure service level agreement (SLA) adherence, reduce operating costs and drive both strategy development and innovation efforts.

That same study points out that the ITOA space is dominated by "real-time log analytics," also known as machine data-powered Big Data solutions (with application performance management checking in below it), so we'll focus there for enterprise ITOA.

Why ITOA?

The prospect of simplifying and improving the IT operations function while simultaneously gaining operational insights, just by analyzing your own machine data, is tantalizing. Unless your enterprise environment is relatively static and the volume (and complexity) of your operational data is not increasing, you know that change is happening and that you will need to change as well.
Big Data analytics has come of age (finally), and ITOA is the use case that applies to every type of business, government and other organization of any real size. With the help of ITOA solutions, enterprises can quickly spot anomalies, patterns and the like to get to the root cause of problems (or potential problems) exponentially faster than otherwise possible, and do so with the analytics tools readily at hand. They can also use this Big Data for more effective capacity planning and to better predict future resource requirements, issues and even costs (so they can act to reduce or prevent them). The ability to visualize all parts of your sprawling systems and solve IT-related problems faster, regardless of scale or complexity, is compelling to every vice president of IT infrastructure or IT operations, every CIO and everybody else responsible for making IT work every day for the business or mission. What is really driving adoption at an amazing rate is the further promise of using that same data to predict, automatically address and even prevent problems. Most organizations prefer to get the analytical results as interactive charts, dashboards and graphs, which broadens the pool of potential users who can work with a modern ITOA tool and get its value easily and directly. The learning curve for users is minimal. And that matters. So the "why" is abundantly clear.

How Is ITOA Analysis Done?

Logs, logs, logs: a stream of logs such that all the machine data is aggregated, indexed, housed and made uniquely accessible and searchable for visualizations, reports and more. Log data records the IT activity across an organization. The types and sources of these machine-generated files are many and varied, including system logs, web server logs and application logs.
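To make the aggregate-index-search pipeline described above concrete, here is a minimal sketch in Python. The log lines, field names and regular expression are invented for illustration; a real ITOA tool such as Splunk does this at far greater scale, with far richer parsing, and in real time.

```python
import re
from collections import defaultdict

# Hypothetical log lines standing in for aggregated machine data
# (web server, application and system logs from many sources).
LOGS = [
    "2017-04-12T02:27:11 websrv01 ERROR status=500 path=/checkout",
    "2017-04-12T02:27:12 appsrv02 INFO  status=200 path=/login",
    "2017-04-12T02:27:13 websrv01 ERROR status=500 path=/checkout",
]

LINE_RE = re.compile(
    r"(?P<ts>\S+) (?P<host>\S+) (?P<level>\S+)\s+"
    r"status=(?P<status>\d+) path=(?P<path>\S+)"
)

def index(lines):
    """Parse each line into fields and build an inverted index:
    (field, value) -> set of line numbers containing it."""
    events, inverted = [], defaultdict(set)
    for n, line in enumerate(lines):
        m = LINE_RE.match(line)
        if not m:
            continue  # a real tool would keep unparseable lines as raw text
        fields = m.groupdict()
        events.append(fields)
        for key, value in fields.items():
            inverted[(key, value)].add(n)
    return events, inverted

def search(inverted, **criteria):
    """Return line numbers matching all field=value criteria (AND semantics)."""
    sets = [inverted[(k, v)] for k, v in criteria.items()]
    return sorted(set.intersection(*sets)) if sets else []

events, inverted = index(LOGS)
print(search(inverted, host="websrv01", status="500"))  # -> [0, 2]
```

Once the data is indexed this way, the same query primitives feed visualizations, reports and dashboards rather than returning raw line numbers.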
The proprietary elements of the indexing and searching are a big part of the "how." So is collecting all the machine data from all sources and getting it to the ITOA tool on a real-time basis. Collecting SMF files, logs and other machine data in the z/OS environment; correlating that data in an ITOA tool, in real time, with data from the rest of your enterprise; and subjecting all of it to operational analytics: that is the "how."

Who "Uses" ITOA?

Okay, but who exactly gets to use these ITOA solutions and exploit their insights? There are two primary groups of users, and both relate to the mainframe, or the "glass house" as it is also known. One is the business IT or Big Data team (which typically sits on the distributed-systems side of the house or in a line of business [LOB]), and the second is the mainframe IT team. IT operations on the business IT side of the house is typically responsible for the network, storage, systems and facilities in the IT environment. That function sometimes also owns problem management and helpdesk operations, and it often owns risk and compliance matters, fully or partially. IT ops will tell you they focus on IT's "value to the business"—that they are the real drivers of both innovation and strategy. Whether mainframes are uppermost in their minds when they make these statements is another question. It varies from one company to the next, but either way, they have big and evolving needs that ITOA can address, and their eyes get wide when they see what an ITOA solution in 2017 can do (which is almost immediately after standing it up, in most cases). The people in mainframe IT, by contrast, are using ITOA for day-to-day performance tuning and widely varying other purposes. These users were not targeted by ITOA vendors; their use cases evolved on their own.
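The correlation step above can be pictured as merging two time-ordered event streams into one timeline. The event tuples below are invented for illustration (in practice an agent streams SMF and SYSLOG data off z/OS in a structured form, and a forwarder ships the distributed logs), but the merge-by-timestamp idea is the essence of cross-platform correlation:

```python
import heapq

# Hypothetical, already-parsed event streams: (timestamp, source, message).
# Both arrive roughly time-ordered from their respective collectors.
mainframe = [
    ("2017-04-12T02:27:10", "z/OS SMF", "CICS transaction abend"),
    ("2017-04-12T02:27:14", "z/OS SYSLOG", "DB2 deadlock detected"),
]
distributed = [
    ("2017-04-12T02:27:11", "websrv01", "HTTP 500 on /checkout"),
    ("2017-04-12T02:27:15", "appsrv02", "retry queue growing"),
]

# Merge the two sorted streams into a single timeline, so mainframe and
# distributed symptoms of the same incident appear side by side.
# (ISO-8601 timestamps sort correctly as plain strings.)
timeline = list(heapq.merge(mainframe, distributed))
for ts, source, msg in timeline:
    print(f"{ts}  {source:12}  {msg}")
```

Seen this way, a mainframe abend followed one second later by HTTP 500s on a web tier stops being two unrelated tickets and becomes one correlated incident.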
They are typically a few clever mainframers not satisfied with their current performance monitors, capacity planning and IT troubleshooting tools. When they see what modern ITOA tools can do, they look beyond that and envision new ways to view, analyze and improve their operations, and perhaps find new, better approaches to capacity planning, QA performance testing and other functions. Once all that log data, both historical and real-time, is searchable, ITOA apps can quickly be customized to create dashboards that deliver exactly what is needed and allow for deeper dives when necessary. For the mainframe IT function, ITOA dashboards can be tailored to address the long-standing question of how to "modernize the mainframe" so that new or less experienced mainframe staff can do real work and add value right away. That's the power of current ITOA technology for mainframe and distributed audiences alike.

Why Now (for Mainframe Enterprises Like Yours)?

For the past few years, many organizations, particularly in banking, insurance, manufacturing and government, have been experimenting with Big Data analytics tools for ITOA. All have something in common: the z/OS mainframe. Because industry leaders like Splunk, Hadoop and Elastic offer free versions that let organizations see what the visualizations and dashboards look like when fed real-time machine data, one or more of these ITOA solutions is likely already running somewhere in your organization, with a data scientist or even an IT infrastructure or IT operations professional at the helm. Add the fact that enterprises can now stream mainframe machine data, also via a free version, so that it can be correlated with its distributed and cloud counterparts, and you can see how the "proof of concept" phase for ITOA may be complete or well underway in many places (albeit often led by Big Data specialists who may not even be aware the organization has a mainframe).
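Under the hood, a tailored dashboard panel of the kind described above is just an aggregation over the indexed events. A minimal sketch, using invented event records rather than any real tool's query language:

```python
from collections import Counter

# Invented, already-parsed events: (subsystem, severity).
events = [
    ("CICS", "ERROR"), ("DB2", "WARN"), ("CICS", "ERROR"),
    ("MQ", "INFO"), ("DB2", "ERROR"), ("CICS", "WARN"),
]

# One dashboard panel: error counts per subsystem, highest first.
errors = Counter(sub for sub, sev in events if sev == "ERROR")
for subsystem, count in errors.most_common():
    # A dashboard would render these counts as a bar chart;
    # here we just print the underlying numbers.
    print(f"{subsystem}: {count} errors")
```

The "deeper dive" is then simply re-running the search with narrower criteria (one subsystem, one time window) against the same indexed data.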
What they do understand is that by analyzing all your relevant machine data, you see the bigger picture of what is happening in your environment and make better decisions. You can't manage what you can't see, and since your enterprise's IT operations span both mainframe and distributed (and/or cloud) computing, your ITOA solution needs to do so as well.

What's Next for You

If you're not yet engaged in an ITOA or other IT operational analytics project with a vendor like Splunk, seek out your Big Data team, see what they're evaluating or using, and remind them that likely half the log or "machine" data they need to drive the insights they and their LOBs want (and that you can also use) is currently trapped in your mainframe. Those SYSLOG, Log4J, USS, RMF, SYSOUT and SMF logs, which can reveal such things as Abend-AID data, are essential for valuable ITOA insights, actionable improvements and even compliance. Then give them the news that Big Data-powered ITOA, based on all the relevant data in your enterprise, is now possible and can even be trialed for free (with a little help from the mainframe team, of course). If your organization doesn't have any Big Data efforts underway, that's your opportunity to help drive an effort that can have a big impact on key areas that you, your colleagues and your company care about.

So, to recap, 2017-18 is a critical period for ITOA adoption, during which your organization has the opportunity to lead, gaining the operational and competitive advantages of custom dashboards and apps that drive differentiation, or merely to compete. It's the year to join the early adopters or be left behind when it comes to leveraging ITOA, and to drive significant performance and cost improvements for both mainframe and overall IT operations. Given the easy on-ramps and rapid time-to-value, missing the ITOA train just doesn't seem like a viable option, does it?
Steven Menges is a B2B innovator and products executive with 20 years' progressive experience. He owns the product management and marketing functions for Big Iron and Big Iron-to-Big Data solutions at Syncsort. A frequent industry author and speaker, he is an adjunct professor and capstone/thesis advisor in the NYU Master's in Management and Systems and Master's in Integrated Marketing programs, and is the co-developer of the Business-to-Business Marketing Maturity Model. Email: email@example.com
Published by Enterprise Systems Media.