Cal Braunstein 2017-04-12 02:33:15
In many industry sectors, critical system-of-record data is contained in massive mainframe databases, creating bottlenecks and hindering an enterprise's shift to a digital economy. Some firms have addressed the problem by creating clones or caches to handle the real-time transactions and then, in batch mode, reconciling the updates with the mainframe systems of record. This kludgy approach may solve the performance problem but creates other issues. A better method is to make the data available from behind the mainframe applications without moving or copying it. This approach enables orders of magnitude more transactions, accelerates the enterprise's move into the digital economy, reduces costs and virtually eliminates reconciliation.

IT executives and architects need to enable the transformational shift to the digital economy before customers find alternative suppliers. Therefore, they should develop and execute a low-impact, tactical move that unshackles the mainframe databases so that distributed applications and microservices can access the critical data.

In many industries, such as financial services and healthcare, more than 80 percent of the structured data still resides on a mainframe. While this design worked well 20 or more years ago when the architecture was first put in place, times have changed significantly, and the shift to the digital economy is massively changing the workload topology again. When the architecture was first established, read/write ratios were in the 4:1 range; today, in a digital world where individuals can check their accounts multiple times per day, read/write ratios have exploded to more than 1,000:1. This transactional overload has created mainframe application bottlenecks and impaired an organization's ability to build responsive end-user apps without overhauling or jury-rigging the underlying data center infrastructure.
The Traditional Band-Aid

To satisfy business needs, IT architects have traditionally created a number of caches or cloned copies of the mainframe databases, one for each of the distributed applications involved. Since the mainframe is the only server with a shared-everything database architecture, a unique in-memory cache or cloned database must be built for each application and server (logical or physical) that needs access to the data. Moreover, these new databases are not read-only. Thus, at the end of the day, or at some points during the day, synchronization tasks must be executed to get all the databases in sync and reconcile all the differences. This is not much of a problem when the number of transactions is small. But when there are hundreds or thousands of applications and databases and the daily transaction volume becomes quite large, it can create a large corporate risk exposure. Moreover, there may not be enough time during the batch window to reconcile all the differences and get all the databases back in sync before the process begins all over again, raising the risk level even more.

The Open Access Alternative

The bottlenecks to mainframe databases are caused by all requests being funneled and queued through the mainframe applications (see the circled 1 in Figure 1). This need not be the case; the dependency is a relic of a legacy architecture built decades ago. As stated above, the vast majority of transactional requests are read-only, which means these requests could be offloaded from the existing mainframe without modifying the architecture to include database locking. Opening access to the mainframe databases entails creating a back-end mainframe database server (logical or physical) combined with high-performance flash (see the circled 2 in Figure 1) and a load balancer (see the circled 3 in Figure 1) that can route all read-only requests to alternate paths.
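The load balancer's routing decision can be sketched in a few lines. This is a minimal illustration of the read/write split, assuming a simple request classifier; the request shape and target names are hypothetical, not part of any vendor's product.

```python
from dataclasses import dataclass

# Hypothetical target names for the two paths described above.
MAINFRAME_APP = "mainframe-application"  # existing locking path, kept for writes
READ_ONLY_PATH = "back-end-db-server"    # alternate path for read-only requests

@dataclass
class Request:
    operation: str  # e.g. "SELECT", "INSERT", "UPDATE", "DELETE"

def route(req: Request) -> str:
    """Send read-only requests down the alternate path; anything that
    writes keeps the original mainframe application path, so no new
    database locking needs to be introduced."""
    if req.operation.upper() in ("SELECT", "READ"):
        return READ_ONLY_PATH
    return MAINFRAME_APP

# A balance check (read) is offloaded; a funds transfer (write) is not.
print(route(Request("SELECT")))  # back-end-db-server
print(route(Request("UPDATE")))  # mainframe-application
```

With read/write ratios above 1,000:1, a split like this diverts the overwhelming majority of traffic away from the mainframe application path, which is the whole point of the design.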
Then one can activate multiple copies of the mainframe applications to handle the read-only requests that require manipulation by the application prior to presentation to the front-end applications (see the circled 4 in Figure 1). All other read-only requests can go directly to the database server via API or SOA interfaces (see the circled 5 and 6 in Figure 1). These changes enable thousands of simultaneous database requests and eliminate the bottlenecks without the need to clone database copies or endure reconciliation and synchronization headaches.

Summary

The new entrants in the digital economy are driving down transactional costs, which requires incumbent organizations to transform their processes and lower their cost structures. Thus, the traditional method of simply adding new applications to the existing workloads, which lengthens latency, causes bottlenecks, increases costs and exposes the company to sync and reconciliation risks, is no longer feasible. Business executives will have to redefine their business models to support the new methodologies and address low-cost competitors. By employing an open access mainframe database architecture, IT executives and architects will be able to take the first steps toward becoming a digital enterprise even though most of the data resides in monolithic mainframe databases.

IT executives should consider the open access mainframe database model as a way to satisfy current business demands without creating undue risks. Once the mainframe data is no longer bound by access through existing mainframe applications, IT executives and architects can begin to develop and implement new target architectures that keep the enterprise competitive over the long term.

Cal Braunstein is CEO and executive director of research for Robert Frances Group. Additional relevant research and consulting services are available. Email: firstname.lastname@example.org
Published by Enterprise Systems Media.