Enterprise Tech Journal 2016: Issue 5 : Page 55

the broader requirements of our business processes and the data they are accumulating. So, the new attention to data management is a good thing and very timely, given the burgeoning volume of data.

The first problem is figuring out how to classify data. IBM's ancient model for data management, Information Lifecycle Management (ILM), held that to manage data assets through their useful life, it was first necessary to classify the data: that is, to group similar data into classes that would receive roughly similar data protection, security and archiving services for roughly the same amount of time. Methods and techniques have been proposed for doing this, but a best practice has yet to reveal itself. For now, we are focusing heavily on metadata (data about data that is usually part of the file or object structure) to signpost data class and management requirements.

Sifting through zettabytes' worth of data from the Internet of Things, mobile commerce and the data democracy (the conversion of analog data into digital forms) will be a Herculean undertaking. That is part of the reason why "cognitive" has come into vogue as a descriptor for data management. We will need petaflops and beyond of processing cycles, roughly approximating the human brain, to parse all the metadata in order to manage all the bits (and to help identify data that can be purged, too).

Good work is being done in this space by IBM and also by smaller players with fresh ideas, like StrongBox Data Solutions (SDS) of Montreal. SDS is working on a technology platform called StrongLINK that uses cognitive processes not only to classify data, but also to collect real-time status on storage devices and interconnects, so that data is moved in the most efficient way between storage tiers and load is balanced efficiently across storage area networks. For all its emphasis on Watson and cognitive computing, IBM missed the boat on the term "cognitive data management," but I am sure it will shortly catch up.
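To make the ILM idea concrete, the classify-then-place workflow described above can be sketched in a few lines of Python. This is a minimal illustration only, not StrongLINK's or IBM's actual implementation: the class names, storage tiers, age thresholds and retention periods are all invented for the example, and simple rules stand in for the "cognitive" classification step.

```python
# ILM-style sketch: classify files by metadata, then consult a policy
# matrix to decide which storage tier each class of data belongs on.
from dataclasses import dataclass
import time

@dataclass
class FileMeta:
    path: str
    size_bytes: int
    last_access: float  # epoch seconds
    owner: str

# Step 1: classify data from its metadata (hypothetical rules stand in
# for a trained "cognitive" model).
def classify(meta: FileMeta, now: float) -> str:
    age_days = (now - meta.last_access) / 86400
    if age_days < 30:
        return "active"
    if age_days < 365:
        return "warm"
    return "archive"

# Step 2: a policy matrix mapping data class -> placement and retention.
POLICY = {
    "active":  {"tier": "flash", "replicas": 2, "retain_years": 1},
    "warm":    {"tier": "disk",  "replicas": 1, "retain_years": 3},
    "archive": {"tier": "tape",  "replicas": 1, "retain_years": 7},
}

# Step 3: the "data mover" decides placement (here it only reports
# the move rather than migrating anything).
def place(meta: FileMeta, now: float) -> str:
    cls = classify(meta, now)
    tier = POLICY[cls]["tier"]
    return f"{meta.path}: class={cls}, move to {tier}"

if __name__ == "__main__":
    now = time.time()
    files = [
        FileMeta("/data/report.docx", 120_000, now - 5 * 86400, "alice"),
        FileMeta("/data/logs-2014.tar", 9_000_000, now - 900 * 86400, "bob"),
    ]
    for f in files:
        print(place(f, now))
```

In a real system the classification step would weigh many more metadata attributes (owner, type, compliance flags), and the policy matrix would also account for real-time device status, which is where the cognitive angle earns its keep.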
The StrongLINK technology actually fulfills all the requirements of IBM's original ILM model. It enables data classification as well as storage classification, and provides a policy matrix for determining where to place which class of data at any given time or under specified conditions. It also finally delivers a data mover for behind-the-scenes migration of data from one physical kit to another, whether on premises or in a cloud somewhere. In short, it blows the socks off the last effort you may recall in this realm, EMC's ILM of the early Aughts, which provided a data mover but left the heavy lifting of data classification, storage classification and lifecycle management policy formulation up to the consumer.

So, prepare to be inundated by cognitive data management everywhere. It is the new frontline in the battle for IT efficiency. ETJ

Jon William Toigo is a 30-year veteran of IT and the author of 17 books. He is also CEO and managing principal partner of Toigo Partners International, an analysis and consulting firm servicing the technology consumer in seven countries. Email: jtoigo@toigopartners.com
