
Enterprise Storage – Stop the Madness

Guest Blog Post by John Bennett

Recently I was visiting my favorite co-location data center in Tokyo when I saw two young technologists attempting to push a heavily laden cart of brand-new gear, still in its neonatal ESD bags, over a door jamb. Their ID badges revealed them as employees of a well-known global investment bank. In a thinly veiled maneuver to satisfy my curiosity, I offered to help. After a few tedious moments we had surmounted the obstacle. Panting a bit, the two young men thanked me profusely for lending an extra back to their burden. It was then that I realized what I had been lifting: a brand-new disk array with Fibre Channel storage processors.

Fibre Channel… in 2014.

Well, I thought, perhaps they were adding storage to a mainframe, or it was an upgrade to an existing solution. My curiosity piqued, I asked.

No, they said. It was storage for a new component of a customer-facing web application.

The exchange bothered me for the rest of the afternoon. When I arrived at the office the next day, I penned some rough specifications, put in a request for a budgetary quotation, and scribbled out a high-level work breakdown structure (WBS) and a rough-order-of-magnitude estimate for a project to deliver 100TB of replicated, geographically diverse disk using a technology similar to what I had seen the day before.

A couple of days later the numbers came back from the storage vendor. When I put it all together, what I discovered was shocking. The effective all-in five-year cost of ownership for the disk array I had pushed over a one-centimeter piece of aluminum the day before was somewhere around $2.3 million USD. That figure included the cost of the array, networking, installation and project labor, cabling, rack space, power, and maintenance.
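To put that in perspective, here is a back-of-the-envelope check of the effective unit cost, a rough sketch using only the totals quoted above (the real breakdown will of course vary by vendor and contract):

    # Back-of-the-envelope check using the quoted figures: ~$2.3M all-in
    # over five years for 100TB of replicated capacity.
    total_cost_usd = 2_300_000   # array, networking, install labor, cabling, rack, power, maintenance
    usable_tb = 100
    months = 5 * 12

    per_tb_lifetime = total_cost_usd / usable_tb   # ~$23,000 per TB over the life of the array
    per_tb_month = per_tb_lifetime / months        # ~$383 per TB per month
    per_gb_month = per_tb_month / 1000             # ~$0.38 per GB per month

    print(f"${per_tb_lifetime:,.0f}/TB lifetime, ${per_tb_month:,.0f}/TB/month, ${per_gb_month:.2f}/GB/month")

However you slice it, that is an expensive gigabyte for a customer-facing web application.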

Most of us have had to help a business executive through technology sticker shock before; I’m sure this project was no exception. These conversations typically feature catchphrases like “investing in scalability” and “enterprise-grade availability and fault tolerance,” and they last about as long as it takes for the person holding the purse strings to glaze over and open their wallet. But we’ve now been preaching the cost savings of virtualization and private clouds for well over a decade. How many of us are still spending megabucks on old legacy or, even worse, new legacy disk arrays and SAN fabric switches? When will our adherence to these now-ancient technologies become an existential risk to the enterprise technologist as a species? A reasonable argument could be made that we’ve all been made obsolete and we just don’t know it yet.

We have to stop the storage madness before it’s too late.

The narrative of the counterargument goes something like this: infrastructure, properly run, is a utility service. As such, it is largely defined by the requirements of the layers of the technology stack that depend on it. Infrastructure technologists can only independently change their part of the investment insofar as that change doesn’t impact the layers above it. Put another way, no matter how much shiny new whiz-bang object store capacity I can offer in my data center, it does absolutely no good if the business I support runs on applications that were written when monolithic RDBMSs dominated the earth. In this context, it’s understandable why some might think enterprise storage a lost cause. I’d like to argue the opposite: enterprise storage presents a ripe opportunity to add value.
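For readers who have only ever provisioned LUNs, it is worth being concrete about what that “whiz-bang object store” looks like from the application side. Here is a minimal sketch against a generic S3-style API using boto3; the endpoint, bucket, and credentials are placeholders for illustration, not any real deployment:

    # Minimal sketch: storing and retrieving an object over an S3-compatible API.
    # The endpoint, bucket, and credentials below are hypothetical placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.internal",  # any S3-compatible service
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Write: no LUN provisioning, zoning, or filesystem layout required.
    with open("acct-123.pdf", "rb") as f:
        s3.put_object(Bucket="customer-docs",
                      Key="statements/2014/06/acct-123.pdf", Body=f)

    # Read it back later by key, from anywhere that can reach the endpoint.
    obj = s3.get_object(Bucket="customer-docs",
                        Key="statements/2014/06/acct-123.pdf")
    data = obj["Body"].read()

The point of the sketch is the contrast: capacity behind an interface like this grows by adding nodes, not by forklifting in another array, which is exactly why an application written against a monolithic RDBMS can’t take advantage of it without some investment.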

The fact of the matter is that now is the time to be vocal about next-generation infrastructure technologies of all stripes. “Big Data” is no longer just a buzzword. It’s a reality, and often an unwelcome one for established firms. Pressure from without, as more agile cloud-native ventures close in on the market share of mature firms, is converging with pressure from within to add features and capacity to legacy BI and CRM systems. Legacy platforms the world over are straining under impossible loads, and technology departments are straining under demands to meet bottom-line targets that simply can’t be met with technology architectures from 1988.

When mainframes gave way to midrange UNIX, and again when the Internet changed everything forever, the big winners were the ones who could turn technological transformation to their stakeholders’ advantage. A similar change is happening in enterprise storage right now. The past has shown us that leading a revolution is far preferable to being swept away by it.

About Author

John is a technologist with 20 years of experience on the front lines of technology infrastructure and operations. His focus is the application of scientific, data-driven quality management techniques to high-risk technology operations. He is currently based in Tokyo.
