Outsourcing was an easy win to reduce IT costs. And executives loved it. They loved it so much that giants like Accenture now rely on outsourcing for almost half their revenue (in March, Accenture posted $3.91B in outsourcing net revenues, 47% of its total). Even Gartner’s Hype Cycle shows that outsourcing is reaching maturity.
Former Infosys CEO Vishal Sikka said, “We will not survive if we remain in the constricted space of doing as we are told, depending solely on cost arbitrage.” That is, there’s significantly less money in future “labor arbitrage,” and if companies, especially Global System Integrators, want to continue the kind of 16%-per-annum growth curves they’ve seen, they have to turn to a different type of arbitrage. Loosely stated, arbitrage is just the buying and selling of assets in different markets or forms to take advantage of price differences. Labor was cheap in India and expensive in America. So, voila, outsourcing made sense.
A new kind of arbitrage is emerging: Data Arbitrage. What exactly do I mean by data arbitrage? I mean that today we pay a hefty price to get our data where we want it, when we want it, and that there’s a significant price difference for delivering that same data with a DataOps solution. Same asset. Radically different price. Huge opportunity to leverage the difference.
Specifically, what’s the data arbitrage opportunity for application testing? Consider five arbitrage opportunities:
Data is impersonal. The data most testers use is either shared among many people (because cost forces them to use too few environments) or made personal at enormous cost (because making 100 copies of data for 100 testers isn’t free).
Arbitrage opportunity: If we can give every tester their own environment, but do so at an extremely low cost, we get the benefit of decoupling different testing pathways without the cost of proliferating the hardware and storage to support those pathways. Through data virtualization, DataOps tools can accomplish exactly that.
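Here’s a minimal sketch of the idea. The DataOpsClient API below is a hypothetical stand-in for what such tools expose; the point is that one golden snapshot can back any number of copy-on-write clones:

```python
class DataOpsClient:
    """Hypothetical client for a data-virtualization platform."""

    def snapshot(self, source_db: str) -> str:
        # Capture a point-in-time image of the source.
        return f"{source_db}@golden"

    def clone(self, snapshot: str, name: str) -> str:
        # Copy-on-write: 100 clones cost little more than one physical copy,
        # because unchanged blocks are shared rather than duplicated.
        return f"virtual-db:{name}"

client = DataOpsClient()
snap = client.snapshot("erp_prod_replica")

# Every tester gets a private, writable environment in minutes, not weeks.
envs = {t: client.clone(snap, f"test_env_{t}") for t in ("alice", "bob", "carol")}
print(envs)
```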
Data is insecure. The dirty secret of many IT shops is that they pay lip service to masking – either they use crude homegrown solutions rife with security holes, or they find ways to “exempt” themselves through exceptions. Further, those that do mask well usually don’t mask often because of the delay it imposes on getting data and the enormous expense of keeping masked copies around.
Arbitrage opportunity: If we can consistently mask every non-production environment before handoff, we significantly reduce risk. If we can mask continuously, we can mask often. And if we can provide those masked environments without the cost of proliferating hardware and storage, we get much lower-risk data that doesn’t impede our application delivery pipeline. Through integrated masking and through features that support distributed referential integrity (making sure that a name or a number is masked consistently across heterogeneous data sources), DataOps tools can accomplish exactly that.
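One common technique behind that kind of distributed referential integrity is deterministic masking: derive the masked value from a keyed hash of the original, so the same input masks to the same output in every data source. A minimal sketch (the key and substitution list here are illustrative; real tools add format preservation and proper key management):

```python
import hashlib
import hmac

# Illustrative only: a real deployment keeps the key in a secrets manager.
MASKING_KEY = b"rotate-me-and-keep-me-out-of-source-control"

ALIASES = ["Avery", "Blake", "Casey", "Drew", "Emery", "Finley"]

def mask_name(value: str) -> str:
    # Keyed hash -> stable index into a substitution list, so "Priya" masks
    # to the same alias in the CRM, billing, and warehouse copies alike.
    digest = hmac.new(MASKING_KEY, value.strip().lower().encode(),
                      hashlib.sha256).digest()
    return ALIASES[int.from_bytes(digest[:4], "big") % len(ALIASES)]

assert mask_name("Priya") == mask_name("priya ")  # consistent across sources
print(mask_name("Priya"), mask_name("Jorge"))
```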
Data is tethered. It’s not that we never move it. It’s that once we put data on a host and in a database server, it’s hard to disentangle it from that host and server. And since moving it is hard, we erect more barriers to movement to mitigate the risk.
Arbitrage opportunity: If we can radically lower the cost of data mobility in time (moving the time pointer on a dataset) and in space (moving the dataset from one host/server to another), we unlock a productivity avalanche. A tester can promote a 5 TB application from one environment to another in minutes, not days. If mobility is easy and copies are almost free, then a tester can share a bug with a developer almost instantly and then continue working in another pathway without fear of data loss. The arbitrage opportunity in driving the cost of a context switch and the cost of error to near zero can’t be overstated. Breaking all of the key dependencies in the tester’s workflow has enormous speed and quality consequences because no one depends on anyone else anymore. And that dramatically lowers the total cost of Concept-to-Realization for any feature of an application. DataOps tools make near-zero-cost, near-zero-time context switches a reality, and that means both the enormous cost of error we experience and the controls we build around it can be driven down.
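Here’s what that workflow could look like in practice. TimeflowClient and its bookmark/branch/rewind calls are hypothetical stand-ins for what DataOps platforms provide:

```python
class TimeflowClient:
    """Hypothetical client: every change to a dataset is tracked, so any
    point in its history is addressable by a bookmark."""

    def bookmark(self, env: str, label: str) -> str:
        # Name a point in the dataset's history.
        return f"{env}#{label}"

    def branch(self, bookmark: str, new_env: str) -> str:
        # Copy-on-write branch from the bookmarked state: near-zero cost.
        return new_env

    def rewind(self, env: str, bookmark: str) -> None:
        # Move the environment's time pointer back; no restore window.
        pass

client = TimeflowClient()

# Bookmark a clean baseline before the test run...
baseline = client.bookmark("test_env_alice", "pre-run")

# ...the run hits a defect: capture the exact failing state...
bug_state = client.bookmark("test_env_alice", "defect-1234")

# ...hand the developer a live branch of that state in minutes, not days...
client.branch(bug_state, "dev_env_defect_1234")

# ...and rewind to the baseline to keep testing. No one waits on anyone.
client.rewind("test_env_alice", baseline)
```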
Data is heavyweight. Data sizes will only get bigger. And the bigger data is, the harder it is to move and the higher the cost of error when something goes wrong. Hence the enormous timelines and contingency plans for anything that looks like a migration from one platform or place to another.
Arbitrage opportunity: If we can reduce not only the cost of data mobility but also the cost of data provisioning, then provisioning inherits all the benefits of mobility. Not only can we move datasets in time and space with radical ease, we can create net-new datasets in the same timeframes. That’s data elasticity on demand. DataOps tools give you that on-demand elasticity. It’s like VMware for data. Spin it up here; spin it down there; repeat.
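A sketch of that elasticity, again against a hypothetical provisioning API; wrapping it in a context manager makes a dataset’s lifetime exactly the lifetime of the job that needs it:

```python
from contextlib import contextmanager

class ElasticData:
    """Hypothetical provisioning client for a DataOps platform."""

    def clone(self, snapshot: str, name: str) -> str:
        return f"virtual-db:{name}"  # copy-on-write, near-instant

    def destroy(self, env: str) -> None:
        pass  # reclaims only this copy's changed blocks

@contextmanager
def ephemeral_dataset(client: ElasticData, snapshot: str, name: str):
    env = client.clone(snapshot, name)  # spin it up here...
    try:
        yield env
    finally:
        client.destroy(env)             # ...spin it down there; repeat

# A CI job gets a full-sized dataset for exactly as long as it needs one.
client = ElasticData()
with ephemeral_dataset(client, "erp_prod@nightly", "ci_job_42") as env:
    print(f"running integration tests against {env}")
```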
Data is passive. Yes, we update our data every day, but most of the big blob of data we manage is static. It doesn’t change much, and it doesn’t do much.
Arbitrage opportunity: If we can turn all of the “dead” data we have lying around in copy after copy into live, shared, active data, our data gets used to its maximum value. That reduces our storage cost, for sure, but it also makes it dead simple to move groups of related data together very rapidly (as we might do to get to the cloud, migrate data centers, and so on). With DataOps tools, moving a single database and moving a family of 100 related databases are within a few percentage points of being the same operation in terms of cost and time. Our concern becomes how related the datasets are, not how large they are, because every bit and byte is used to its maximum advantage.
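Some back-of-the-envelope arithmetic shows why (every number here is an assumption, purely for illustration): when related copies share one golden image, only the per-copy deltas scale with the number of databases.

```python
# Assumed sizes, for illustration only.
GOLDEN_IMAGE_GB = 5_000    # one physical image backing every virtual copy
DELTA_GB_PER_COPY = 2      # assumed average changed blocks per copy

def migration_cost_gb(num_databases: int) -> int:
    # Shared blocks move once; only per-copy deltas scale with count.
    return GOLDEN_IMAGE_GB + DELTA_GB_PER_COPY * num_databases

print(migration_cost_gb(1))    # 5,002 GB
print(migration_cost_gb(100))  # 5,200 GB: ~4% more data for 100x the databases
```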
How big is the arbitrage opportunity? Business owners should pay attention. Numbers from dozens of real customers and real projects show project timeline savings in the 30 to 50% range, significantly increased testing density, and a massive left shift in defect detection (including a net reduction in defects of 40% or more), on top of storage costs falling 80% or more. Imagine getting a real SAP or Oracle E-Business Suite project or migration done in half the time. Now imagine getting it done without any perceptible errors post-launch. What’s that worth to your business?
Data is fast and simple with DataOps.
Data is impersonal, insecure, expensive, tethered, heavyweight, and passive without it.
If you don’t have a DataOps tool and a strategy to help you exploit Data Arbitrage, get one.