You know the story. Internet connectivity, the digitization of everything, the increasing number of customer touchpoints, and the proliferation of data pouring in from an array of sensors and smart devices have created a data “embarrassment of riches.” In the utility industry alone, smart meters are generating 35,000 meter-reads per customer per year, and that doesn’t even take into account what the Internet of Things (IoT) will unlock inside the home and throughout the grid.
But I don’t want to bore you with exabytes. We’ve all heard some version of “The enterprise is drowning in data.” We get it. But I do bring this all up for a reason.
Why have we been storing all this data in the first place if not to make our businesses better?
Yet as obvious as that sounds, study after study has found a significant gap between the promise of Big Data – more data-driven decision making, better predictions, more proactive customer service, a nimbler enterprise – and companies' actual ability to use that data to improve business performance.
So why the gap? Or put another way, why are businesses finding it so hard to make their data useful?
Some of the challenges are fairly intuitive. Large companies have a lot of data, and it's often scattered across legacy systems and business units, making it hard to find, corral, and integrate. Then there is the issue of data quality – so much data is incorrect, incomplete, or hard to verify – especially when vast data sets come to a company through acquisition.
Here's an equally large stumbling block, and a less intuitive one: the very technology designed to solve the problem. Ironically, we have the technology today to make data useful, but the way the industry has packaged and delivered it has often gotten in the way.
A client of ours tells the story of a predictive-analytics vendor pushing a full-scale deployment of its software when neither the vendor nor the client knew yet what problem had to be solved. That is a telling episode.
We’ve learned there is a time and a place for such “full-scale deployment” of predictive analytics – that it is a great goal to have, but a tough place from which to start.
We find it much better to work with a client to identify one problem in one area of the business and train our collective data-science sights on solving that one problem. Then build on that success. Locking into a proprietary solution at the outset makes no sense, and, in fact, can preclude success – but oftentimes that is exactly what our industry has espoused.
Another hurdle encountered by organizations looking to unlock the value of their data is the lack of a dedicated data-science team. And while the current scramble to hire individual data analysts and data scientists may be a step in the right direction, it falls short of what is really needed – a diverse and experienced data-science team.
This diversity – the kind that brings the freshest thinking to every predictive-analytics challenge – is critical and, unfortunately, missing at most companies. From game theory to machine learning to artificial intelligence, it takes a multi-talented data-science team to canvass the waterfront of new ideas and approaches shaping predictive science and apply them successfully.
At TROVE, we've addressed these problems, and a host of others, with the release of our new Platform. Everything about it has been designed specifically to make it easier for clients to get to useful. You can read about the TROVE Platform here, but, before you do, let me leave you with a few thoughts on making data useful:
Every company is on its way to becoming a data company. But putting data to work, making it useful, is not easy. It’s not a capability you simply buy off the shelf – it’s one you invest in, learn from, tinker with and improve upon over time. Doing so requires the right platform – a fusion of the right tools, processes, experience and people – to solve the right problems, those that move the business.