Schumpeter wrote a column in last week’s Economist that did an excellent job of summarizing many of the big data sources I have addressed on this blog.  It is based on a McKinsey Global Institute report, “Big data:  The next frontier for innovation, competition, and productivity.”

Schumpeter references smart systems, mobiles, tera-connections, online behavior analytics, and logistics.  And Schumpeter raises a couple of questions:  will big data trample on the little guy, and is this data really useful?

The second question is critical to big data implementations in the future:  this stuff is not going to be easy or cheap to model, collect, load, clean, and analyze.  Mistakes in these areas are kinda embarrassing when you are trying to load petabytes of data:  no redos allowed.  And Schumpeter also questions the certainty of value: “Data-heads frequently allow the beauty of their mathematical models to obscure the unreliability of the numbers they feed into them…  They can also miss the big picture in their pursuit of ever more granular data.”

Of course, I would never argue against the value of granular data.  Most of the positive surprises I have experienced at customer sites have come directly from analyzing new, more detailed levels of data that had never been available before due to the huge volumes involved:  volumes that could not be handled before the data warehouse was created.

I’m pretty sure the “data-heads” have got it right on the value of big data, just as they did with the value of today’s modern data warehouse.