Why Have Open Data Initiatives Not Been More Successful?
It never fails to fascinate me how differently Open Data practitioners look at APIs compared to applied API practitioners (i.e., people who build APIs for specific business needs).
Having spent years in the Open Data world, where I designed and built APIs for large data publishers such as The World Bank, I sincerely appreciate the “liberate the data” mantra. Unfortunately, data “liberation” alone often falls short of the intended goals.
APIs shouldn’t be viewed as mere windows into data, nor as a slicing mechanism for data. Both views greatly oversimplify matters: they diminish APIs to a form of “typed, searchable CSVs on the Internet”. APIs can be, and should be, so much more!
APIs are not merely windows into your datasets. Neither should they be glorified HTTP searches against typed “CSVs”.
Clayton Christensen (the author of the seminal book The Innovator’s Dilemma) created a very interesting framework: Jobs to Be Done. Applied to Open Data, its takeaway is that people rarely need access to raw data per se. Data is just a means to something else. People need certain jobs to be done, and APIs that are mere windows to raw data do not directly get any jobs done.
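To make the contrast concrete, here is a minimal sketch of the same dataset exposed in the two styles. Everything in it — the dataset, the function names, the “literacy” indicator — is invented for illustration, not taken from any real publisher’s API:

```python
# Hypothetical sketch: one dataset, two API styles.
# All names and figures below are made up for illustration.

# Style 1 — a "window into raw data": the consumer must know the
# schema, do the filtering, and compute any answer themselves.
RAW_ROWS = [
    {"country": "KE", "year": 2020, "indicator": "literacy", "value": 81.5},
    {"country": "KE", "year": 2021, "indicator": "literacy", "value": 82.6},
    {"country": "UG", "year": 2021, "indicator": "literacy", "value": 79.4},
]

def query(indicator, **filters):
    """A 'typed, searchable CSV': returns matching raw rows."""
    return [
        row for row in RAW_ROWS
        if row["indicator"] == indicator
        and all(row.get(key) == val for key, val in filters.items())
    ]

# Style 2 — a jobs-to-be-done endpoint: it answers the question the
# consumer actually has ("is literacy improving here?") directly.
def literacy_trend(country):
    rows = sorted(query("literacy", country=country), key=lambda r: r["year"])
    if len(rows) < 2:
        return "insufficient data"
    return "improving" if rows[-1]["value"] > rows[0]["value"] else "declining"
```

The first style pushes all the analytical work onto every consumer; the second bakes one common job into the API itself, so `literacy_trend("KE")` yields an answer rather than rows to be post-processed.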
Publishing massive amounts of government data is definitely a positive initiative. However, simply publishing raw data has not led to the kind of innovation people have hoped for. There has been some healthy debate about the cause of this shortcoming. My theory is that just posting data online (even searchable data), without tailoring it to any jobs to be done, is simply not very helpful. Not for the majority of consumers, at least.
I hope there will be more debate on this in the future, and I am eagerly looking forward to it.