In 2008, Chris Anderson, then editor-in-chief of Wired magazine, wrote the controversial article ‘The End of Theory’. Anderson was fascinated by the rapid emergence of the technology we know today as ‘Big Data’: thanks to major advances in computing power and cloud storage, incredible amounts of data could suddenly be processed.

A kind of unbridled optimism sprang up about what Big Data could mean for mankind, science, and society. The basis of the scientific method is the concept of a ‘model’, and as the British statistician George Box once claimed: “All models are wrong, but some are useful.” Anderson predicted that we would soon be able to do without them: “We will, in the end, no longer need models. In the era of Big Data, the figures will speak for themselves.”

Meanwhile, Big Data technology has evolved in quantum leaps over the past decade. Fueled by the huge, almost addictive financial flows of the advertising world, an ‘art’ emerged of analyzing all available customer information and predicting purchasing behavior more accurately than ever. This allowed companies and brands to respond to a client’s needs and thus influence that behavior. We all know that companies like Google and Facebook have turned that technology into the most profitable economic machine since the oil business. Our firm belief in algorithms and in their capacity to understand ‘intuitively’ what would happen next grew stronger than ever.

But then COVID-19 came along, and we all had to face the fact that we were not very good at detecting this kind of ‘Black Swan’. Algorithms that have been trained on historical data, and which primarily recognize and predict known patterns, become hopelessly derailed when something totally unpredictable happens.

And yet... in the last two months, the amateur statistician in us all has been reawakened. To a man, we read and watch the latest mortality rates, how many people have been admitted to hospital and what percentage are in ICU. We have watched the curves develop and drawn all kinds of conclusions. On Facebook, heated discussions took place about the ‘R factor’, the virus’s reproduction number. Scientists, via social media, did their best to help us understand and contextualize the data.
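To make that ‘R factor’ concrete, here is a minimal, purely illustrative sketch: each infected ‘generation’ produces R times as many new cases as the one before it, so the curve only flattens when R drops to 1 or below. The starting values below are assumptions chosen for illustration, not real epidemiological estimates.

```python
# Purely illustrative: how the reproduction number R shapes an infection curve.
# The starting value and number of generations are assumptions, not real estimates.

def project_cases(initial_cases, r, generations):
    """Project new cases per infection generation under a constant reproduction number R."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r)  # each generation infects R times as many people as the last
    return cases

if __name__ == "__main__":
    for r in (0.7, 1.0, 1.3):
        curve = project_cases(initial_cases=1000, r=r, generations=6)
        print(f"R = {r}: {[round(c) for c in curve]}")
```

With R at 0.7 the numbers melt away, at 1.0 they plateau, and at 1.3 they keep climbing, which is exactly the difference everyone was squinting at in the daily charts.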

We can learn a great deal from this chapter and, as I see it, there are three key takeaways:

The first of these forms the basis: the quality of the collected data is crucially important. If we do not get the right data, then analysis is a waste of time. The old saying clearly applies here: garbage in, garbage out. We all remember collectively plotting out the infection curves, eager to see when they might flatten, when, all of a sudden, we were told in my home country of Belgium that hundreds of deaths in nursing homes had been left out of the count.

The second element is the possibility of finding out more with more data. Big Data can indeed help to map the patterns more effectively, to highlight correlations in the data and to better predict the next strategic steps. That is why testing is so crucial. At present, it is as clear as mud how many of us are actually COVID-19 positive, and not just in Belgium either. Measurement is knowledge. More widespread testing is more knowledge.
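A minimal sketch of why more testing means more knowledge: with more (representative) tests, the statistical uncertainty around our estimate of how many people are actually positive shrinks. The sample sizes and the 5% positivity rate below are assumptions chosen purely for illustration.

```python
# Illustrative only: the uncertainty around a prevalence estimate shrinks as testing expands.
# The sample sizes and the 5% positivity rate are assumptions, not real figures.
import math

def prevalence_interval(positives, tested, z=1.96):
    """Approximate 95% confidence interval for prevalence (normal approximation)."""
    p = positives / tested
    margin = z * math.sqrt(p * (1 - p) / tested)
    return p, max(0.0, p - margin), p + margin

if __name__ == "__main__":
    for tested in (100, 1_000, 10_000, 100_000):
        positives = round(tested * 0.05)  # assume 5% of tests come back positive
        p, low, high = prevalence_interval(positives, tested)
        print(f"{tested:>7} tests: estimated prevalence {p:.1%} (95% CI {low:.1%} to {high:.1%})")
```

The estimate itself stays around 5%, but the band of uncertainty around it narrows dramatically as the number of tests grows: measurement really is knowledge.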

The final element is the ability – with all that data, analysis, and correlation – to make the right decisions. Everywhere in the world, we have seen policymakers struggle to draw conclusions, we have seen them hesitate and dawdle, and it has been painfully clear that they lacked the competence to act decisively in the face of incomplete data.

If you put those three elements side by side, then you have to wonder how ‘contact tracing’ is going to happen over the next weeks and months. People are very worried about their privacy when it comes to apps that trace how much contact they have had with infected people (the sketch below illustrates how such tracing can be designed with privacy in mind). I totally get that, but this is not a ‘normal’ situation, and if Corona teaches us one thing it is that data has indeed become crucial in every facet of our lives. Companies will need to harness the power of data even more in order to understand their customers, to manage supply chains, and to make their strategy agile and flexible. But, above all, business managers will need to have the know-how and expertise to make decisions with ever more data, albeit incomplete.
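To show that contact tracing and privacy are not necessarily at odds, here is a deliberately simplified sketch of the idea behind the decentralized tracing designs being discussed at the time: phones exchange random, rotating tokens rather than identities, and only the tokens of users who later test positive are published so that every phone can check for matches locally. The class and token format below are illustrative assumptions, not a real deployed protocol.

```python
# Deliberately simplified sketch of privacy-preserving contact tracing:
# phones exchange random rotating tokens instead of identities; later, the tokens of
# users who tested positive are published and each phone checks for matches locally.
# This illustrates the concept only; it is not an actual protocol implementation.
import secrets

class Phone:
    def __init__(self):
        self.own_tokens = []    # tokens this phone has broadcast
        self.heard_tokens = []  # tokens received from nearby phones

    def new_token(self):
        token = secrets.token_hex(8)  # random and unlinkable to the owner's identity
        self.own_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.append(token)

    def check_exposure(self, published_positive_tokens):
        return any(t in published_positive_tokens for t in self.heard_tokens)

if __name__ == "__main__":
    alice, bob = Phone(), Phone()
    bob.hear(alice.new_token())               # Alice and Bob were near each other
    positive_tokens = set(alice.own_tokens)   # Alice tests positive; her tokens are published
    print("Bob exposed?", bob.check_exposure(positive_tokens))  # True, without naming Alice
```

The point of the design is that no central party ever learns who met whom; the matching happens on each person’s own device.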

That need for know-how applies, in particular, to our governments. No other organization sits on as much data as they do. If we were to use it better, governments all over the world could become much quicker, much more efficient, and more decisive. Investing in better data processing, in more capacity to spot causal links and, above all, in the expertise of policymakers to make information-based decisions fit for this century is crucial. Big Data has suddenly become deadly serious. It’s about Serious Data today.