Back in January, my wife and I were driving back from an evening event. Heading down the highway toward home, she let out a gasp: "Look at all the apples!" Thousands of apples were strewn along the shoulder over about a quarter of a mile. Amazingly, the roadway itself was free of them and their potentially slick mess. As we passed them by, I couldn't help but wonder how they'd gotten there and who was going to clean them up. We remarked on the oddity for a few minutes, but soon arrived home and got distracted by other things.
This story returned to my mind recently as I was thinking about the difference between evidence and data. Before long, the ideas of knowing and understanding started edging in from the sides, and I figured I had something worth writing down.
Big Data is all the rage right now, and I agree that it's fascinating to explore the ways we can visualize and interact with unbelievable amounts of information. The trend is also shining new light on quantitative analysis and data-driven decision-making. But as a highly empathetic, people-focused designer, I find some of what I see and hear, both near me and in the field generally, concerning. Before I say why, I want to make clear that I love examining and playing with data. Having lots of strong data is the kind of aid we only dreamt of as recently as 7 years ago, and making this information visible and accessible is a huge boon to our craft. But it bears wielding thoughtfully, with some critical considerations kept in mind.
First, there's a difference between data and evidence. Consider the apples. The only thing I knew after driving past them was that they were there and that there were a lot of them. What didn't I know? How they got there. Were people involved? Who? Was it an accident or on purpose? Did the apples come from a truck? Did the driver know what happened? ...You get the idea. I couldn't have concluded anything from the data except that apples were on the road and I needed to be careful. The data gave me no evidence of source or cause. Some other sensory element would be needed, like fruit cargo boxes or a truck with a blowout. I should point out here, though, that sometimes data tells us something useful that we weren't looking for. Evidence tells us that something specific probably happened, and then we find data that helps us see how. And sometimes data that merely seems interesting can be linked to something useful, as in an example from the video above that used "I just landed in..." tweets to understand how people move, which could in turn help us trace how diseases spread through travel. But most of the time something more is needed, something that points toward being able to tell what happened, which leads me to...
...the second point: evidence is a precursor to the more important stages of knowing and understanding. As stronger links between data and evidence raise our degree of certainty, we can begin to know what happened or is happening. Once we begin to know that evidence (the traces of people and events) means something, that in turn points to the characteristics surrounding the something, such as intent and method. Then we can get glimpses of why and start to truly understand what led to the production of the data, which in turn can inform effective decision-making toward useful solutions. Data by itself can never really tell us why something happened. It needs additional contextual information, sometimes a great deal of it. Proceeding without that context can be very risky, even dangerous.
That's because these elements (data, evidence, knowing, and understanding) all work together, like a formula. We almost always have varying amounts of each, and we rarely know when we have enough of any of them, let alone all we need, to move confidently in a certain direction; still, we have to strive for as much of each as possible. Perhaps the elements work best in ratio to each other: lesser amounts of one or more should make us less certain. It's like exploring with a sense or two malfunctioning or missing; we lack input that could be crucial to the best outcome.
Also, within that formula, mixed into knowing and understanding, there's the quasi-independent substance of empathy. It doesn't need much data, or any, to be activated. That's interesting, because when big data is touted as a salvation tool, empathy is often left out of the discussion. It might be that data vs. empathy is the new design vs. code. Like the old fight, though, the new one is a false argument. Both are needed: empathy can tell us when our data might be skewed, myopic, or incomplete, and data can lend empathy certainty. Empathy is almost always an invaluable asset and can often be a failsafe against disastrous action.
So, where am I going with all this? Basically to this point: Big Data is a great tool. As we always seem to do with great tools, though, we are tempted to load it with too much potential too early. What we really ought to do is learn how to use it well before we declare what it's capable of. Let's learn what it will and won't do through wise, iterative experiments: ones in which we acknowledge the context, evidence, knowing, and understanding that are or aren't there, and ones that include empathy for the people represented by the data. Most of all, let's not throw away other tools in the toolkit just to make room for this one. Not everything is a nail for the Big Data hammer to pound on.
I'll have more to say on this later.