Big Data and the End of Free Will


Rev. Clay Nelson


Rev. Clay Nelson © 9 July 2017

In the mid-’90s I had oversight of fifty congregations spread over the bottom quarter of California. Amongst other duties I was charged with helping them grow and better meet their missions in their particular contexts. To do that I engaged a company that would help provide up-to-date information about the communities they served. To begin, I had to drive to each church, put three antennae on the roof of my car and wait for a black box to tell me it had found three satellites above the horizon. Then I pressed a button that recorded the position of that church on the globe. This was GPS in 1995. Today, I would just go to Google Maps, which could give me the exact latitude and longitude of each. Then I sent the box back to the company, which downloaded the information. In return, they sent me dozens of maps for each church showing the demographics of the neighbourhoods that surrounded them. I remember being fascinated that so much data could be provided, but that was nothing compared with what is available today. Now those maps would tell me the demographics of each residence on the street, not just the neighbourhood. This is the world of Big Data we now live in. Like it or not, it is here. Silicon Valley calls it Dataism.

It has major implications beyond our privacy, which is usually our first concern. Big Data is shifting the paradigm of how the world works, our place in it, and how much free will might exist in it.

For thousands of years humans believed that authority came from the gods. Then, during the modern era, humanism gradually shifted authority from deities to people. Jean-Jacques Rousseau summed it up when, looking for the rules of conduct in life, he found them “in the depths of my heart, traced by nature in characters which nothing can efface. I need only consult myself with regard to what I wish to do; what I feel to be good is good, what I feel to be bad is bad.” Humanist thinkers such as Rousseau convinced us that our own feelings and desires were the ultimate source of meaning, and that our free will was, therefore, the highest authority of all. Now, a fresh shift is taking place. According to the acclaimed Israeli historian Yuval Noah Harari, “Just as divine authority was legitimised by religious mythologies, and human authority was legitimised by humanist ideologies, so high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimises the authority of algorithms and Big Data. In its extreme form, proponents of the Dataist worldview perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system — and then merge into it.”

This is a radical shift. It seems obvious to us that we have free will. When I have made a decision, say, to come to church this morning, I feel that I could have chosen to do something else. Yet many philosophers say this instinct is wrong. According to their view, free will is a figment of our imagination. No one has it or ever will. Rather, our choices are either determined — necessary outcomes of the events that have happened in the past — or they are random.

Our intuitions about free will, however, challenge this nihilistic view. We could, of course, simply dismiss our intuitions as wrong. But psychology suggests that doing so would be premature: our hunches often track the truth pretty well. For example, if you do not know the answer to a question on a test, your first guess is more likely to be right than a later second-guess. In both philosophy and science, we may feel there is something fishy about an argument or an experiment before we can identify exactly what the problem is.

The debate over free will is one example in which our intuitions conflict with scientific and philosophical arguments. Something similar holds for intuitions about consciousness, morality, and a host of other existential concerns. Typically, philosophers deal with these issues through careful thought and discourse with other theorists. In the past decade, however, a small group of philosophers have adopted more data-driven methods to illuminate some of these confounding questions. These experimental philosophers administer surveys, measure reaction times and image brains to understand the sources of our instincts. If we can figure out why we feel we have free will, for example, or why we think that consciousness consists of something more than patterns of neural activity in our brain, we might know whether to give credence to those feelings. That is, if we can show that our intuitions about free will emerge from an untrustworthy process, we may decide not to trust those beliefs.

Jerry Coyne, an evolutionary biologist at the University of Chicago, defined free will as the possibility that, after making a decision, you could have chosen otherwise. But a “decision,” Coyne argues, is merely a series of electrical and chemical impulses between molecules in the brain — molecules whose configuration is predetermined by genes and environment. Though each decision is the outcome of an immensely complicated series of chemical reactions, those reactions are governed by the laws of physics and could not possibly turn out differently. “Like the output of a programmed computer, only one choice is ever physically possible: the one you made,” Coyne wrote.

The implications are vast. If we could map your brain in its entirety and know your complete genetic makeup, Coyne suggests, we could in theory predict your response to any given situation with 100% accuracy.
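Coyne’s computer analogy can be made concrete. Below is a toy sketch of my own (nothing Coyne wrote) of what a purely deterministic “decision” looks like in code: identical prior causes always produce the identical choice, no matter how many times you run it. The inputs, and the hash standing in for brain chemistry, are invented purely for illustration.

```python
# Toy illustration of determinism: a "decision" as a pure function of
# prior causes. Same genes, environment and brain state in; same choice out.
import hashlib

def decide(genes: str, environment: str, brain_state: str) -> str:
    """Deterministically map prior causes to a 'choice'."""
    # The hash plays the role of the 'immensely complicated series of
    # chemical reactions': opaque and complex, but fully determined.
    digest = hashlib.sha256((genes + environment + brain_state).encode()).digest()
    options = ["go to church", "stay in bed"]
    return options[digest[0] % len(options)]

# Identical priors yield the identical 'decision', every single time.
for _ in range(3):
    print(decide("ACGT...", "Sunday morning, raining", "slept well"))
```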

Dataists believe that given enough biometric data and computing power, this all-encompassing system could understand humans much better than we understand ourselves. Once that happens, humans will lose their authority, and humanist practices such as democratic elections will become as obsolete as rain dances and flint knives.

When Michael Gove announced his short-lived candidacy to become Britain’s prime minister in the wake of the Brexit vote, he explained: “In every step in my political life I have asked myself one question, ‘What is the right thing to do? What does your heart tell you?’” That’s why, according to Gove, he had fought so hard for Brexit, and that’s why he felt compelled to backstab his erstwhile ally Boris Johnson and bid for the alpha-dog position himself — because his heart told him to do it. Gove is not alone in listening to his heart in critical moments. For the past few centuries humanism has seen the human heart as the supreme source of authority not merely in politics but in every other field of activity. From infancy, we are bombarded with a barrage of humanist slogans counselling us: “Listen to yourself, be true to yourself, trust yourself, follow your heart, do what feels good.”

In politics, we believe that authority depends on the free choices of ordinary voters. In market economics, we maintain that the customer is always right. Humanist art thinks that beauty is in the eye of the beholder; humanist education teaches us to think for ourselves; and humanist ethics advise us that if it feels good, we should go ahead and do it. Of course, humanist ethics often run into difficulties in situations when something that makes me feel good makes you feel bad.

Harari gives an example, “Every year for the past decade the Israeli LGBT community has held a gay parade in the streets of Jerusalem. It is a unique day of harmony in this conflict-riven city, because it is the one occasion when religious Jews, Muslims and Christians suddenly find a common cause — they all fume in accord against the gay parade. What’s really interesting, though, is the argument the religious fanatics use. They don’t say: ‘You shouldn’t hold a gay parade because God forbids homosexuality.’ Rather, they explain to every available microphone and TV camera that ‘seeing a gay parade passing through the holy city of Jerusalem hurts our feelings. Just as gay people want us to respect their feelings, they should respect ours.’”

It doesn’t matter what you think about this particular conundrum; it is far more important to understand that in a humanist society, ethical and political debates are conducted in the name of conflicting human feelings, rather than in the name of divine commandments. We are already becoming tiny chips inside a giant system that nobody really understands. Yet humanism is now facing an existential challenge, and the idea of “free will” is under threat. Scientific insights into the way our brains and bodies work suggest that our feelings are not some uniquely human spiritual quality. Rather, they are biochemical mechanisms that all mammals and birds use in order to make decisions by quickly calculating probabilities of survival and reproduction.

When we see a lion, fear arises because a biochemical algorithm calculates the relevant data and concludes that the probability of death is high. Similarly, feelings of sexual attraction arise when other biochemical algorithms calculate that a nearby individual offers a high probability of successful mating. These biochemical algorithms have been honed over millions of years of evolution.
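To see what “calculating probabilities of survival” might mean mechanically, here is a hedged toy model: a few sensory cues are weighed and squashed into a probability, and “fear” is simply the response when that probability crosses a threshold. The cues, weights and threshold are all invented; real neural computation is vastly richer.

```python
# Toy model of a 'biochemical algorithm': a feeling as a fast
# probability estimate. All numbers here are made up for illustration.
import math

def probability_of_death(distance_m: float, lion_size: float) -> float:
    """Map sensory cues to a survival-risk estimate via a logistic curve."""
    score = 3.0 - 0.1 * distance_m + 1.5 * lion_size  # closer/bigger -> riskier
    return 1.0 / (1.0 + math.exp(-score))             # squash into [0, 1]

def feeling(distance_m: float, lion_size: float) -> str:
    p = probability_of_death(distance_m, lion_size)
    return "FEAR -- run!" if p > 0.5 else "calm"

print(feeling(distance_m=10.0, lion_size=1.0))   # FEAR -- run!
print(feeling(distance_m=80.0, lion_size=1.0))   # calm
```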

Even though humanists were wrong to think that our feelings reflected some mysterious “free will”, up until now humanism still made very good practical sense. For although there was nothing magical about our feelings, they were nevertheless the best method in the universe for making decisions — and no outside system could hope to understand my feelings better than me.

However, in a world of Google and Facebook humanism loses its practical advantages. We are being swamped by two scientific tidal waves: biologists and neuroscientists are deciphering the mysteries of the human body, brain and feelings, while computer scientists are giving us previously unimaginable data-processing power. Put together, you get external systems that can understand our feelings better than we can. Once Big Data knows me better than I know myself, authority will shift from humans to algorithms. Once that has happened, Big Brother is empowered. In simple ways, it has already happened.

There was a time when, if I wanted a book, I would go to a bookstore. I’d wander the aisles, flip open the books to read the first few sentences, check out the covers, until one of them grabbed me. Now, I more often go to Amazon online, where a message pops up and tells me what book I might like based on other books I have bought. This is just the beginning. Kindle is able to constantly collect data on its users while they are reading books. It can monitor which parts of a book you read quickly, and which slowly; on which page you took a break; and on which sentence you abandoned the book, never to pick it up again. If Kindle were to be upgraded with face-recognition software and biometric sensors connected to your Apple Watch, it would know how each sentence influenced your heart rate and blood pressure. It would know what made you laugh, what made you sad, what made you angry. Soon, books will read you while you are reading them. Eventually, Amazon could choose books for you with uncanny precision.
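For readers curious what such data collection might look like under the hood, here is a speculative sketch of a Kindle-like reading log. Every field, method and threshold is an assumption of mine; Amazon has not published how its instrumentation actually works.

```python
# Speculative sketch (Python 3.9+) of the reading telemetry described
# above. All names and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ReadingSession:
    book_id: str
    page_seconds: dict[int, float] = field(default_factory=dict)  # page -> dwell time
    last_page: int = 0

    def log_page(self, page: int, seconds: float) -> None:
        """Record how long the reader lingered on a page."""
        self.page_seconds[page] = seconds
        self.last_page = max(self.last_page, page)

    def slow_pages(self, threshold: float = 120.0) -> list[int]:
        """Pages read unusually slowly: candidate gripping (or difficult) spots."""
        return [p for p, s in self.page_seconds.items() if s > threshold]

    def abandoned(self, total_pages: int) -> bool:
        """Reader stopped well short of the end."""
        return self.last_page < 0.5 * total_pages

session = ReadingSession("B00EXAMPLE")
session.log_page(1, 45.0)
session.log_page(2, 180.0)   # lingered here
session.log_page(3, 30.0)
print(session.slow_pages())                # [2]
print(session.abandoned(total_pages=300))  # True
```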

Take this to its logical conclusion and algorithms become your ultimate authority. Don’t know whom to marry? What job to take? Where to go on holiday? Ask Google. It will know us better than we know ourselves. Then post the answer on Facebook, so it can join the billions of bits that billions of others are contributing through social media, emails, phone calls and blogs to this system we don’t understand. The Dataist would say we don’t have to understand it, just answer our email faster.

I’m not sure I am enamoured with this brave new world. While I like having sufficient data to answer my questions, I’m less interested in being reduced to data myself, often unwittingly. And while some philosophers argue that my intuition that I have free will is an illusion, I prefer to think that I am still a higher authority than Google is, or ever will be, on deciding what is the greater good for me. To do that I will have to know myself better than an algorithm does. I guess I have no other choice.