Ethics and the Algorithm

Behind every piece of code that drives our decisions is a human making judgments — about what matters and what does not.

Editor’s Note: This article is one of a special series of 14 commissioned essays MIT Sloan Management Review is publishing to celebrate the launch of our new Frontiers initiative. Each essay gives the author’s response to this question:

“Within the next five years, how will technology change the practice of management in a way we have not yet witnessed?”

Are we designing algorithms, or are algorithms designing us? How sure are you that you are directing your own behavior? Or are your actions a product of a context that has been carefully shaped by data, analysis, and code?

Advances in information technology certainly create benefits for how we live. We can access more customized services and recommendations; we can outsource mundane tasks like driving, vacuuming floors, buying groceries, and picking up food. But there are potential costs as well. Concerns over the future of jobs have led to discussions about a universal basic income — in other words, a salary just for being human. Concerns over the changing nature of social interaction have covered topics ranging from how to put your phone down and have a face-to-face conversation with someone to the power dynamics of a society where many people are plugged into virtual reality headsets. Underlying these issues is a concern for our own agency: How will we shape our futures? What kind of world will information technology help us create?

Advances in information technology have made the use of data — principally data about our own behaviors — ubiquitous in the online experience. Companies tailor their offerings based on the technology we employ: a few years ago, for example, the travel website Orbitz was found to be steering Mac users toward higher-priced travel options than it showed PC users. Dating sites like eHarmony and Tinder suggest partners based on both our stated and implied preferences. News stories are suggested based on our previous reading habits and our social network activities. Yahoo, Facebook, and Google tailor the order, display, and ease of choices to influence us to spend more time on their platforms, so they can collect even more data and further intermediate our daily transactions.

Increasingly, our physical world is also being influenced by data. Consider self-driving cars or virtual assistants like Siri and Amazon’s Echo. There are even children’s toys like Hello Barbie that listen, record, and analyze your child’s speech and then customize interactions to fit your child.

As our lives become deeply influenced by algorithms, we should ask: What kind of effect will this have?

First, it’s important to note that the software code used to make judgments about us based on our preferences for shoes or how we get to work is written by human beings, who are making choices about what that data means and how it should shape our behavior. That code is not value neutral — it contains many judgments about who we are, who we should become, and how we should live. Should we have access to many choices, or should we be subtly influenced to buy from a particular online vendor?
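To make that point concrete, consider a minimal, hypothetical sketch (in Python) of the choice a developer faces when ranking products for a shopper. Nothing here comes from any real retailer's system; the names and the single weight parameter are invented for illustration. The point is that the weight is a value judgment, written by a person, that decides whether the ranking serves the shopper or the vendor.

    # Hypothetical illustration only: a toy product-ranking function.
    # The weight below does not come from any real system; it stands in
    # for the kind of human judgment embedded in real ranking code.

    from dataclasses import dataclass

    @dataclass
    class Product:
        name: str
        predicted_relevance: float  # how well it matches the user's apparent preferences
        vendor_margin: float        # how much the platform earns if this item sells

    def rank(products, margin_weight=0.0):
        """Order products for display.

        margin_weight is a value judgment made by a person:
        0.0 ranks purely on what the user seems to want;
        anything higher quietly trades the user's interest for the platform's.
        """
        return sorted(
            products,
            key=lambda p: (1 - margin_weight) * p.predicted_relevance
                          + margin_weight * p.vendor_margin,
            reverse=True,
        )

The shopper never sees that number, yet it quietly answers the question posed above: many genuine choices, or a nudge toward a particular vendor.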

Think of the ethical challenges of coding the algorithm for a self-driving car. Under certain unfortunate circumstances, where an accident cannot be avoided, the algorithm that runs the car will presumably have to make a choice about whether to sacrifice its occupants or risk harming — maybe even fatally — passengers in other cars or pedestrians. How should developers write this code? Despite our advances in information technology, data collection, and analysis, our judgments about morality and ethics are just as important as ever — maybe even more important.

We need to figure out how to have better conversations about the role of purpose, ethics, and values in this technological world, rather than simply assuming that these issues have been solved or that they don’t exist because “it’s just an algorithm.” Questions about the judgments implicit in machine-driven decisions are more important than ever if we are to choose how to live a good life. Understanding how ethics affect the algorithms and how these algorithms affect our ethics is one of the biggest challenges of our times.

This article was originally published on July 13, 2016. It has been updated to reflect edits made for its inclusion in our Fall 2016 print edition.

Reprint #: 58106
