Why ‘Explicit Uncertainty’ Matters for the Future of Ethical Technology

What if algorithms were built around users’ objectives rather than the company’s end goals?



Developing an Ethical Technology Mindset

The demands of a digitized workforce put transparency, ethics, and fairness at the top of executive agendas. This MIT SMR Executive Guide explores how managers and organizations can apply principles of ethical and trustworthy technology in engaging with customers, employees, and other stakeholders.



The biggest concerns over AI today are not about dystopian visions of robot overlords controlling humanity. Instead, they’re about machines turbocharging bad human behavior. Social media algorithms are one of the most prominent examples.

Take YouTube, which over the years has implemented features and recommendation engines geared toward keeping people glued to their screens. As The New York Times reported in 2019, many content creators on the far right learned that they could tweak their content to make it more appealing to the algorithm, driving users to watch progressively more extreme material. YouTube has taken action in response, including efforts to remove hate speech. An independently published study in 2019 claimed that YouTube’s algorithm was doing a good job of discouraging viewers from watching “radicalizing or extremist content.” Yet as recently as July 2021, new research found that the platform was still sowing division and helping to spread harmful disinformation.

Twitter and Facebook have faced similar controversies. They’ve also taken similar steps to address misinformation and hateful content. But the initial issue remains: The business objective is to keep users on the platform. Some users and content creators will take advantage of these business models to push problematic content.

Algorithms like YouTube’s recommendation engine are programmed with an end goal: engagement. Machine learning then adapts and optimizes based on user behavior to serve that goal. If certain content spurs higher engagement, the algorithm will naturally recommend the same content to other people, all in service of the single objective it was given.
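
To make that dynamic concrete, here is a minimal Python sketch of a greedy, engagement-maximizing recommender. The class, its exploration rate, and its update rule are illustrative assumptions for this article, not YouTube’s actual system.

```python
import random
from collections import defaultdict

class EngagementRecommender:
    """Hypothetical recommender with one fixed objective: engagement."""

    def __init__(self, items, explore_rate=0.1):
        self.items = list(items)
        self.impressions = defaultdict(int)  # times each item was shown
        self.engagements = defaultdict(int)  # times viewers stayed engaged
        self.explore_rate = explore_rate

    def recommend(self):
        # Occasionally explore; otherwise exploit the single metric.
        if random.random() < self.explore_rate:
            return random.choice(self.items)
        return max(self.items,
                   key=lambda i: self.engagements[i] / max(self.impressions[i], 1))

    def record(self, item, engaged):
        # The only feedback that matters is whether the viewer kept watching.
        self.impressions[item] += 1
        if engaged:
            self.engagements[item] += 1
```

Once a single item posts a high engagement rate, the recommend method returns it to nearly every user. That is the amplification problem in miniature: one metric, optimized globally, shapes what millions see.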

This can have far-ranging effects for society. As Sen. Chris Coons of Delaware put it in April 2021 when executives from YouTube, Facebook, and Twitter were testifying before Congress, “These algorithms are amplifying misinformation, feeding political polarization, and making us more distracted and isolated.”

To address this issue, companies and leaders must consider the ethical implications of technology-driven business models. In the example of social media, how differently might an algorithm work if it instead had no end goal?

Avoiding Fixed Objectives

In a report for the Center for Human-Compatible AI, we call for a new model for AI. It’s built around what may seem like a radical idea: explicit uncertainty. Using this model, the algorithm has no intrinsic objective. Instead, its job is to discover the objectives of its users at every step.

When AI is built with this model, it’s less likely to do severe damage. One user watching and liking a piece of content would not lead the algorithm to recommend that same content to millions of others, because there would be no fixed engagement goal driving it to do so.

Under this model, the algorithm is more like an open book for each user. To find out users’ preferences, it might prompt them relatively frequently, especially in the early days of usage, asking them what they’d like to see. For example, a video or article might be accompanied by the prompt, “Would you like to see a fact check on this topic from one of the following sources?” If the user selected an option, the AI would learn that the user liked that kind of offering and could subsequently suggest similar resources. To some extent, we already see this in navigation systems that ask, “Do you want an alternate route to save 20 minutes?” Such prompts push users to stop and consider what’s best and most useful for them.
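
A minimal sketch of that ask-first behavior follows. The PreferenceLearner class, its content categories, and the prompt wording are hypothetical, intended only to show how uncertainty about a user’s preferences can drive the questions.

```python
from collections import defaultdict

class PreferenceLearner:
    """Per-user model that starts uncertain and asks instead of assuming."""

    def __init__(self, confidence_threshold=3):
        self.votes = defaultdict(int)  # net yes/no answers per content category
        self.confidence_threshold = confidence_threshold

    def is_uncertain(self, category):
        # Ask frequently early on, while the preference is still unclear.
        return abs(self.votes[category]) < self.confidence_threshold

    def offer_fact_check(self, category):
        if self.is_uncertain(category):
            answer = input(f"Would you like to see a fact check on {category}? (y/n) ")
            self.votes[category] += 1 if answer.strip().lower().startswith("y") else -1
        # Suggest similar resources only once the user has said yes on balance.
        return self.votes[category] > 0
```

The design choice worth noting is that the system’s uncertainty about this one user, not a business metric, determines when it asks and when it stops asking.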

Or imagine a factory in which an algorithm operates machinery. If the algorithm has a fixed objective of cranking out as many products as possible, it might start moving too quickly, risking more workplace accidents and injuries. If it instead had no fixed objective to begin with, it would learn from workers’ behavior how to operate in concert with them, optimizing product output at the pace they preferred.
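
Here is one way such a controller might look in Python. This is a sketch under the assumptions above (a single line-speed setting and workers whose sustained pace is observable), not an industrial control design.

```python
class LineSpeedController:
    """Treats the 'right' line speed as unknown and learns it from workers."""

    def __init__(self, initial_speed=1.0, learning_rate=0.2):
        self.speed = initial_speed          # units per minute, illustrative
        self.learning_rate = learning_rate

    def observe_worker_pace(self, worker_pace):
        # Nudge toward the pace workers actually sustain, rather than
        # pushing toward a fixed throughput target.
        self.speed += self.learning_rate * (worker_pace - self.speed)
        return self.speed

# Example: the line drifts toward the workers' demonstrated pace.
controller = LineSpeedController()
for pace in [0.8, 0.9, 0.85, 0.9]:
    controller.observe_worker_pace(pace)
print(round(controller.speed, 2))  # 0.92, down from the initial 1.0
```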


Embracing a Bigger Role for Humans

By designing algorithms in this more user-centric way, an organization could make it much more difficult for people with nefarious purposes to game the system and affect what other users experience. The people who build these algorithms could also incorporate ways to discover user preferences, and perhaps even encourage users to actively express them.

Of course, human biases and failings will always be a problem. It’s impossible to mitigate every potential harm for every product humans create, AI included.

But this new model would help by making explicit uncertainty a norm. It would push companies and leaders to think through decisions and business-model choices with a longer-term view of users and potential unintended consequences.

We can make AI more agile and responsive to individuals’ needs and subject to their choices. With algorithms increasingly powering our world, it’s time to change course.

