Designing AI Systems That Customers Won’t Hate
Privacy concerns get most of the attention from tech skeptics, but powerful predictive algorithms can generate serious resistance by threatening consumer autonomy. Three safeguards can help.
The nexus of big data analytics, machine learning, and AI may be the brightest spot in the global economy right now. The McKinsey Global Institute estimates that the use of AI will add as much as $13 trillion to global GDP by 2030.1 The noneconomic benefits to humankind will be equally dramatic, leading to a world that is safer (by reducing destructive human error) and offers people a better quality of life (by reducing the time they spend on tedious tasks such as driving and shopping). Even if the coming automation-driven disruption of labor markets is as serious as many fear, we are still, on balance, likely to be better off than we are today.
But not everyone is convinced. Negative predictions center on two overarching concerns that are related yet distinct. First, there is the issue of data privacy. After all, AI runs on data, and people are understandably uneasy about what automation technologies are learning about them and how their private information might be used. Privacy in the digital age has been extensively researched and written about, and companies are devoting increasing attention to allaying their customers’ fears.2
However, there is another consideration that many companies have yet to think about seriously: autonomy. Though autonomous technology has a large and growing range of potential applications, when taken too far it may also threaten users’ sense of autonomy and free will: their belief that they are free to decide how to live their lives. A recent study found that when customers believed their future choices could be predicted from their past choices, they chose less-preferred options. In other words, consumers violated their own preferences in order to reestablish their sense of autonomy by not choosing predictably.
The tension between technology and free will is not new; it was described by Fyodor Dostoevsky’s Underground Man, the alienated protagonist of the 1864 novel Notes From Underground. Even in a utopian society, he argued, humans would rebel purely to prove “that men are still men and not the keys of a piano.”
Unfortunately, the threat people perceive is likely to intensify as technological innovation accelerates and autonomous devices move into new areas of customers’ lives. We recently reached this conclusion after surveying diverse perspectives from philosophy, marketing, economics, and other fields.3 The good news is that our analysis also identified recommendations for how companies can develop AI technologies that their customers won’t hate. Those recommendations fall into three categories.
Safeguard Your Customers’ Individuality
A key aspect of autonomy is individuality — the idea that each of us has a unique core identity and that we’re not interchangeable. This idea is especially prominent in, though not confined to, developed Western societies. But we now live in an age of algorithms that can make highly accurate predictions about individual preferences and behaviors from data sourced from the masses, and that capability can challenge consumers’ sense of individuality.
For example, companies are increasingly using AI-enabled customer service bots to furnish basic responses to problems and questions. For many customers, the bots are more convenient; they can resolve issues for many more people in far less time than a call center representative could. However, some consumers resist interacting with bots. When people need help with a problem, many crave personal attention. They have likely been conditioned by negative experiences with earlier, less-effective technologies such as call centers that route them through a standardized set of menu options and never let them speak to an actual person. More broadly, many people dislike being treated as though their problems are generic enough to warrant standardized solutions from a bot.
One recent study found that potential patients preferred to be seen by a human doctor or nurse rather than a robot, even when they were informed that the robot would provide a more accurate diagnosis, because they felt the machine would not be able to account for their unique situations.4 (Admittedly, health care decisions have considerably higher stakes than the typical consumer purchase, but the same principle applies.)
Some technology advocates believe that this problem may resolve itself, because AI applications could soon advance to the point where consumers cannot distinguish between a human and a robot. But even this strategy has the potential to backfire, as happened when Google debuted Duplex, a virtual assistant capable of mimicking human speech with uncanny accuracy; the demonstration swiftly drew a public backlash.5
But such verisimilitude may not be necessary when the right design measures are taken. For instance, AI bots can be built to assuage autonomy-based fears by taking small steps to personalize each encounter. Even a modest personal touch can have an outsize effect: A 2016 study found that consumers showed a higher affinity for a host of products, such as coffee and fountain pens, when the maker (e.g., a human barista or corporate representative) addressed them by their first names.6 In other ongoing research, customers in randomized controlled trials made fewer calls to a service center when an algorithm provided simple explanations for why it had rejected their transactions than when it provided no explanation.7
Similarly, companies can safeguard customers’ individuality by designing interactions — even with an AI-enabled bot — in ways that give users some sense of agency over the process. It’s akin to the well-known Ikea effect, in which the effort of personally assembling flat-packed furniture helps buyers feel more bonded to their purchases. When companies attempt to make consumer interactions extremely smooth and convenient, they could be missing out on opportunities to let consumers express their individuality and autonomy — and thus deepen their connection with the product or service. Artificial intelligence is more palatable when it allows users to feel that their experience is somehow theirs and no one else’s.
Some companies may be tempted to short-circuit this phenomenon by offering consumers the appearance of interaction, rather than the actual thing.8 In one research study, patients in a dental office were soothed by the mere presence of a button they could press to communicate that they were in pain. Even though the buttons were rarely used, the sense of autonomy they provided led to a consistent reduction in the level of pain that patients felt.9
The takeaway is that even completely ineffectual choice-making levers can pacify people’s desire to exercise control over their environments. However, most customers are unlikely to appreciate feeling duped should the deception ever be exposed, and controls with some real, if minor, impact are preferable to entirely illusory ones. Ultimately, how individual engagement is solicited matters considerably less than whether it is solicited at all.
Safeguard Your Customers’ Freedom Not to Be Predictable
A key aspect of freedom is the ability to change one’s mind and abandon preestablished behavior patterns — what philosophers call “the freedom to do otherwise.” For AI developers, this presents an inherent problem, because even the most finely honed machine learning algorithms will find it difficult to anticipate human caprice. The more that consumers feel a company expects them to act by following their past choices (for example, receiving highly specific product recommendations based on their past shopping and browsing behavior), the more some will want to break with those choices.
Canny messaging can help. Our ongoing studies show that consumers’ perceived threats to their autonomy declined when their behavior was described as “consistent” (which has a positive connotation) rather than “predictable” (which is negative). Given that difference, companies are less likely to trigger consumers’ resistance with recommendations if they’re positioned as invitations to complete an ongoing task or acquire more in-depth expertise. (For example, Amazon could solicit consumers who have bought only part of the Lord of the Rings book series to “continue exploring Tolkien” or “learn all there is to know about the series.”) Such an approach would implicitly reward consumers for continuing to follow a chosen path, rather than pushing them to deviate from it in order to assert their autonomy.
That said, sometimes deviating from past habits is in consumers’ best interest. For example, consider a newly health-conscious consumer who recently cut back on sugary sodas due to health concerns. If that person arrived home to find that Amazon’s algorithms had automatically ordered bottles of his former favorite soft drink — based on previous purchases — he might feel that the technology was undermining his willpower. Similarly, a smoker could feel annoyed, and possibly find it more difficult to quit, if she received ongoing promotions for e-cigarettes via app notifications or emails.
Yet instead of being part of the problem, AI could help consumers break with undesirable habits — and boost their feelings of autonomy and their loyalty in the process. Indeed, our research suggests that algorithms can be trained to identify people who are ready to make positive lifestyle changes but need a little help to actually do so. Consumers in this category exhibit telltale buying behaviors that researchers have dubbed “precommitment.”10 In one such behavior, people who fear they lack the willpower to resist a particular temptation create self-imposed constraints that make succumbing to it difficult or costly. For example, a precommitting customer will buy unhealthy snacks in smaller, more expensive quantities rather than in cost-efficient jumbo sizes in order to limit consumption. A potent example of this can be seen at stickK.com, a website that offers people trying to achieve difficult goals a commitment device for raising the stakes for themselves.
Companies that want to capitalize on precommitment signals can set up algorithms to identify such consumers, who can then be offered more formal and rigorous options designed to help them make lasting lifestyle changes. Companies could also nudge consumers, during onboarding, to declare their long-term goals and higher aspirations for their lives. Of course, the recommendations themselves need to be sensitive and thoughtful, and the language used should clearly convey that this is an optional feature that can be disabled or deleted at any time. As people then use the product or platform, AI could help identify junctures at which they may be receptive to trying options in line with their stated long-term goals.
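To make this idea concrete, here is a minimal, hypothetical sketch of how such a flagging rule might look. It is not drawn from the research cited above: the purchase record, the category labels, and the thresholds (VICE_CATEGORIES, PREMIUM_THRESHOLD, MIN_SIGNALS) are all illustrative assumptions, and a real system would layer in the consent, opt-in language, and sensitivity discussed above.

```python
# Hypothetical sketch: flag customers whose purchase histories show the
# "precommitment" pattern described above -- repeatedly buying vice-category
# items in small, premium-priced quantities even though cheaper bulk options exist.
# All field names, categories, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Purchase:
    customer_id: str
    category: str           # e.g., "snacks" or "soda"
    unit_price: float       # per-unit price the customer actually paid
    bulk_unit_price: float  # cheapest available per-unit price for the same item

VICE_CATEGORIES = {"snacks", "soda"}  # illustrative placeholder
PREMIUM_THRESHOLD = 1.25              # paid at least 25% over the bulk unit price
MIN_SIGNALS = 3                       # purchases required before flagging a customer

def count_precommitment_signals(purchases: list[Purchase]) -> dict[str, int]:
    """Count purchases in which a customer paid a premium for a small quantity
    of a vice-category item -- the self-rationing behavior described above."""
    counts: dict[str, int] = {}
    for p in purchases:
        if p.category in VICE_CATEGORIES and p.unit_price >= PREMIUM_THRESHOLD * p.bulk_unit_price:
            counts[p.customer_id] = counts.get(p.customer_id, 0) + 1
    return counts

def flag_precommitters(purchases: list[Purchase]) -> set[str]:
    """Return customers who repeatedly show the signal and might therefore welcome
    an optional, clearly disableable offer of goal-support features."""
    return {cid for cid, n in count_precommitment_signals(purchases).items() if n >= MIN_SIGNALS}
```

Used this way, the output would only determine who is invited to opt in; the invitation itself, as noted above, should remain easy to decline or disable.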
Research suggests that when people reflect on their behavior, they become more aligned with their deep-seated preferences. One study of American voters found that although they believe they should weigh the content of a policy more heavily than the political party championing it, they still tend to vote along party lines.11 Yet after being asked to reflect on this disparity, participants were more likely to vote based on policies, which were better aligned with their actual underlying preferences. Merely reflecting on their behavior made the voters more attuned to how they were making choices. A typical algorithm trained only on past behavior would not detect this possibility. Giving consumers more opportunities to declare their aspirations, or so-called metapreferences, can therefore make AI beneficial for people’s well-being, in addition to serving as a tool that clarifies and simplifies choices, by allowing consumers to realize and express their freedom to do otherwise.
Safeguard Your Customers’ Privacy
Privacy and autonomy are distinct but closely related concepts. A lack of confidentiality restricts people’s ability to do things that they might not want others to know about; in order to feel secure in their autonomy, people must have a certain degree of privacy. The privacy settings supplied by Facebook, Google, and other internet-enabled services provide a modicum of protection, assuming one knows how to navigate them.
More generally, the internet blurs the line between a person’s public and private existence. As we go about our business on our digital devices, we automatically transmit revealing information about ourselves — through Google searches, Tinder matches, YouTube views, Siri queries, Alexa requests, and just about everything else we do online, which can be intercepted and used to our disadvantage. That we nonetheless continue to confide so blithely in our devices is a testament to the seductive skills of today’s engineers and designers, as well as to the human tendency to overlook increasingly glaring problems — that is, until a tipping point is reached.
We are now past that tipping point, as awareness grows that the threats to individual privacy online don’t come merely from cybercriminals. Last year, The New York Times reported that at least 75 companies were collecting extremely precise geolocation data from up to 200 million mobile devices in the United States, with individual users being tracked up to 14,000 times each day.12 Researchers at Northeastern University found that more than 9,000 Android apps were authorized to access the microphone and camera on the devices where they were installed; moreover, several apps were observed liberally helping themselves to personal activity data and passing it to third parties.13 YouTube was recently slapped with a $170 million fine by the U.S. Federal Trade Commission for gathering data on children under age 13, data that was then used to fine-tune the delivery of online ads.14
When algorithms make incorrect inferences about people, the result can be silly or annoying, but it can also be heartbreaking. In 2018, a video editor at The Washington Post illustrated this after her child was stillborn and she continued to receive online ads for pregnancy products.15 Thanks to social media, people are increasingly empowered to make these kinds of glaring privacy violations public, causing significant blowback for companies whose AI solutions wind up inflicting emotional harm on consumers.
As digital privacy concerns reach a tipping point, the balance of power is shifting away from tech companies and in favor of users. (See, for example, the European Union’s General Data Protection Regulation, or GDPR.) Instead of resisting this shift, we suggest that companies embrace it and start actively pursuing ways to give users greater control over how their data gets used. Apple CEO Tim Cook has already tried to differentiate the brand based on how it handles customer privacy.16 If executed correctly, more equitable and transparent privacy policies can benefit everyone involved.
Safeguarding both privacy and autonomy requires rethinking not only how sensitive information is obtained but also how it is ultimately used. A few notorious incidents notwithstanding, algorithms are generally getting better at anticipating what users may want and need based on information such as internet searches and social media likes. But companies should be wary of acting too aggressively on these signals and should preserve users’ sense that they can formulate their preferences in private.
An ongoing research project suggests that threats to autonomy loom larger when consumers feel they are being watched as they make up their minds than when their already decided-upon preferences are revealed to others. When it comes to recommendation systems, for example, messages such as “based on your previous purchases” can thus be less threatening to consumers’ sense of autonomy than ones such as “based on your previous browsing history.” Our findings suggest that tracking completed purchases (which represent openly declared desires for products or services) may strike consumers as fair game in a way that potentially ambiguous online monitoring does not.
Appeasing the Underground Man
Let us be crystal clear: We are not suggesting that algorithm-based innovation should stop or slow down. Rather, we firmly believe that this technology is on track to improve our lives and economies in far-reaching ways not yet imagined.
However, companies can no longer afford to ignore the fear and suspicion with which the public increasingly views this technology. Unless it addresses the threat that technology poses to the human need for autonomy, the tech industry will likely provoke continued consumer outrage, probably followed by regulatory action. A growing number of public officials, for example, are now calling for significantly more transparency into algorithms, which would be a far stronger drag on innovation than the moderate autonomy-related recommendations we outline in this article.
Radical transformation isn’t necessary, but merely putting a minor lever or two in customers’ hands won’t be enough either. Using Dostoevsky’s terminology, truly appeasing the Underground Man means avoiding actions that may threaten people’s need for autonomy, while incorporating features and functionality designed to continually renew users’ sense of themselves as people rather than keys on a piano.
References
1. J. Bughin, J. Seong, J. Manyika, et al., “Notes From the AI Frontier: Modeling the Impact of AI on the World Economy,” McKinsey Global Institute, September 2018.
2. C. Tucker, “Privacy, Algorithms, and Artificial Intelligence,” in “The Economics of Artificial Intelligence: An Agenda,” eds. A. Agrawal, J. Gans, and A. Goldfarb (Chicago: University of Chicago Press, 2019): 423-437.
3. Q. André, Z. Carmon, K. Wertenbroch, et al., “Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data,” Consumer Needs and Solutions 5, no. 1-2 (March 2018): 28-37.
4. C. Longoni, A. Bonezzi, and C.K. Morewedge, “Resistance to Medical Artificial Intelligence,” Journal of Consumer Research 46, no. 4 (December 2019): 629-650.
5. C. Welch, “Google Just Gave a Stunning Demo of Assistant Making an Actual Phone Call,” The Verge, May 8, 2018, www.theverge.com.
6. S. Lim, S.M.J. van Osselaer, C. Fuchs, et al., “Made for You: The Effect of Consumer Identification on Consumer Preference,” in “NA - Advances in Consumer Research Volume 44,” eds. P. Moreau and S. Puntoni (Duluth, Minnesota: Association for Consumer Research, 2016): 118-122.
7. H. Abdulhalim, P. Kireyev, G. Tomaino, et al., “Explaining Algorithmic Decisions to Customers,” research in progress.
8. G. Mortimer, F. Mathmann, and L. Grimmer, “How the ‘Ikea Effect’ Subtly Influences How You Spend,” BBC, April 22, 2019, www.bbc.com.
9. R. Brown, “Social Psychology” (New York: The Free Press, 1965).
10. K. Wertenbroch, “Consumption Self-Control by Rational Purchase Quantities of Vice and Virtue,” Marketing Science 17, no. 4 (November 1998): 317-337.
11. L. Van Boven, P.J. Ehret, and D.K. Sherman, “Psychological Barriers to Bipartisan Public Support for Climate Policy,” Perspectives on Psychological Science 13, no. 4 (July 2018): 492-507.
12. J. Valentino-DeVries, N. Singer, M.H. Keller, et al., “Your Apps Know Where You Were Last Night, and They’re Not Keeping It Secret,” New York Times, Dec. 10, 2018, www.nytimes.com.
13. E. Pan, J. Ren, M. Lindorfer, et al., “Panoptispy: Characterizing Audio and Video Exfiltration From Android Applications,” Proceedings on Privacy Enhancing Technologies 2018, no. 4 (October 2018): 33-50.
14. “YouTube Fined $170 Million in US Over Children’s Privacy Violation,” BBC, Sept. 4, 2019, www.bbc.com.
15. G. Brockell, “Dear Tech Companies, I Don’t Want to See Pregnancy Ads After My Child Was Stillborn,” Washington Post, Dec. 12, 2018, www.washingtonpost.com.
16. T. Romm, “Apple’s Tim Cook Blasts Silicon Valley Over Privacy Issues,” Washington Post, Oct. 24, 2018, www.washingtonpost.com.