Can We Solve AI’s ‘Trust Problem’?

To address users’ wariness, makers of AI applications should stop overpromising, become more transparent, and consider third-party certification.


The sad fact is that many people don’t trust decisions, answers, or recommendations from artificial intelligence. In one survey of U.S. consumers, respondents were presented with a list of popular AI services (for example, home assistants, financial planning, medical diagnosis, and hiring), and 41.5% said they didn’t trust any of them. Only 9% said they trusted AI with their finances, and only 4% trusted AI in the employee hiring process.1 In another survey, 2,000 U.S. consumers were asked, “When you think about AI, which feelings best describe your emotions?” “Interested” was the most common response (45%), but it was closely followed by “concerned” (40.5%), “skeptical” (40.1%), “unsure” (39.1%), and “suspicious” (29.8%).2

What’s the problem here? And can it be overcome? I believe several issues need to be addressed if AI is to be trusted in businesses and in society.

Rein in the Promises

The IT research firm Gartner suggests that technologies like cognitive computing, machine learning, deep learning, and cognitive expert advisers are at the peak of their hype cycle and are headed toward the “trough of disillusionment.”3

Vendors may be largely to blame for this issue. Consider IBM’s very large Watson advertising budget and extravagant claims about Watson’s abilities. One prominent AI researcher, Oren Etzioni, has called Watson “the Donald Trump of the AI industry — [making] outlandish claims that aren’t backed by credible data.”4

Tesla’s Elon Musk is another frequent contributor to AI hype, particularly regarding the ability of Tesla cars to drive autonomously. The company uses the term Autopilot to describe its driver-assistance capabilities, which suggests full autonomy and has generated controversy.5 Tesla cars have some remarkable semiautonomous driving capabilities and are impressive vehicles in many other respects, but they are clearly not yet fully autonomous.

Fortunately, not all companies are overselling their AI capabilities. Take, for instance, the Nordic bank SEB and its use of Aida, an intelligent agent that’s derived from Ipsoft’s Amelia. SEB has consistently been conservative in its portrayals of what Aida can do, first launching it for internal use on the IT help desk (where it is still used and is popular with employees), and then making it available to customers on an experimental basis. A press release from SEB captures the conservative tone, at least relative to how many companies describe their AI systems:

At present Aida has two main duties: She has been employed as a digital employee in the bank’s internal IT Service Desk, where she speaks her original language of English, and she is a trainee at the Telephone Bank, where she is learning to chat with customers in Swedish, on seb.se.

“We try to think of Aida as a person,” says Erica Lundin, head of the Aida Center of Excellence. “So we are building up her CV to show what she has accomplished and is competent in, and going forward we will work on her PDD [personal development dialogue] to develop her areas of competence.”6

Whether the use of cognitive technologies is internal or external, it’s best to underpromise and overdeliver. Introduce new capabilities as beta offerings, and communicate that the goal is to learn how the technology performs in use. And don’t eliminate alternative (usually human) ways of solving employees’ or customers’ problems. Over time, as the technology matures and the AI solution improves its capabilities, both the machine and the communications describing its functions can become more confident.

Provide Full Disclosure

Another way to increase trust in AI is to disclose as much as possible about the system and how it will be used. Disclosure might include, for example, notifying customers that they are working with an “intelligent agent computer system” rather than a human representative. Or, if the solution is a hybrid or augmented one that combines human and computerized advice, disclosure should address who does what.

Such disclosures should be crafted not by lawyers — who might wrap them up in legalese — but perhaps by marketers. The idea to get across is that this is an opportunity to try something new, that the help is available 24/7, and that it may well address the customer’s issue. But companies need to be careful with AI in marketing. In a U.S. survey of 2,000 consumers, 87% of respondents said they would support a rule that would prohibit AI systems such as bots, chatbots, and virtual assistants from posing as humans. More broadly, 88% of the respondents said that AI in marketing should be regulated by an ethical code of conduct. On the more positive side, two-thirds of the surveyed Americans were open to businesses and brands using AI to communicate with them and serve them. But, as the ad agency that conducted the survey notes, “The prerequisite appears to be transparency and disclosure.”7
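As a purely illustrative sketch, the opening turn of a customer chat could handle both points at once: it discloses up front that the agent is a computer system, notes the 24/7 availability, and keeps a human alternative one word away. The function and wording below are hypothetical, not drawn from any vendor’s product.

    # Hypothetical sketch of a first chat turn that discloses the agent is an AI,
    # frames the interaction as something new, and keeps a human option available.
    def opening_message(customer_name: str, human_handoff_available: bool = True) -> str:
        """Build an opening chat message that discloses the assistant is not human."""
        lines = [
            f"Hi {customer_name}, I'm an intelligent agent (a computer system, not a person).",
            "I'm available 24/7 and can resolve many common requests on the spot.",
        ]
        if human_handoff_available:
            lines.append("If you'd prefer a human representative, just type 'agent' at any time.")
        return " ".join(lines)

    print(opening_message("Alex"))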

Certify Models and Algorithms

As our society and economy come to rely more heavily on AI and machine learning, there will most likely need to be some form of external certification if we are to trust the underlying models and algorithms. Just as the FDA certifies drug efficacy, auditors certify financial processes, and Underwriters Laboratories certifies the safety of products, there will need to be trusted organizations, in government or the private sector, that endorse the reliability, replicability, and accuracy of AI algorithms.
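To make that concrete, here is a minimal sketch of the kinds of automated checks a certifying body might run against a model: accuracy on an agreed benchmark and replicability of outputs across repeated runs. The model interface, data, and threshold below are assumptions for illustration, not an actual certification protocol.

    # Hypothetical certification checks, assuming a model object with a predict()
    # method and a labeled benchmark dataset agreed upon by the certifier.
    def check_accuracy(model, features, labels, threshold=0.95):
        """The model's accuracy on the benchmark must meet the certification threshold."""
        predictions = [model.predict(x) for x in features]
        accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
        return accuracy >= threshold, accuracy

    def check_replicability(model, features, runs=3):
        """The same inputs must produce the same outputs on every run."""
        baseline = [model.predict(x) for x in features]
        return all([model.predict(x) for x in features] == baseline for _ in range(runs))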

Adam Schneider, a consultant who works in financial services, was the first to point out the possibilities of AI certification to me. He provides several examples of settings in which certification should perhaps be required:8

  • AI driving cars: Do we need an automated vehicle review board to understand car failures, enable comparison of different AI approaches across manufacturers, and monitor progress?
  • AI diagnosing patients: Do we need a protocol where human doctors verify enough of the diagnoses personally, using statistically valid techniques, before there is general reliance?
  • AI “robo” investing: One firm advertised “We have AI” that has been “extensively tested.” Is that level of disclosure good enough? What does “We have AI” mean? Should standards be defined before AI can be advertised to unsophisticated investors?

It’s early for such certification to emerge, but I have heard of one example that is consistent with Schneider’s thinking.

I interviewed a Deloitte consultant, Christopher Stevenson, about his work with “robo-advisers” in investing and wealth management settings.9 He said that the firm was already providing certification and advisory services to financial institutions with robo-advice capabilities. These services include establishing controls, periodic effectiveness testing, evaluating client communications and disclosures, assessing algorithms, and evaluating compliance with trading rules.
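A periodic effectiveness test of this kind can be as simple as rerunning the advice engine’s recommendations against the firm’s stated constraints. The sketch below is my own illustration under assumed rules (asset-class weights that must sum to one and stay within hypothetical risk and concentration limits), not Deloitte’s actual methodology.

    # Illustrative compliance check on a robo-adviser's recommended portfolio.
    # The asset classes, limits, and sample weights are assumptions for the example.
    def check_portfolio(weights: dict, max_equity: float = 0.80, max_position: float = 0.40) -> list:
        """Return a list of rule violations; an empty list means the recommendation passes."""
        violations = []
        total = sum(weights.values())
        if abs(total - 1.0) > 1e-6:
            violations.append(f"weights sum to {total:.2f}, not 1.00")
        if weights.get("equities", 0.0) > max_equity:
            violations.append("equity allocation exceeds the assumed risk limit")
        if any(w > max_position for w in weights.values()):
            violations.append("a single position exceeds the assumed concentration limit")
        return violations

    print(check_portfolio({"equities": 0.85, "bonds": 0.10, "cash": 0.05}))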

I don’t know whether such services will catch on in this market domain and others. But given how important such testing is to the effective use of AI, I suspect certification will eventually gain traction. It may take a highly publicized failure, however, to make it a legal requirement.


References

1. K. Krogue, “Artificial Intelligence Is Here to Stay, but Consumer Trust Is a Must for AI in Business,” Forbes, Sept. 11, 2017.

2. SYZYGY, “Sex, Lies, and AI,” SYZYGY Digital Insight Report 2017 (U.S. version).

3. K. Panetta, “Top Trends in the Gartner Hype Cycle for Emerging Technologies, 2017,” press release, Aug. 15, 2017.

4. J. Brown, “Why Everyone Is Hating on IBM Watson, Including the People Who Helped Make It,” Gizmodo, Aug. 10, 2017.

5. R. Mitchell, “Controversy Over Tesla ‘Autopilot’ Name Keeps Growing,” Los Angeles Times, July 21, 2016.

6. E. Lundin, interview with the author, February 2018; and SEB, “Burning Passion to Use AI for World-Class Service,” press release, Aug. 21, 2017, https://sebgroup.com.

7. SYZYGY, “Sex, Lies, and AI.”

8. A. Schneider, personal communication with the author, Jan. 22, 2018.

9. Deloitte, “Robo-Advising Platforms Carry New Risks in Asset Management,” Perspectives (n.d.).
