The Human Factor in AI-Based Decision-Making

Facing identical AI inputs, individuals make entirely different choices based on their own decision-making styles.


AI now has a firm footing in organizations’ strategic decision-making processes. Five years ago, less than 10% of large companies had adopted machine learning or other forms of AI, but today 80% of them make use of the technology.1 Whether it is Amazon integrating algorithms into its recruiting processes or Walmart using AI for decisions about product lines, such examples show that the use of AI now transcends mere process automation and that AI is increasingly being used to augment decision-making processes at all levels, including top management.2

In the boardroom, companies can use the power of AI to analyze information, recognize complex patterns, and even get advice on strategic issues. This predictive technology can help executives handle the increasing complexity of strategic choices by offering new perspectives and insights for consideration, which can help organizations gain competitive advantage.3

In AI-augmented decision processes, where algorithms provide suggestions and information, executives still have the final say. Salesforce, for example, uses its own AI program, called Einstein, to help leaders make important decisions in executive-level meetings. According to Salesforce’s visionary founder and CEO, Marc Benioff, the company’s decision-making processes have changed entirely since AI was introduced. Einstein’s guidance, whether on the performance of different regions or products or on new business opportunities, has helped to significantly reduce bias in meetings and decrease discussions driven by politics or personal agendas among members of the top management team.4

Our research reveals that this human filter makes all the difference in organizations’ AI-based decisions. Data analysis shows that there is no single, universal human response to AI. Quite the opposite: One of our most surprising findings is that individuals make entirely different choices based on identical AI inputs.

Moreover, these differences in AI-based decision-making have a direct financial effect on organizations. Depending on their particular decision-making style, some executives invest up to 18% more in important strategic initiatives based on identical AI advice. By understanding and using executives’ individual decision-making styles, then, organizations can better optimize the use of AI in their strategic initiatives and overcome flaws in human judgment.

How Executives Interact With AI

Companies today can automate entire decision-making processes, with AI replacing human involvement even in the final choice. Swedish private equity firm EQT Partners, for example, uses AI to scan investment proposals; 10% of its holdings have even been completely sourced by AI. These forms of AI integration are growing across a broad range of applications, including loan approval at Ant Group and algorithmic pricing at Uber.

Most strategic AI-based decisions in organizations, however, are hybrid forms that heavily rely on human judgment, requiring executive interaction with the AI-based system. In fact, 87% of managers believe that such hybrid approaches will emerge as the dominant form of human-machine collaboration in the future.5

How does this human element shape the decision outcome? Research has shown that executives differ substantially in making decisions, depending on their individual styles — the patterns they prefer and typically follow when confronted with a choice. Such individual differences are based on information processing and self-regulation, as well as the perceived urgency of the decision and their cognitive approach to making choices.6 Decision-making styles tend to induce an individual’s consistent response patterns across decision tasks and situations.

There are four main styles of decision-making: Rational decision makers are structured and logical in their assessment of strategic challenges. In contrast, intuitive-spontaneous decision makers rely on their emotion and tend to make spur-of-the-moment decisions. Dependent executives seek assistance and support from others when making decisions, while avoidant leaders postpone making decisions.7

Our research shows that these tendencies shape executives’ decisions in unexpected ways when they interact with AI-based recommendations. In a study with 140 U.S. senior executives, we confronted each person with an identical strategic decision: whether to invest in a new technology that would enable them to pursue potential new business opportunities. Participants were told that an AI-based system tasked with evaluating business opportunities had recommended investing in the technology. Finally, we asked the executives how likely they would be to invest in the technology, and if they chose to do so, how much money they would be willing to commit. We also asked them about their preferred AI input in the process.

Our results illustrate that executives differ significantly in how much input they require from AI-based systems and how likely they are to invest based on the system’s recommendation. (See “The Human Factor in AI-Based Decisions.”) At the same time, senior managers require different levels of autonomy when making choices. Rational types seem reluctant to lose autonomy in the process, while avoidant executives who typically postpone decisions are happy to delegate decision-making responsibility to AI. Others both embrace the AI recommendations and demonstrate confidence in making strategic choices without the involvement of algorithms.

These results suggest that executives using AI to make strategic decisions fall into three archetypes based on their individual decision-making style.

Skeptics do not follow the AI-based recommendations, preferring to control the process themselves. These managers do not want to make strategic decisions based on the analysis performed by what they perceive as a black box that they do not fully understand. Skeptics are very analytical themselves and need to comprehend details before making a commitment in the decision process. When using AI, skeptics can fall prey to an illusion of control, which leads them to be overly confident in their own judgment and to underestimate the AI’s.

Interactors balance their own perception and the algorithm’s advice. They are open to the use of AI but do not rely on it entirely. Interactors view AI input as one tool of many at their disposal when making important choices. When AI-based analyses are available, interactors are willing to trust and make decisions based on these recommendations.

Delegators largely transfer their decision-making authority to AI in order to reduce their perceived individual risk. For these executives, AI use significantly increases the speed of the strategic decision-making process and can break a potential gridlock. However, delegators may also misuse AI to avoid personal responsibility; they might rely on its recommendations as a personal insurance policy in case something goes wrong. This risk shift from the decision maker to the machine could induce unjustified risk taking for the company.
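One way to make the contrast between the archetypes concrete is to think of each as placing a different weight on the AI recommendation relative to the executive's own judgment. The sketch below is purely illustrative and not part of the study described above; the weights, archetype labels as dictionary keys, and dollar figures are hypothetical assumptions, not measured values.

```python
# Illustrative model: each archetype blends the AI-recommended investment
# with its own independent estimate, using a different weight on AI input.
# All weights and dollar figures are hypothetical, not taken from the study.

AI_WEIGHT = {
    "skeptic": 0.2,     # mostly trusts own analysis
    "interactor": 0.5,  # balances AI advice with own judgment
    "delegator": 0.9,   # largely defers to the algorithm
}

def blended_investment(archetype: str, ai_recommendation: float,
                       own_estimate: float) -> float:
    """Weighted average of the AI-suggested amount and the executive's own."""
    w = AI_WEIGHT[archetype]
    return w * ai_recommendation + (1 - w) * own_estimate

# Identical AI advice ($10M) and identical private estimate ($6M)
# still yield different commitments across the three archetypes:
for style in AI_WEIGHT:
    amount = blended_investment(style, 10.0, 6.0)
    print(f"{style:10s} invests ${amount:.1f}M")
```

Even with identical inputs, the three hypothetical weights produce different investment amounts, which mirrors the finding that decision-making style, not the AI recommendation alone, drives the final commitment.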

These different decision-making archetypes show that the quality of the AI recommendation itself is only half of the equation in assessing the quality of AI-based decision-making in organizations. How executives make sense of and act on this advice is equally important. How can companies optimize the interplay between AI and human judgment?

Successful AI Integration in Decision-Making

Organizations should follow three strategies when integrating AI into their decision-making processes.

Create awareness. Because individuals make entirely different choices based on identical AI input, executives should be aware of their personal tendencies (or biases) when interacting with AI. Companies should thus make it a priority to communicate the impact of human judgment, which remains a decisive factor when AI augments top management decisions. Such communication must include all executives who interact with AI-based systems, whether in hiring processes or in top management decisions. Most executives know about the significant impact cognitive biases can have on individual judgment. They should learn about and pay as much attention to the specific biases they have toward AI, depending on their individual decision-making styles. This awareness is the crucial foundation for a successful integration of AI into organizations’ decision-making processes.

Avoid risk shift and the illusion of control. We were struck by the degree to which some executives transferred the risk of decision-making to AI — and by the complete opposite, as other executives were unwilling to cede an inch of their control to technology. Both tendencies need to be managed by the decision makers themselves or their superiors observing the process. Because we are typically unaware of our own biased judgment in the moment we make a decision, active interventions are necessary when de-biasing decisions. So when an individual or team interacts with AI in crucial decisions, colleagues — an executive who is not directly involved in the process, or a knowledgeable adviser — can assist in highlighting potential problems. This intervention can interrupt the decision maker’s subconscious autopilot process and elevate the decision to a more conscious and unbiased choice.

To avoid risk shift, organizations must emphasize that the ultimate decision authority stays with the executive, even if AI is involved. On the other hand, the illusion of control can be mitigated by better explaining the potential benefits of AI, as well as what parameters and data the suggested course of action is based upon. Such new and counterintuitive information can help defuse the illusion of control and contribute to a more balanced and less cautious perception of AI.8

Embrace team-based decisions. Balancing out the predominant tendencies of the three decision-making archetypes in teams may help them to avoid choices that are overly risky or risk averse. Different perspectives and multiple options always improve human decision-making processes, whether or not AI is involved. Framing the AI as an additional source of input and analysis, akin to an additional team member rather than a superior, undisputable authority, can help successfully integrate AI-based recommendations into discussions.

Only 50% of companies have an AI strategy in place, and few of those look beyond AI’s technological development and implementation.9 To take advantage of AI’s full potential, companies need a human-centered approach that addresses the cognitive dimension of human-machine interactions beyond automation.

When AI is used to augment strategy development and decision-making processes, our psychological tendencies and habits determine how we act upon its recommendations. A targeted application of AI can de-bias discussions and reduce politics in top management meetings, as exemplified at Salesforce — but, based on our habitual decision-making styles, the use of algorithms can also lead us astray.

By acknowledging this potential human bias in AI-informed decision-making, leaders can accordingly design processes to champion AI in the boardroom. With the right balance of analytics and experience, AI-augmented decision processes can increase the quality of an organization’s most critical choices, thus driving tremendous value for companies in an increasingly complex world.

References

1. G. Bhaskar, P.R. Daugherty, H.J. Wilson, et al., “Taking a Systems Approach to Adopting AI,” Harvard Business Review, May 9, 2019, https://hbr.org.

2. S. Raisch and S. Krakowski, “Artificial Intelligence and Management: The Automation-Augmentation Paradox,” Academy of Management Review 46, no. 1 (January 2021): 192-210.

3. B. Libert, M. Beck, and M. Boncheck, “AI in the Boardroom: The Next Realm of Corporate Governance,” MIT Sloan Management Review, Oct. 19, 2017, https://dev03.mitsmr.io.

4. J. Bort, “How Salesforce CEO Marc Benioff Uses Artificial Intelligence to End Internal Politics at Meetings,” Business Insider, May 19, 2017, www.businessinsider.com.

5. S. Ransbotham, S. Khodabandeh, D. Kiron, et al., “Expanding AI’s Impact With Organizational Learning,” MIT Sloan Management Review, Oct. 19, 2020, https://dev03.mitsmr.io.

6. M. Geisler and C.M. Allwood, “Relating Decision-Making Styles to Social Orientation and Time Approach,” Journal of Behavioral Decision Making 31, no. 3 (July 2018): 415-429.

7. D.P. Spicer and E. Sadler-Smith, “An Examination of the General Decision Making Style Questionnaire in Two U.K. Samples,” Journal of Managerial Psychology 20, no. 2 (March 2005): 137-149.

8. S. Schulz-Hardt, M. Jochims, and D. Frey, “Productive Conflict in Group Decision Making: Genuine and Contrived Dissent as Strategies to Counteract Biased Information Seeking,” Organizational Behavior and Human Decision Processes 88, no. 2 (July 2002): 563-586.

9. S. Ransbotham, D. Kiron, P. Gerbert, et al., “Reshaping Business With Artificial Intelligence: Closing the Gap Between Ambition and Action,” PDF file (Cambridge, Massachusetts: MIT Sloan Management Review, 2017), https://dev03.mitsmr.io.

Reprint #: 63219
