Why Your Board Needs a Plan for AI Oversight
AI demands increased board fluency with technology and attention to its risks as well as its rewards.
We can safely defer the discussion about whether artificial intelligence will eventually take over board functions. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that’s relevant whether organizations are developing AI systems or buying AI-powered software. With the technology in increasingly widespread use, it’s time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization’s overall mission and risk management.
According to McKinsey’s 2019 global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment.
Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”1
Why is this so urgent? Because AI’s potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. In June 2020, IBM stopped selling the technology altogether. That same month, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to allow time for legislators to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems in use today may be vulnerable to problems that result from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them.
Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board’s view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing and supporting management in better managing AI risks.
Increasing the board’s fluency with and visibility into these issues is just good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environmental, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that a T for technology needs to be added to the board’s portfolio — that civil liberties, workforce, and social justice issues warrant board focus on the impact of these new capabilities.2
What Boards Owe the Organizations They Serve
Directors’ duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in oversight over strategy, risk, and compliance. Delaware courts recently have underscored the role of boards in understanding systemic and knowable risks, instituting effective reporting protocols, and demanding adequate compliance programs to avoid liability.3
Boards assessing the quality and impact of AI and what sort of oversight is required should understand the following:
- AI is more than an issue for the technology team. Its impact resonates across the organization and implicates those managing legal, marketing, and human resources functions, among others.
- AI is not a single thing. It is a system comprising the technology itself, the human teams who manage and interact with it, and the data upon which it runs.
- AI systems need the accountability of C-level strategy and oversight. They are highly complex and contextual and cannot be trustworthy without integrated, strategic guidance and management.
- AI is not static. It is designed to adapt quickly and thus requires continuous oversight.
The AI systems most in use by business today are efficient and powerful prediction engines. They generate these predictions based on data sets that are selected by engineers, who use them to train and feed algorithms that are, in turn, optimized on goals articulated — most often — by those developers. Those individuals succeed when they build technology that works, on time and within budget. Today, the definition of effective design for AI may not necessarily include guardrails for its responsible use, and engineering groups typically aren’t resourced to take on those questions or to determine whether AI systems operate consistently with the law or corporate strategies and objectives.
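To make that point concrete, consider a minimal, hypothetical sketch (not drawn from this article or any specific vendor) of a résumé-screening model. The data, variable names, and bias mechanism are all invented for illustration; the point is that a model trained only to predict historical decisions accurately will reproduce whatever bias those decisions contain, and nothing in the engineering objective flags that as a failure.

```python
# Hypothetical illustration: a screening model optimized purely for accuracy
# on historical hiring decisions inherits the bias baked into those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "historical" data: skill drives true fit, but past reviewers
# favored group 0, so the recorded label is biased against group 1.
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
skill = rng.normal(size=n)                 # legitimate qualification signal
hired = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# The objective handed to the engineers: predict "hired" as accurately as possible.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Audit: at identical skill, predicted hiring probability diverges by group —
# the model is "effective" by its stated goal, yet discriminatory in practice.
probe = np.column_stack([np.zeros(2), [0, 1]])   # same skill, different group
print(model.predict_proba(probe)[:, 1])
```

The model meets the only measure its developers were asked to meet, which is precisely why board-level policy and audit standards, rather than engineering effort alone, determine whether such a system is acceptable.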
The choices made by AI developers — or by an HR manager considering a third-party résumé-screening algorithm, or by a marketing manager looking at an AI-driven dynamic pricing system — are significant. Some of these choices may be innocuous, but others are not, such as those that result in hard-to-detect errors or in bias that suppresses diversity in hiring or charges customers different prices based on gender. Board oversight must include requirements for policies at both the corporate level and the use-case level that delineate what AI systems will and will not be used for. It must also set standards by which their operation, safety, and robustness can be assessed. Those policies need to be backed by practical processes, a strong culture, and compliance structures.
Present legal and regulatory risks. Claims that AI is not yet regulated are not entirely true. Current legal and regulatory risks should already be commanding a board’s attention.
Plenty of laws and guidelines already apply. The Federal Trade Commission’s recently published guidelines acknowledge that some automated decision-making is already regulated under existing statutes such as the Fair Credit Reporting Act, enacted in 1970, and the Equal Credit Opportunity Act, enacted in 1974. Moreover, in 2017, the FTC and the Department of Justice jointly submitted a statement to the Organisation for Economic Co-operation and Development analyzing the application of antitrust laws to algorithms and concluding that existing rules against collusion are sufficient to prosecute abuses.
Companies may be held accountable for whether their uses of algorithm-driven systems comply with well-established anti-discrimination rules. The U.S. Department of Housing and Urban Development recently charged Facebook with violations of the federal Fair Housing Act for its use of algorithms to determine housing-related ad-targeting strategies based on protected characteristics such as race, national origin, religion, familial status, sex, and disabilities. California courts have held that the Unruh Civil Rights Act of 1959 applies to online businesses’ discriminatory practices. For instance, in Candelore v. Tinder, Tinder’s pricing model was found to have violated the Unruh Act because it discriminated against customers age 30 and over by charging them more than double the amount that subscribers under age 30 paid to access Tinder Plus.
The legal landscape also is adapting to the increasing sophistication of AI and its applications in a wide array of industries beyond the financial sector. For instance, the FTC is calling for the “transparent, explainable, fair, and empirically sound” use of AI tools and demanding accountability and standards. The Department of Justice’s Criminal Division’s updated guidance underscores that an adequate corporate compliance program is a factor in sentencing guidelines.
We can further expect that regulatory gaps will require new rules such as Illinois’s new Artificial Intelligence Video Interview Act, which governs employers’ use of AI to analyze video-based job interviews. Meanwhile, the Washington State Legislature is advancing the proposed Washington Privacy Act, which would directly address certain commercial uses of facial recognition technology. And the U.S. Congress is considering multiple proposals that would touch on different aspects of algorithmic justice.
Governments around the world are advancing voluntary guidelines for trustworthy or responsible AI. The European Union published its ethics guidelines in April 2019. In the U.S., the Department of Defense shared its five-point AI ethics guide this year, and the National Security Commission on Artificial Intelligence released its responsible AI guidelines in July. We can expect many more such efforts to set standards for what is acceptable use of AI. Some will be aspirational, and some prescriptive; some more general, and some targeted to specific technologies, industries, or use cases.
From the board’s perspective, compliance with existing rules is an obvious point, but it is also important to keep up with evolving community standards regarding the appropriate duty of care as these technologies become more prevalent and better understood. Further, even after rules are in force, applying them in particular business settings to solve specific business problems can be difficult and intricate. Boards need to confirm that management is sufficiently focused and resourced to manage compliance well, along with AI’s broader strategic trade-offs and risks.
Risks to brand and reputation. The issue of brand integrity — clearly a current board concern — is the most likely driver of AI accountability in the short term. Recent research into issues faced by individuals charged with advancing responsible AI within companies found that the “most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo.” Well before new laws and regulations are in effect, company stakeholders such as customers, employees, and the public are forming opinions about how an organization uses AI. As these technologies penetrate further into business and the home, their impact will increasingly define a brand’s reputation for trust, quality, and mission.
The role of AI in exacerbating racial, gender, and cultural inequities is inescapable. Addressing these issues within the technology is necessary, but it is not sufficient. Without question, we can move forward only with genuine commitments to diversity and inclusion at all levels of technology development and technology consumption.
Public opinion favors vigilance about how these technologies will affect privacy and civil liberties: The University of Oxford’s Centre for the Governance of AI found that 8 in 10 Americans polled “agree that AI and/or robots should be carefully managed.” Consumers (individuals and businesses) arguably have been slower to consider questions about how to distinguish high-quality tools from lower-quality tools, how to confidently procure technologies for business or home use, or what safety features or assumptions (beyond data usage) they want, trust, or will accept. But this is coming.
Business continuity concerns (again). Boards and executives are already keenly aware that technology-dependent enterprises are vulnerable to disruption when systems fail or go wrong, and AI raises new board-worthy considerations on this score. First, many AI systems rely on numerous (and unknown) third-party technologies, which might threaten reliability if any element is faulty, orphaned, or inadequately supported. Second, AI carries the potential of new kinds of cyberthreats, requiring new levels of coordination within any enterprise. And bear in mind that many AI developers will tell you that they don’t really know what an AI system will do until it does it — and that AI that “goes bad,” or cannot be trusted, will need remediation and may have to be pulled out of production or off the market.
What Can Boards Do?
Regardless of how a board decides to approach AI oversight, it will play a critical role in considering the impact of the AI technologies that a business chooses to use. Before specific laws are in effect, and even well after they are written, businesses will be making important decisions about how to use these tools, how they will impact their workforces, and when to rely upon them in lieu of human judgment.
The hardest questions a board will face about proposed AI applications are likely to be “Should we use AI in this way?” and “What is our duty to understand how that function is consistent with all of our other beliefs, missions, and strategic objectives?” Boards must decide where they want management to draw the line: for example, to identify and reject an AI-generated recommendation that is illegal (easier) or at odds with organizational values (harder).
To follow the NACD’s recommendation that they “must engage more proactively, deeply, and frequently on entirely new and fast-changing drivers of strategy and risk,” boards might begin by looking at their composition. As one NACD commissioner observed in its 2019 report, “Many directors don’t feel comfortable talking about emerging technologies, cybersecurity, and other complex topics. … As a result, they tend to defer to others, which can become an abdication of their responsibility to be active board members.” And when NACD polled directors on digital governance in 2019, 64% of respondents agreed that the next member recruited to their board should have strong technology expertise.
With capable members in place, boards should do the following in order to provide adequate oversight of AI efforts:
- Learn where in the organization AI and other emerging technologies are being used or are planned for use, and why.
- Set a regular cadence for management to report on policies and processes for governing these technologies specifically, and for setting standards for AI procurement and deployment, training, compliance, and oversight. The World Economic Forum has created a good AI tool kit for boards to start this work.
- Encourage the appointment of a C-level executive to be responsible for this work across company functions.
- Encourage adequate resourcing and training of the oversight function.
It’s not too soon for boards to begin this work; even for companies with little investment in AI development, the technology will find its way into the organization through AI-infused tools and services. The legal, strategic, and brand risks of AI are sufficiently grave that boards need facility with them and a process by which they can work with management to contain the risks while reaping the rewards.
References
1. NACD Blue Ribbon Commission, “Fit for the Future: An Urgent Imperative for Board Leadership” (Washington, D.C.: NACD, 2019).
2. A. Bonime-Blanc, “Gloom to Boom: How Leaders Transform Risk Into Resilience and Value,” 1st ed. (New York: Routledge, 2019).
3. Whereas the Caremark line of cases has long held that “only a sustained or systemic failure of the board to exercise oversight — such as an utter failure to attempt to assure a reasonable information and reporting system exists — will establish the lack of good faith that is a necessary condition of liability,” recent case law has recognized a broader set of conditions under which directors might encounter liability. For example, see Wells Fargo & Co. Shareholder Derivative Litig., C.A. No. 16-cv-05541-JST (N.D. Cal. Oct. 4, 2017); Marchand v. Barnhill, 212 A.3d 805 (Del. June 18, 2019); Clovis Oncology Inc. Derivative Litig., C.A. No. 2017-0222-JRS (Del. Ch. Oct. 1, 2019); and Hughes v. Hu, C.A. No. 2019-0112-JTL (Del. Ch. April 27, 2020).