Kay Firth-Butterfield was teaching AI, ethics, law, and international relations when a chance meeting on an airplane landed her a job as chief AI ethics officer. In 2017, Kay became head of AI and machine learning at the World Economic Forum, where her team develops tools and on-the-ground programs to improve AI understanding and governance across the globe.
In the final episode of the first season of the Me, Myself, and AI podcast, Kay joins cohosts Sam Ransbotham and Shervin Khodabandeh to discuss the democratization of AI, the values of good governance and ethics in technology, and the importance of having people understand the technology across their organizations — and society. She also weighs in on other themes our hosts have discussed this season, including education, collaboration, and innovation.
Subscribe to Me, Myself, and AI via Apple Podcasts, Spotify, or Google Podcasts.
Transcript
Sam Ransbotham: How can governance of artificial intelligence help organizations? The word governance can come with a lot of baggage and some negative connotations, but governance can enable organizations, too. The question is, how?
We’ll close out the season with a discussion with Kay Firth-Butterfield. She’s the head of artificial intelligence and machine learning for the executive committee of the World Economic Forum. With Kay, we’ll learn not only about her specific background in the legal profession, but she’ll also help us think about what we’ve learned overall this season.
Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I’m Sam Ransbotham, professor of information systems at Boston College. I’m also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.
Shervin Khodabandeh: And I’m Shervin Khodabandeh, senior partner with BCG, and I colead BCG’s AI practice in North America. Together, BCG and MIT SMR have been researching AI for four years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build, deploy, and scale AI capabilities and really transform the way organizations operate.
Sam Ransbotham: Hi, Kay, and welcome to the show.
Kay Firth-Butterfield: Thank you.
Sam Ransbotham: Of course. So, Kay, you have a kind of fascinating job or, actually, really, jobs — you’ve got so many things going on. So, for our listeners, can you introduce yourself and describe your current roles?
Kay Firth-Butterfield: Yes, certainly. I’m Kay Firth-Butterfield, and I am head of AI and machine learning at the World Economic Forum. So what that essentially means is that we work with multiple stakeholders — say, companies, academics, governments, international organizations, and civil society — to really think through the governance of AI. So when I say governance, I say it very much with a small “g.” We’re thinking about everything from norms through to regulation, but AI, we feel, is less susceptible to regulation.
Sam Ransbotham: Can you tell us how you got there? Give us a little bit of background about your career to date, and how did you end up in this role?
Kay Firth-Butterfield: I am, by background, a human rights lawyer. I am a barrister — that’s the type of trial lawyer that wears the wig and gown. And I got to a point in my career where I was being considered for a judicial appointment. In the U.K., they kindly sort of try out whether you want to be a judge and whether they think you’re any good at it. I don’t know what their view was, but my view was that it wasn’t the culmination of a career in the law that I really wanted. I had been very interested in the impact of technology on humans and their human rights, and so it gave me this wonderful opportunity to rethink where my career would go. So I was fortunate to be able to come to Austin [Texas] and teach AI, law, [and] international relations [and] to pursue my own studies around law and AI and international relations and AI, and the geopolitical implications of this developing technology.
And then, purely by luck, [I] met a person on a plane from Heathrow to Austin — it’s 10 hours. He was the chair and CEO of an AI company who was thinking about AI ethics. And this was back in 2014, when hardly anybody apart from me and the dog and some other people were thinking about it. And so he asked me as we got off the plane if I would like to be his chief AI ethics officer. And so that’s really how I moved into AI, but obviously with the social justice [lens], with the ideas of what benefits AI can bring to society, and also [being] cognizant of what we might have to be worrying about. And so I have been vice chair of the IEEE’s initiative on ethically aligned design since 2015. I was part of the Asilomar [Conference on Beneficial AI] thinking about ethical principles for AI back again in 2015. And so my career ended up with me taking this job at the forum in 2017. I say “ended up,” but maybe not. Who knows?
Sam Ransbotham: Yeah. We won’t call it an end just yet. So, what does artificial intelligence mean?
Kay Firth-Butterfield: Well, part of the problem, and part of its complexity, is that AI means different things to different people. So AI means one thing to an engineer and another thing to a person who’s using it as a member of the public, through their phone.
Sam Ransbotham: So we’re shifting our definition as we go, and we’ll continue to as well.
Kay Firth-Butterfield: Yeah. There’s that old adage that it’s no longer artificial intelligence once it’s done.
Shervin Khodabandeh: And how much of that do you think is education, sort of stemming from [a] lack of understanding and lack of education, versus a technical or process complexity inherent in putting all that governance in place? I mean, I guess part of it is, you can’t really manage or govern that which you don’t really quite understand. Is that most of the battle? And once everybody understands it, because it’s common sense, then they begin to say, “Well, now how can we govern this? Like anything else we would govern, because now we understand it.”
Kay Firth-Butterfield: Yes. Well, I think that it’s organizational change, it’s education and training for employees, but it’s also thinking very carefully about product design, so that if you are actually developing [an] algorithmic product, what’s the path of that from the moment that you dream up the idea to the moment that you release it either to other businesses or to customers, and maybe even beyond that?
Sam Ransbotham: I couldn’t help but pick up on one of the things you said about governance being seen as negative. One of our studies a few years ago found that health care shared data more than other industries. That seems counterintuitive, but when we dug into it, what we found is, they knew what they could share. They had structure around it. And that structure enabled them to know what they could do and what they couldn’t do — whereas other places, when they talked about data sharing, they were [saying], “Well, [we’ll] have to check with our compliance department and see what we can do.” There’s much less checking when it’s explicit, and the more explicit we can be. … That’s the enabling factor of governance versus this sort of oppressive factor of governance.
Kay Firth-Butterfield: Yes. I think governance has got itself a bad name because of the assumption that regulation impedes innovation, and that’s not necessarily so. I think that at the moment, we’re exploring all these different soft governance ideas, largely because, to begin with, yes, we will probably see regulation out of Europe — the EU says you need regulation around things like facial recognition and AI in human resources, because they’re classified as high-risk cases. But a lot of uses are not necessarily high-risk cases; what they are are things that businesses want to use, but want to use wisely. So, what we have done as well is create a lot of tool kits, for example, and guidelines and workbooks that companies or governments can pick up and say, “Oh, yes, this can guide me through this process” of, for example, [the] procurement of artificial intelligence. Just to give you an example, we surveyed a number of board members on their understanding of artificial intelligence, and they didn’t really understand artificial intelligence terribly well.
And so what we did was develop an online tool for them to understand artificial intelligence, but also then to say, “OK, my company is going to be deploying artificial intelligence. What are my oversight responsibilities?” And [we created] long questionnaires — the things that you might want to ask your board if you’re on the audit committee or the risk committee, or you’re thinking about strategy. So it really digs into the way that boards should be thinking across the enterprise about the deployment of AI.
Sam Ransbotham: I’m guessing most people need that guidance.
Shervin Khodabandeh: Yeah, most people for sure need that guidance, and I think this is a very well-placed point you’re making. What we don’t want is to be so far behind in understanding, education, and governance of any technology that it becomes such a black box that there’s a huge activation energy for anybody to get there. We heard that also from Slawek Kierner at Humana; we heard that from Arti [Zeighami] at H&M. [It] was the importance of really big cross-organizational training, not just for the board and not just for a handful, but for almost everybody. I think we heard from Porsche that they actually did training for their entire technology organization: “This is AI. This is what it could do right. This is what it could do wrong. This is what you need to learn. And by the way, this is how it can give you all these new designs that you as an engineer or a designer can explore to design the next-generation model. And this is how it could be your friend.” But I think you’re pointing out that it’s time for us to really internalize all of these as not nice-to-haves but as critical, I would say, almost first steps, before getting too far ahead.
Kay Firth-Butterfield: Yes, absolutely. And in fact, there’s a company in Finland that requires everybody to learn something about AI, even at the very most basic level. And they have a course for their employees, which is important. Obviously, not everybody can master the math, but you don’t even have to go that far.
Sam Ransbotham: Or should. I can’t help but build off of your human rights background. One of the things that strikes me is there [are] incredible advances with artificial intelligence used by organizations, particularly large organizations, particularly well-funded large organizations. How do we, as individuals, stand a chance here? Do we each need our own individual AI working for us? How can we empower people to work in this perhaps lopsided arrangement?
Kay Firth-Butterfield: Yes, I think the imbalance of power is something that we have to address as individuals and as companies, some with more AI capabilities than others, as nonprofits, and also as a world. Because at the moment, the concentration of AI talent, skills, and jobs is very skewed around the world. And we really have to think globally about how AI is deployed on behalf of humans, and what makes us human, and where we want to be maybe in 15 or 20 years, when AI can do a lot of the things that we are doing currently. I think it’s systemic and structural conversations that we have to have in all those different layers as well.
Sam Ransbotham: Right. The systemic and structural issues are big because, I have to say, I don’t think most companies intend to start AI with an evil bent. I mean, they’re not cackling and rubbing their hands together and plotting. I think these things are more insidious and systemic than that. How do we address that?
Kay Firth-Butterfield: In my experience of working with a lot of companies, governments, etc., I would say you’re absolutely right. Companies want to go in doing the right thing. And what we need to be doing is making sure that we help them do the right thing. It’s very much that a lack of understanding of the technology is perhaps skewing how they use it. And so those are all areas that we have been trying to focus on at the forum, so that people who go into using AI with the right mindset actually come out with the right results. And, you know, your company is a little piece of society. The idea should be that everybody works together, because you’re actually going to end up with a better product. And I think, to your point, the better we enable our customers or the general public to understand AI, the less scary it will be.
I also fear that there are many companies that are being told to go out and get AI, and they actually don’t know what it is that they’re getting, or really what the benefit is going to be, or what the downsides might be. So having a board that is capable of asking the right questions is absolutely crucial. … We’re currently working on a similar tool kit for different types of C-suite officers so that they, too, can be empowered to understand more. But I always see the need for thinking carefully about AI as [both] top-down and bottom-up; that’s why I go back to that survey that you did, where … an understanding across the organization is actually so important. And I think you’re seeing some of the developments amongst the companies that have been dealing with this, like Microsoft — they went for an Aether [AI, Ethics, and Effects in Engineering and Research] Committee; they went for really thinking strategically about how they’re using AI. And so I think that we have the benefit of what they learned early on, which we can then begin to bring into the sector, from board to designer.
Sam Ransbotham: And the good part about that is that education component keeps it from just being ethics theater — kind of the thin veneer to put the stamp on it and check the box that, yes, we’ve done the ethics thing.
But I guess, what’s the role for business in trying to educate people to have a better human-machine collaboration? Obviously, we’ve heard a lot about the potential for AI to affect the workplace and job security, but people are already incredibly busy at work. What potential is there for AI to free us from some of these mundane things and lead to greater innovation? We talked with Gina Chung at DHL — she’s in the innovation department, and that’s where they’re focusing their AI efforts. Is this a pipe dream, or is there potential here?
Kay Firth-Butterfield: No, I think that it’s certainly not a pipe dream, and many companies and countries have innovation labs; UNICEF has an innovation lab, too. We were talking about children and AI. The potential for AI to free us from some of the things that we see as mundane, the potential for it to help us discover new drugs, to work on climate change … they’re all the reason that I stay working in this space. And you might say, “Well, you work on governance; doesn’t that mean that you just see AI as a bad thing?” And that’s not true.
Just as an example, at the moment we have problems just using Zoom for education, because there are many kids who don’t have access to broadband. That brings us up against the questions of rural poverty and the fact that many people move from rural communities to cities. And yet, if we look at the pandemic, cities tend to be bad for human beings. So these are the conversations that we should be having. I’m thinking about the innovations that AI will create, which could allow rural areas to be as wealthy as cities. We should be having really deep structural conversations about what our future looks like. Does it look like Blade Runner cities, or does it look like something else?
Sam Ransbotham: You were mentioning … kids as one extreme, and you had been talking about the board level, which seems like another extreme. It seems like there are a lot of other people between those two extremes who would need to learn how to work alongside [AI]. And I guess, just looking for some practical [advice], how do businesses get people to be comfortable with a machine as a teammate versus a normal worker as a teammate? For example, we’ve seen people be completely impatient with robots: “If it’s not perfect right off the bat, then why am I bothering teaching this machine how to do this?” You’d never be that impatient with another coworker. You remember when you were first learning to do a job. So how do we get that same sort of, I guess, empathy for the poor little machine?
Kay Firth-Butterfield: Well … I do think it’s an education and training piece that the company has to put in place, but it’s also important because sometimes we overtrust the technology: “The computer told us to do it.” That’s something we’ve been noticing, for example, in the problems we’re having with criminal sentencing, where judges have been overreliant on the fact that the machine is telling them this. And so it’s that education to not overtrust the machine, and also to trust [that] the machine is not going to take your job and is not going to be spying on you. There are a lot of things that employees are frightened of, and so you’ve got to make sure that they have some better understanding of what that robot or machinery is going to do with them. And that it’s a human-machine interaction as opposed to one dominating the other.
Shervin Khodabandeh: What’s your thinking on — to bring about large-scale understanding and change, not just at the board level, but from the fabric of the organization — how important is it that companies begin to understand the different modes of interaction between AI and human[s] and begin to test some of those things?
Kay Firth-Butterfield: Obviously, that’s really important. We do have a project that’s actually led by Salesforce called the Responsible Use of Technology. And … what we’re trying to do is to bring together all the different companies, like BCG, who are actually thinking about these issues and come up with some best practices. So how do you help your employees to really think about this interaction with AI? How do you make sure that the company itself is focused on ethical deployment of technology and where your employees are going to be working specifically with the technology [so] that they don’t fear it? I think there’s a lot of fear, and that is, at the moment, probably not useful at all.
Shervin Khodabandeh: Yes. You clearly can’t be friends with somebody if you’re afraid of them.
Kay Firth-Butterfield: Yes. What we are seeing is that when I was talking about AI and ethics in 2014, very few people were talking about it. Now, everybody — not everybody, but every enlightened person — is talking about it. Business is talking about it, and we’re talking about business here. Governments are talking about it, in the sense of “If there is something that is unsafe, usually we regulate it.” I think, actually, the time is now to be having these conversations. Do we regulate? Do we depend upon more soft-law approaches? Because what we are setting in place now is the future. And it’s not just our terrestrial future: If we’re going to go to Mars, we’re going to use a lot of AI.
We need to be really having these conversations. And one of the things that we have been doing is having a conversation that looks at positive futures. You can look across the panoply of sci-fi, and it’s almost all dystopian. So what we wanted to do is say, “OK, we have this potential with AI; what do we want to create?” And so we brought sci-fi writers, AI scientists, businesspeople, and economists together to really have that conversation. So we’re having the conversation about AI ethics, but the next conversation has to be, how do we want to systematically grow and develop AI for the benefit of the world and not just sectors of it?
Shervin Khodabandeh: I can recall the flavor of the conversations like this that I would have five years ago. They were very heavily tech-focused. What does that tell you in terms of a profile of future leaders of AI? What [are] the right sort of traits, skills, [and] profiles, do you think?
Kay Firth-Butterfield: I think we will see — so, I have a humanities background — I think we will see more humanities. There’s the AI piece that the technologists have to work on. But what we do know is that … there’s a Gartner study that says that by 2022, if we don’t deal with bias, 85% of algorithms will be erroneous because of that bias. If that’s anywhere near true, that’s really bad for your R&D and your company. So what we know is that we have to create those multistakeholder teams. And also, I see the future of AI, this discussion, as part of ESG [environmental, social, and corporate governance]. I see the AI ethics discussion moving into that more social realm of the way that companies think about some of the things that they do.
Sam Ransbotham: And that’s something that we heard from, for example, Prakhar [Mehrotra] at Walmart — that they’re thinking big picture about how these will connect and remove inefficiencies from the process. And that certainly has ESG implications. What we’ve seen with some of the other folks we’ve talked with about artificial intelligence in business is that they’ve transferred learning from things that they’ve done in one organization to another. They’ve moved. … This education component that you’ve mentioned before has not happened only within companies; it’s happened across companies, and it’s happened across functional areas. How do we encourage that? How do we get people to have those diverse experiences?
Kay Firth-Butterfield: Yes, I think that’s (a) right and (b) really important that we do. I was actually talking to somebody yesterday who had set up some really good resources and training around artificial intelligence in a bank, then moved to government, and then moved to yet another private-sector job and is doing the same thing. And many of the trainings that we need to be thinking about with artificial intelligence are cross-sectoral. We took an interesting look at all the ethical principles that are out there. There are over 190 now, from the Beijing [AI] Principles through to the Asilomar ones, etc.
Sam Ransbotham: That’s different from 2014.
Kay Firth-Butterfield: It’s very different from 2014. And one of the things that a lot of people have said to me in the past is, “Well, whose ethics are you talking about anyway?” And what we found was that there were about 10 things common to all of those 190 different sets of ethical principles. So there are 10 things that we care about as human beings, wherever we are in the world. And those 10 things are actually fairly cross-sectoral. They’re about safety and robustness. They’re about accountability, transparency, explainability. They’re about that conversation we had earlier, human-machine interaction. And they’re about, how does AI benefit us as humans? I think that ability to take what you’ve learned in one sector and move it to another is important and relatively straightforward.
Sam Ransbotham: And also it seems very human.
Kay Firth-Butterfield: Yeah.
Sam Ransbotham: That’s something that I think that the machines themselves are going to struggle with and need at least our help for a while.
Kay Firth-Butterfield: Oh, undoubtedly. Yes. And it probably doesn’t need saying to this audience, but it’s worth saying that these machines are not really very clever yet.
Sam Ransbotham: There’s still time. We’re still —
Shervin Khodabandeh: And thank God for that.
[Kay laughs.]
Sam Ransbotham: Kay, thank you for taking the time to talk to us. We’ve really enjoyed it.
Shervin Khodabandeh: Yeah, thank you so much, Kay, it’s really been a pleasure hearing your views and your leadership on this topic.
Kay Firth-Butterfield: Thank you so much to both of you. It’s been a pleasure and a privilege to be with you. I could have talked on for hours.
Sam Ransbotham: But we can’t, because that is the end of our episode and that is the end of our first season. Thank you for joining us on this podcast.
Shervin Khodabandeh: Thank you very much.
Allison Ryder: Thanks for listening to Me, Myself, and AI. If you’re enjoying the show, take a minute to write us a review. If you send us a screenshot, we’ll send you a collection of MIT SMR’s best articles on artificial intelligence, free for a limited time. Send your review screenshot to smrfeedback@mit.edu.