Responsible AI / Panelist

Katia Walsh

Harvard Business School

United States

Katia Walsh is Harvard Business School’s inaugural chief digital officer, focused on delivering educational enhancement and commercial value through generative AI-powered products. Previously, she cofounded the Chief Customer Office at Prudential Financial, was Vodafone Group’s first chief data and AI officer, and was Levi Strauss & Co.’s first chief strategy and AI officer. Walsh is on the board of Securian Financial and the not-for-profit Global Legal Entity Identifier Foundation. She has been recognized as a 2023 SwissCognitive Global AI Ambassador, one of the Top 50 Women Leaders of San Francisco for 2022, and VentureBeat’s 2021 global AI Mentor. She holds a doctorate from the University of Missouri-Columbia in strategic communication.

Learn more about Walsh’s approach to AI via the Me, Myself, and AI podcast.

Voting History

Statement Response
There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. Disagree “There isn’t much international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across their organizations. We are dealing with a lag between theory and practice. Yes, global companies generally agree on the principles around using AI in safe, ethical, and trustworthy ways. However, the reality of implementing specifics in practice is very different due to the novelty of the technology, varying degrees of implementation in companies across industries around the world, and the nature of ethical dilemmas, which is, by definition, not straightforward.

As the technology evolves, adoption accelerates, and more regulation takes hold, we are likely to see greater alignment on unregulated but equally important aspects of AI to ensure its responsible deployment and scaling. After all, how can we expect to realize value from the technology if we don’t protect our customers, employees, and organizations first and ensure that we do the right thing, no matter how hard that might be or how easy the technology could make it to do the wrong thing? The latter is all the more reason we need responsible AI to progress globally and at speed for all.”
Companies should be required to make disclosures about the use of AI in their products and offerings to customers. Neither agree nor disagree “The answer is, it depends. It depends on what kind of “use” and for what purpose. Should companies disclose that they use AI for secondary research, lookup of contact information, and preparation for, or to get to, a sales or client meeting? No — the same way they do not have to disclose use of Google, a social network (e.g., LinkedIn), or a mobile phone. What about use of AI agents for customer service? Yes — customers should know that they are interacting with a technology instead of a human. Therefore, companies should disclose the use of AI for that purpose. Interestingly, that would not necessarily be to their detriment. Early practices with chatbots for customer service at a global telecommunications provider showed consistently greater satisfaction with chatbots versus human agents. This could be due to different expectations of technology compared with humans but is still noteworthy. And what about use of AI for the creation of text, images, or video? Perhaps. Adobe already discloses the use of AI for “photos.” Over time, as the technology becomes part of daily life, AI use disclosure may follow the same pattern as artists’ descriptions of the materials they use for painting, etching, lithography, or sculpting.”
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. Neither agree nor disagree “Organizations will have no choice but to meet these requirements ... eventually. And not all requirements are created equal. Some organizations, like those with a high level of cybersecurity, have no choice but to address them immediately. Other requirements, such as high-quality data sets or detailed documentation, have less dire consequences if not implemented immediately or consistently.

One of the hardest requirements to meet would be that for transparency of AI algorithms. I am a general supporter of the EU AI Act, and I understand the reasons for the transparency requirement: to understand judgment, reasoning, and decision-making by machines. But the reality is, we do not have that transparency into human decision-making either. AI will get even more complex, and I fear this requirement will be unattainable. There could be other ways to minimize bias and maximize fairness, including diversity of data sets; implementation of checks and balances; keeping the proverbial human in the loop; stress-testing that scrutinizes the levels of accuracy, fairness, and bias in outcomes; and validation by additional machine systems. That’s the one requirement of the EU AI Act that, as written today, is set up to fail.”
Organizations are sufficiently expanding risk management capabilities to address AI-related risks. Disagree “The technology is advancing so rapidly that there is no way to do anything “sufficiently,” including expanding risk management capabilities to address AI-related risks. While we know many of the risks, I imagine a number of those to come are still unknown. The technology will keep us all on our toes for years to come because it’s developing in nanoseconds.”
Executives usually think of RAI as a technology issue. Agree “Yes, it is true that executives tend to view responsible AI as a technology issue. However, that is a misconception. Responsible AI should be part of the values of the full organization, just as critical as other key pillars, such as sustainability; diversity, equity, and inclusion; and contributions to making a positive difference in society and the world. In summary, responsible AI should be a core issue for an organization, not relegated to technology only.”
RAI constrains AI-related innovation. Strongly disagree “The practice of responsible AI is not at odds with innovation at all. The two can and must coexist. Innovation is not only about unleashing imagination and achieving what once wasn’t possible; it’s about doing so with ethics, values, and respect for rights. When innovation and responsibility combine, we see the greatest potential of technology and human (including emotional) intelligence, and together the two will achieve what neither could alone.”
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. Strongly agree “Across industries, businesses are now also tech and data companies. Those companies poised to successfully compete in the future have embraced that reality and understand that long-term, sustainable growth requires digital technologies to create new business processes, cultures, customer experiences, and opportunities. One of the myths about digital transformation is that it’s all about harnessing technology. For digital transformations to be successful, they need to be grounded in and underpinned by an organization’s guiding values. And they need to be in service to business drivers that will enable and protect a company’s right to operate in the future, including corporate social responsibility priorities.”
Responsible AI should be a part of the top management agenda. Strongly agree “Every company is now a technology company, and building a responsible AI framework from the start is critical to running a responsible business. Data, digital, and AI are core to how companies connect with consumers, drive internal operations, and chart future strategies. Therefore, these technologies can immensely impact consumers, employees, partners, and the organization overall. Just like human intelligence, AI is susceptible to limitations, bias, and error. We must ensure opportunities for responsible and ethical applications, continuous measurement, and advancement.

As we see continued adoption of AI throughout organizations, it’s vital that leaders incorporate ethical and responsible practices, including controls, to prevent unintended or harmful outcomes. Leading a modern organization includes building the right strategy and a plan that achieves desired outcomes, eliminates bias, ensures transparency, and withstands the highest data privacy, security, and protection measures.

An effective AI program is ethical and responsible, and these factors cannot be an afterthought or one-time conversation. Ethical and responsible AI must be a part of every organization’s core.”