Nanjira Sambuli is a researcher, policy analyst, and strategist studying the gendered impacts of technology adoption and digitalization on governance, diplomacy, the media, entrepreneurship, and culture, especially in Africa. She is a fellow in the Technology and International Affairs Program at the Carnegie Endowment for International Peace and a Ford Global Fellow; a board member of Development Gateway and the Digital Impact Alliance; an adviser to the Carnegie Council’s AI and Equality Initiative and the Alliance for Inclusive Algorithms; and a diplomacy moderator at the Geneva Science and Diplomacy Anticipator. Sambuli previously led digital equality advocacy efforts at the World Wide Web Foundation and provided strategic guidance to iHub Nairobi.
Voting History
| Statement | Response | Comment |
| --- | --- | --- |
| There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. | Neither agree nor disagree | “There is a flurry of initiatives aiming to influence norms, standards, and codes of conduct on responsible AI and on AI governance more broadly. At a cursory glance, it indicates alignment, especially where companies and governments have come together and made declarations and (mostly voluntary) commitments on responsible AI and AI safety. However, alignment on these emerging codes to guide how global companies operate is fraught with fierce geopolitical competition. As such, the interpretation of these emerging codes of conduct and standards is more than just about technical guardrails. Besides, the power dynamics between global companies and governments in many jurisdictions are skewed, so one company may apply such codes and standards disparately, depending on how such a balance of power tilts. These realities underline the fact that the technological is political.” |
| Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Agree | “The timelines provided for compliance, coupled with the audit processes and mechanisms available, should make it feasible for organizations to meet the act’s requirements.” |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Neither agree nor disagree | “An immediate question is how AI-related risk is markedly different from other risks arising from the use and diffusion of digital and emerging tech. Risk management capabilities extending to AI-related risks are more likely to be found in big/leading companies in tech and other sectors on account of their either designing, being early adopters of, or being disrupted by AI. Micro, small, and medium enterprises that form the bulk of organizations in many economies may not yet have the capacity to maintain dedicated risk management teams or the resources to use third-party risk management services.<br><br>Organizations are likely still exploring how AI affects them and their spheres of operation, with other immediate risks, such as cybersecurity and data protection, taking precedence. However, AI-related risks might already emerge in existing risk assessments owing to digitalization more broadly and thus not be treated as unique or separate per se.” |