Steven Vosloo is UNICEF’s digital policy specialist in the Office of Global Insight and Policy, leading work at the intersection of children and their digital lives, including AI, digital literacy, and misinformation. Previously, he developed guidelines for UNESCO on how technology can be better designed for youth and adults with low literacy and low digital skills, established and led the organization’s mobile learning program, and coauthored its Policy Guidelines for Mobile Learning. Before that, Vosloo was head of mobile in the Innovation Lab at Pearson South Africa.
Voting History
Statement | Response | Comment |
---|---|---|
There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. | Neither agree nor disagree | “While there has been much progress toward regional or international alignment on AI commitments and cooperation — an encouraging trend — there should be greater global representation and more detail in the codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.” |
Companies should be required to make disclosures about the use of AI in their products and offerings to customers. | Strongly agree | “For AI to be adopted and leveraged for maximum benefit, it needs to be trusted. Gaining people’s trust is multifaceted, such as through AI safety, accountability, human agency, and transparency — including through disclosure. As UNICEF has noted in its guidance on AI, children should be notified when they interact directly with an AI system, to avoid a situation where they think they are interacting with a human. In addition, AI should not be used as the only input to determine key life decisions that impact children — for example, medical diagnoses, welfare decisions, or processing school applications — without a human in the loop to make the final decision. Children and their caregivers should be notified that AI systems have been used to guide such important decisions. Broadly following this approach for a vulnerable population group will benefit all AI users.” |
The management of RAI should be centralized in a specific function (versus decentralized across multiple functions and business units). | Disagree | “While there should be a dedicated function that is ultimately accountable for responsible AI, the practice of RAI is a whole-of-organization undertaking. A hybrid approach is thus ideal: a central function, made up of a diverse range of stakeholders in the organization, that relies on responsible practices across multiple functions and business units. The decentralized units are equally accountable for implementing RAI and shaping the thinking of the dedicated unit.” |
Most RAI programs are unprepared to address the risks of new generative AI tools. | Disagree | “The current principles of responsible AI apply to newly developed generative AI tools. For example, it is still necessary to aim for AI systems that are transparent and explainable. However, in terms of being prepared for generative AI, responsible AI programs should consider the potential impacts of these tools, such as the widespread generation of disinformation or misinformation. Consequently, responsible AI programs should place greater emphasis on anticipating novel use cases and evaluating their potential impact to inform decisions on the development of AI systems. This is especially critical because negative impacts will be felt not only today but also by future generations.” |
RAI programs effectively address the risks of third-party AI tools. | Neither agree nor disagree | “I would think it depends on the organization or company and its rigor in addressing the risks. Determining how to fully assess the risks (real or potential) of third-party AI tools can be challenging.” |
Executives usually think of RAI as a technology issue. | Neither agree nor disagree | “It is important for executives not to see RAI as purely a technology issue. Responsible use of technology should represent an organization’s approach to innovation and be embedded in its strategies, processes, and roles — such as human and child rights experts, impact assessments, and, of course, the tech itself.” |
Mature RAI programs minimize AI system failures. | Disagree | “I would rather say that mature RAI programs help to reduce AI system failures, since AI systems can still fail even when they are developed responsibly and designed not to do harm. This could be due to limitations in algorithmic models, poorly scoped system goals, or problems integrating with other systems.” |
RAI constrains AI-related innovation. | Disagree | “Responsible AI, when done well, does not constrain innovation. In fact, working to create clear processes that provide guardrails for how to develop AI responsibly can help to focus the innovation process. In addition, for corporates, it makes good business sense. As noted in the UNICEF Policy Guidance on AI for Children, ‘As consumers and the wider public make greater demands for technology services to have the right safeguards in place, business should capitalize on this market opportunity and thereby also mitigate against corporate reputational risks for AI-related harms.’” |
Organizations should tie their responsible AI efforts to their corporate social responsibility efforts. | Disagree | “The challenge is that corporate social responsibility efforts can change as companies focus on different issues over time. Responsible AI is an evergreen issue that needs to be anchored in the core functioning of the organization. That is not to say that corporate social responsibility does not have a role to play, for example, in working on issues such as greater AI skills for women and girls or amplifying youth voices through consultations. But these efforts should stem from a responsible AI code that is embedded in the core of the organization and has both an internal and an external impact.” |
Responsible AI should be a part of the top management agenda. | Strongly agree | “Commitment to implementing responsible AI has to come from the top. It is not enough to expect product managers and software developers to make difficult decisions around the responsible design of AI systems when they are under constant pressure to deliver on corporate metrics. They need a clear message from top management on where the company’s priorities lie and that they have support to implement AI responsibly. Having responsible AI as an ongoing agenda item will help keep the topic and commitment fresh in mind. But beyond that, capacity-building on AI and human and child rights for top management must also happen. In this way, they can have a clearer understanding of rights, potential impacts on those rights — positive and negative — and their role in managing the responsible AI agenda.” |