Responsible AI / Panelist

Ellen Nielsen

Chevron

United States

Ellen Nielsen is the former chief data officer at Chevron, where she focused on creating a data-oriented culture partnered with value-chain thinking. She has over 30 years of global experience as an executive in IT, digital, data, procurement, and supply chain across multiple industries. Nielsen is a regular speaker at industry events and has received numerous awards and recognitions, including being named to CDO Magazine’s 2023 Global Data Power Women list. She was also ranked fifth on the 2023 DataIQ 100 list of the most influential people in data in the United States. She serves on a variety of boards, including those of the Petroleum Industry Data Exchange and Women Leaders in Data & AI.

Learn more about Nielsen’s approach to AI via the Me, Myself, and AI podcast.

Voting History

Statement Response
There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. Neither agree nor disagree “Different legal systems, cultural norms, and political climates influence RAI approaches, making complete alignment unlikely. International alignment on RAI is a noble goal, but it is not a reality.

Some countries prioritize ethics and data privacy while others focus on innovation and economic growth, which can make it challenging to create a unified global standard. Thus, global companies need to invest in local or regional expertise to comply with applicable laws, build strong internal governance, and maintain relationships with regulators to stay compliant and share expertise back with them.

RAI is still a young practice compared with other regulatory fields. The lack of established standards and the rapid pace of technological development mean that the landscape is continually evolving. Companies need to be agile and proactive in their approach to compliance, regularly updating their practices to align with the latest developments in local, regional, and international standards.”
Companies should be required to make disclosures about the use of AI in their products and offerings to customers. Strongly agree “Absolutely. Transparency is paramount to maintaining consumer trust. Consumers have a right to understand what they are purchasing and to ensure that these products align with their values on quality, ethics, sustainability, and safety. Just as historical precedents have established trust through the disclosure of product ingredients and adherence to safety regulations, the same principles should apply to AI.

Consumer protection institutions, like the Federal Trade Commission, established in 1914, highlight the importance of safeguarding consumer interests. Therefore, it is reasonable to argue that companies should be required to disclose the use of AI, providing consumers with the necessary information to make informed decisions and understand decisions made by AI, such as mortgage qualifications, clinical trial selections, autopilot in cars, or AI-created information.

Consumers have the right to know how data is being used and manipulated. AI systems often rely on vast amounts of personal data to function effectively. Without disclosure, consumers are left in the dark about how data is being utilized and for what purpose.”
Organizations are sufficiently expanding risk management capabilities to address AI-related risks. Strongly disagree “Businesses are merely scratching the surface of AI risk management. Responsible AI is a new challenge. Whereas operational and financial risks are well defined, the fast-evolving nature of AI-related risks makes managing them more complex. It is crucial to understand the specific elements that require protection and safeguarding. Many companies have not undertaken that exercise.

Consider a responsible AI trilemma: Any risk management framework hinges on three points: people, process, and technology.

People: Continuous education is crucial for managing AI risk. The expense of training workers to manage AI-driven challenges may exceed a company’s capacity and resources, leading it to avoid investing. The demand for AI governance and risk experts is outpacing the supply.

Process: Processes form the backbone of any risk management system. Policies and regulations on AI are still emerging, making it hard to incorporate effective AI risk management strategies that can adapt and evolve.

Technology: AI calls for security that goes beyond safeguarding data to also protect the very insights that drive business strategies. The technologies aimed at managing these newer, subtler risks are themselves evolving.”