Simone Oldekop is head of responsible AI at Carl Zeiss, a multinational manufacturer of optical systems and optoelectronics. Oldekop is steering the Zeiss AI governance framework as leader of the company's responsible AI office. She is an international lawyer with both German and French law degrees. She has practiced law in the medical technology, finance, sustainability, and media industries and is certified as both a data protection officer and an information security officer. Her passion for AI governance stems from her extensive experience with the key pillars of responsible AI, as well as a strong interest in the interdisciplinary and social issues surrounding AI governance.
Voting History
| Statement | Response |
| --- | --- |
| There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. | Disagree<br><br>"There are various organizations and initiatives at the global level working on the development of guidelines and standards for the responsible development and use of artificial intelligence, such as the OECD, the EU Commission, IEEE, ISO, and the World Economic Forum.<br><br>However, there are currently no uniform global standards and codes for AI governance. Each organization and country has developed its own standards and codes based on its specific challenges and needs. Some countries, such as the U.S. and China, have their own national strategies and initiatives to promote AI development and use. This easily leads to implementation challenges, as it can result in conflicting key terms, a growing number of divergent assessments, and other issues. Overall, there has been progress in the international alignment of codes of conduct and standards for global companies in the area of AI governance, but much work remains to be done. Discussions and cooperation between countries and organizations are expected to continue to increase in the coming years in order to reach a broader international consensus." |
| Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Neither agree nor disagree<br><br>"The assessment is a mixed one. The readiness of organizations depends mainly on their size, their business, how well they are already prepared in the underlying compliance fields, such as privacy and cybersecurity, and whether they can build upon existing governance structures.<br><br>Organizations' maturity in these disciplines and good coordination of the respective frameworks play a significant role in adapting to the EU AI Act. The higher the level of maturity and the more streamlined the coordination, the more likely a smooth transition. Certainly, readiness also depends on the applicable risk categories and the number of resulting requirements, which varies among organizations. Readiness also relates to whether organizations expected the EU AI Act to be passed. Given the debates at the European level, some might have relied on the scope and transition period having a smaller effect, if not counted on the postponement of the EU AI Act altogether. As a result, some organizations might start a little later and be less well prepared, which could result in unexpected challenges and delays during the first 12 months." |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Neither agree nor disagree<br><br>"The adaptation of risk management capabilities currently appears to depend heavily on the industry sector; it is often more advanced in highly regulated industries such as finance or medtech. In addition, companies are probably better prepared where they are targeted by AI regulation, such as the upcoming EU AI Act and its respective requirements." |