Franziska Weindauer

TÜV AI.Lab

Germany

Franziska Weindauer is the CEO of TÜV AI.Lab, a joint venture founded by companies in the certification industry in Germany and beyond. TÜV AI.Lab is paving the way for trustworthy AI by developing conformity criteria and test methods for AI systems. It is also helping to prepare the AI ecosystem for new regulatory requirements, such as the European Union’s AI Act. Previously, Weindauer was a senior adviser on digital policy in the German Federal Chancellery; head of policy at Bitkom, Europe’s largest digital industry association; and an adviser to a member of the European Parliament. She has attended universities in the Netherlands, the U.K., and Turkey and has a background in European studies.

Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.

Response: Disagree

“The answer to this question depends on the frame of reference. While there are ongoing efforts to establish principles and concrete codes of practice for responsible AI across industries, such as the Alliance for Trust in AI and the Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, international alignment on RAI standards is still insufficient for effective implementation. Even within the EU, where the AI Act came into force on Aug. 1, 2024, the requirements that follow from it are still being worked out in practice. Although progress is being made, many questions remain unresolved, so sufficient alignment for effective implementation may be achievable in the future but is not a given at this moment.”
Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.

Response: Agree

“In particular, transparency should be mandatory for AI-based applications that involve direct customer interaction, such as bots, as well as AI-generated audio or video content. This is also what the EU AI Act focuses on in terms of transparency. These measures, such as informing customers by simply adding a disclaimer, should be feasible.”
Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.

Response: Disagree

“We should clearly differentiate: Companies will be ready if and when they have to be, even if it takes a lot of effort. But companies only have to be compliant if they fall under the regulation; that is, if they provide or deploy a high-risk or general-purpose AI (GPAI) product or system falling under the scope of Annex I or Annex III or the GPAI provisions. This means that most of the AI use cases we see will not fall under the AI Act and won’t have to be compliant. Second, AI systems falling under Annex I (medical devices, machinery, toys, etc.) will have three years to comply; AI systems falling under Annex III (education, HR, public administration) will have two years to comply; and GPAI systems with systemic risks falling under the scope of the regulation will have one year to comply.”
Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.

Response: Disagree

“My feeling is that most organizations still behave like a rabbit caught in the headlights. They know something is coming but are rather paralyzed in their response, as they simply don’t know how to deploy or deal with prevalent AI systems, let alone address their possible risks. The two biggest issues, in my mind, are (a) a lack of knowledge and qualified staff, and (b) missing frameworks and guidelines, developed by knowledgeable actors in the field, to help organizations implement a risk management system.”