Rainer Hoffmann is the chief data officer at EnBW, where he drives the scaling of data and AI initiatives to help shape a sustainable energy future. Concurrently, he is an adjunct lecturer at the Karlsruhe Institute of Technology (KIT), where he teaches the course Responsible Artificial Intelligence. Previously at EnBW, he was accountable for the Data & Analytics Excellence program, worked as a data scientist in energy trading, and established algorithmic trading. Hoffmann is an industrial engineer and holds a doctorate in stochastic optimization from KIT and a professional certificate in AI from MIT.
Voting History

Statement: There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization.
Response: Disagree
“In my view, there is a certain degree of international alignment on RAI principles, exemplified by frameworks such as the OECD AI Principles. Consequently, the RAI principles adopted by various organizations tend to be quite similar and straightforward.
However, the challenge arises when it comes to operationalizing these principles. Each organization seems to undertake pioneering efforts in this regard. The absence of standardized alignment complicates the implementation of RAI, particularly for organizations that lack dedicated resources for this purpose. Moreover, the optimal approach to operationalizing RAI principles depends heavily on the specific circumstances within each organization, such as its existing processes, committees, and organizational culture. Given these challenges, I believe that some degree of alignment would be beneficial. Standardized frameworks and guidelines could provide a foundation that organizations can adapt to their unique needs, thereby facilitating more effective and consistent implementation of RAI.”

Statement: Companies should be required to make disclosures about the use of AI in their products and offerings to customers.
Response: Neither agree nor disagree
“As AI is a general-purpose technology, it will inevitably become integral to many products in the future. Requiring companies to disclose the use of AI, along with implementation details, for every single product is impractical. Although organizations could provide such information easily, it might overwhelm customers; much like privacy statements, such disclosures would be seldom read and often ignored, especially as AI becomes ubiquitous. Additionally, disclosing specific AI implementations could reveal trade secrets.
Ultimately, it is crucial to hold organizations accountable for their AI-based products. This requires a degree of transparency regarding their overall AI development practices rather than product-specific disclosures. Some key questions: What principles do organizations adhere to when developing AI? How do they ensure that these principles are followed? Do they conduct risk assessments? How do they ensure that their staff members are properly qualified? In summary, organizations should provide insights into their AI development and operational practices, focusing on the conduct and standards of their employees rather than detailing individual AI solutions.”

Statement: Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months.
Response: Disagree
“Should the AI Act be enacted this year, only select provisions will become effective over the following 12 months. Notably, Article 5, which outlines prohibited AI practices, will warrant particular attention in the coming year. I expect that many organizations will undertake reviews to determine whether they are currently utilizing any AI systems that fall under these prohibitions or whether they plan to implement such systems. Moreover, I anticipate that these organizations will put safeguards or compliance mechanisms in place to prevent the future deployment of AI systems that violate these restrictions.
However, full compliance with the AI Act’s requirements within a single year seems impossible, particularly for large organizations with extensive AI deployments. Such organizations face multiple challenges: achieving transparency across myriad AI use cases organizationwide, discerning which systems fall under the act’s purview, interpreting and adapting to still-ambiguous requirements, and creating an oversight mechanism to consistently evaluate every new AI introduction for conformity.”

Statement: Organizations are sufficiently expanding risk management capabilities to address AI-related risks.
Response: Disagree
“My experience with the European AI ecosystem suggests that companies have yet to prioritize risk management specifically for AI systems. Traditionally, risk management has been viewed as a subset of the responsibilities held by product managers and developers within their daily tasks. Additionally, while information security risks are considered, these represent merely a fraction of the potential risks associated with AI.
However, with the introduction of the European AI Act, which mandates risk management for high-risk applications, organizations are beginning to acknowledge the importance of AI-related risk considerations. Despite this progress, the act’s focus on high-risk applications alone leaves me skeptical that AI-related risks will be comprehensively addressed across all applications.”