Teemu Roos is a professor of computer science at the University of Helsinki and leads the AI education program at the Finnish Center for AI. He is also the lead instructor of “Elements of AI,” a free online course from the University of Helsinki and the learning technology company MinnaLearn that has had over 1.2 million participants. He has held visiting scientist positions at the University of California, Berkeley, and MIT, and was a visiting fellow at Cambridge University. His research interests include statistical machine learning and its applications in astrophysics, neuroscience, and epidemiology.
Voting History
| Statement | Response |
| --- | --- |
| There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. | Strongly disagree. “While regional regulation is being established in many regions, like in Europe, global alignment is still far beyond the horizon. On a positive note, international standards are being established. However, how they will be enforced through binding regulation, and how such regulation can be harmonized globally, remains to be seen. One possible scenario is the Brussels effect, meaning that other regions imitate European regulatory frameworks. But realistically speaking, it is unlikely that either the U.S. or China, for instance, would fully comply with Europe’s (relatively) heavy-handed regulations, most notably the AI Act, the Digital Markets Act, and the Digital Services Act.” |
| Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Neither agree nor disagree. “I haven’t followed it closely, but according to the European Parliament, the AI Act still needs to undergo some processes before being formally and finally agreed on and will thereafter enter into force in six to 36 months, depending on the risk level. Hence, it seems likely that organizations will be allowed more time than 12 months from now, which is also likely to be required due to the complexity of the package.” |
| Organizations are sufficiently expanding risk management capabilities to address AI-related risks. | Agree. “On the one hand, companies are probably applying extensive risk management measures already; they’re just not called AI risk management. What I mean is that we sometimes seem to think that AI is a separate universe, where only AI-specific risks and opportunities exist, when in fact every AI application is deployed in a certain context. Trying to create separate AI risk management from scratch will likely only lead to inconsistencies with existing, overlapping risk management capabilities.<br><br>On the other hand, some types of risks, such as intellectual property and privacy issues, are indeed magnified by AI, which means that additional risk management capabilities are called for. Additionally, as the EU’s AI Act is being brought into effect, companies will need to invest in compliance, not unlike the introduction of GDPR in 2018.” |