Ben Dias is chief AI scientist at IAG. He has over 20 years of industry experience, with previous roles at EasyJet, Royal Mail, Tesco, and Unilever. His current focus is building and leading data teams and applying the lean startup approach to AI within large organizations. Dias actively engages with the U.K. mathematics community to help inspire the next generation of mathematicians and AI professionals. He also strives to ensure that AI solutions drive more equity and inclusion rather than exacerbating existing biases and inequalities. Dias holds a doctorate in computer vision and a master’s degree in mathematics and astronomy, both from University College London.
## Voting History
Statement | Response | Comment |
---|---|---|
There is sufficient international alignment on emerging codes of conduct and standards for global companies to effectively implement RAI requirements across the organization. | Strongly agree | “The U.S. and EU define AI as encompassing all machine-based systems that can make decisions, recommendations, or predictions. But the U.K. government defines AI by reference to the two key characteristics of adaptivity and autonomy. I believe that the U.K. definition is more future-proof and more useful, as it is these two characteristics that create the new risks that need regulating and mitigating. However, although countries differ in how they define AI and in their implementation timelines, all generally seem to be taking a risk-based approach to AI regulation. In addition, the risks they highlight are not unreasonable and are ones companies would want to mitigate regardless of regulation. I believe this is the ideal approach, as AI is still a developing technology, and this consistency across countries also provides the alignment companies need in order to effectively implement RAI requirements across their organizations. The regulatory frameworks will, however, continue to evolve and may diverge in the future. Therefore, I would recommend using an AI governance platform that specializes in keeping up to date with international regulatory frameworks.” |
Companies should be required to make disclosures about the use of AI in their products and offerings to customers. | Strongly agree | “Given the hype and fear equally prevalent in the media and among the general public, it is critical for companies to disclose whether and how they use AI in their products and offerings. The biggest challenge companies will face is explaining, in customer-friendly language, what type of AI they are using and for what purpose. The main concern most customers will have is how the AI will use their personal data and, in particular, whether it will train itself on that data. Therefore, addressing this clearly and up front will be critical. Customers will also want to know how the AI is governed, so it will be important to let them know whether there is a human in the loop or how a human can be contacted to challenge any decisions or actions taken by the AI.<br><br>If the AI used is both adaptive and autonomous, companies should be transparent about the fact that some outcomes may seem counterintuitive and may even sometimes be wrong. In these cases, the company should also aim to provide a customer-friendly explanation for each key customer-facing output while always providing an opportunity and a mechanism to challenge the output if required.” |
Organizations will be ready to meet the requirements of the EU AI Act as they phase in over the next 12 months. | Neither agree nor disagree | “Some elements of the EU AI Act will come into effect at various stages, with bans on prohibited practices applying within six months. Therefore, some basic obligations will start to apply in 2024, which some organizations may not be prepared for. While most of the prohibitions outlined in the act may not be relevant to most organizations, rules covering the use of AI in recruitment, the monitoring of employees’ well-being, and biometrics will apply to most of them. Most will be considered “deployers,” generally incurring fewer obligations unless they deploy a high-risk AI system. However, if an organization modifies a purchased high-risk AI system, it may take on the role of a provider, with significantly more responsibilities. AI system suppliers may also amend their contracts, potentially shifting some of their compliance responsibility and liability to their customers. Therefore, all organizations will need to establish processes to ascertain whether any of their AI systems are subject to this regulation. Despite this new regulation, organizations must also continue to comply with existing laws, such as GDPR, consumer legislation, and competition law, in their routine operations.” |