Last week, the European Economic and Social Committee (EESC) proposed that the EU should develop a system of certification for trusted applications and products using artificial intelligence (AI), to be delivered by an independent body.
AI systems and machine learning are so complex that even their original developers cannot fully predict their outcomes and must build tools to test their limits. Against this backdrop, the proposed certification would, in principle, increase trust in AI amongst the European public.
The EESC has also proposed that an independent body be entrusted with testing for bias, prejudice, discrimination, robustness, resilience and, especially, safety. Companies could use the certification to prove that the products they develop and market are based on reliable AI systems, in line with European standards.
The Committee has also emphasised the need for clear rules on responsibility: liability must always rest with a human. Machines cannot be held liable in cases of failure, the EESC explained.
The assessment list will be reviewed early next year. If deemed appropriate, the European Commission will propose further measures.