Building trust in AI: first European ethics guidelines


The European Union published its Ethics Guidelines for Trustworthy Artificial Intelligence on 8 April 2019. The guidelines were prepared by the EU High-Level Expert Group on AI, an independent expert group set up by the European Commission in June 2018, and they build on the results of a public consultation to which FERMA provided feedback.

FERMA welcomes these ethics guidelines, the first of their kind in the world, which aim to strengthen public trust in AI. They are not legally binding, but they could shape future EU legislation. The EU wants to lead in ethical AI, as it did with the GDPR for personal data, and it aims to build an international consensus on AI ethics guidelines.

The guidelines set out seven ethical requirements to be followed by companies and governments when developing AI applications:

  1. Human agency and oversight: respect of fundamental rights, human agency and maintaining human oversight
  2. Technical robustness and safety: resilience to attack and security, a fallback plan and general safety, accuracy, reliability and reproducibility
  3. Privacy and data governance: respect for privacy, quality and integrity of data, and access to data
  4. Transparency: traceability, explainability and communication
  5. Diversity, non-discrimination and fairness: the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
  6. Societal and environmental wellbeing: sustainability and environmental friendliness, social impact, society and democracy
  7. Accountability: auditability, minimisation and reporting of negative impact, trade-offs and redress.

The report includes a list of practical questions (the “trustworthy AI assessment list”) to help users put the requirements into practice. Stakeholders will test this list in order to gather feedback for improvement.

Interested stakeholders can register for the piloting process and start testing the assessment list. Feedback will be gathered through an online survey, which will be launched in June 2019. Based on this feedback, the High-Level Expert Group on AI will propose a revised version of the assessment list to the Commission in early 2020.
