
As the new President of the European Commission, Ursula von der Leyen promised legislation focusing on Artificial Intelligence within the first 100 days of taking office. The outcome was the release on 19 February 2020 of a White Paper on Artificial Intelligence, “A European approach to excellence and trust”.

FERMA welcomes the European Commission’s approach, which is in line with its vision on managing AI-related risks as presented in its own White Paper “Artificial Intelligence applied to Risk Management”. We are preparing a response to the Commission’s latest proposals.

Following the European elections in May 2019, President von der Leyen announced “A Europe Fit For the Digital Age” as one of her priorities to strengthen the European Union’s Digital Single Market. The White Paper on AI is a first step towards a broader discussion with stakeholders on the future direction of policies.

The Commission’s proposed regulatory framework, and the steps to move toward it, are structured as follows:

  • Updating existing European laws such as the Product Liability Directive, General Product Safety Directive, Unfair Commercial Practices Directive and the Consumer Rights Directive.
  • Specific and targeted regulation on AI covering areas that the existing horizontal and sectoral legislation does not address.

Of interest to risk managers in the White Paper is the emphasis on a ‘risk-based’ approach to the regulatory regime. In practical terms, a risk-based approach, according to the Commission, is important to ensure that regulation is proportionate.

Thus, the Commission envisages a tiered system in which ‘high risk’ AI systems are subject to mandatory certification before they can be placed on the market. Whether an AI application is high risk would be determined in light of what is at stake, considering whether both the sector and the intended use involve significant risks, in particular risks that could affect the protection of safety, consumer rights and fundamental rights.

More specifically, an AI application should be considered high-risk where it meets both of the following criteria:

  • First, the AI application is employed in a sector such as healthcare, transport, energy or parts of the public sector, where significant risks can be expected given the activities typically undertaken.
  • Second, the AI application in the sector in question is used in such a manner that significant risks are likely to arise. The assessment of the level of risk of a given use could be based on the potential impact on the affected parties.

Parties have until 19 May 2020 to submit their comments. If you wish to contribute to FERMA’s response, contact administration@ferma.eu.