FERMA calls for urgent attention to two priorities for European business in the debate on ethical use of Artificial Intelligence

The Federation of European Risk Management Associations (FERMA) welcomes the European Commission's appointment of experts to the High-Level Expert Group on Artificial Intelligence (AI HLG) and calls for urgent attention to two priorities for European business.

The High-Level Expert Group will support the implementation of the European strategy on AI, including the development of ethical guidelines by the end of this year. There are currently no clear ethical rules for the use of data generated by AI tools, and the AI guidelines will take into account principles of data protection and transparency.

FERMA is therefore calling on the High-Level Expert Group to address immediately the following two priorities for corporate organisations:

  1. Draw a clear line between the opportunities offered by AI technologies and the threats those same technologies pose to the insurability of organisations through over-reliance on AI in decision-making processes.
  2. Define ethical rules for the corporate use of AI, not only for employees but also for suppliers and all actors in the value chain. AI tools will allow increased and constant monitoring of a very large number of parameters, and the risk management profession believes that this greater use of data could create concerns among stakeholders and risks to reputation.

The President of FERMA Jo Willaert says, “FERMA stands ready to bring its unique expertise in enterprise risk management methodology and tools, such as risk identification and mapping, risk control and risk financing, to the discussion so we can manage the threats and opportunities posed by the rise of AI to our organisations and society within acceptable risk tolerances.”

He adds, “FERMA argues that the new possibilities offered by AI must remain compatible with the public interest and the interests of the economy and commercial organisations. AI is already a reality in many organisations, and it is going to disrupt our comprehension of the future.

“Public authorities have a key role to play in ensuring that there is human judgement as a last resort. This dialogue between regulators and AI users must start now, and the newly established AI HLG and the open-access European AI Alliance are the right settings for it.”
