EPP Group: Don’t fear AI, but regulate risks

The EPP Group wants clear standards for a human-centred approach to Artificial Intelligence (AI), based on European ethical standards and democratic values: don’t fear AI, but regulate its risks. Europe must have guardrails in place to ensure that powerful new AI systems, such as ChatGPT, are developed and deployed responsibly. This is the stance the EPP Group took today when the joint Committees on Civil Liberties, Justice and Home Affairs (LIBE) and on the Internal Market and Consumer Protection (IMCO) voted on the planned EU Artificial Intelligence Act (AI Act).

“The AI Act is the right step to ensure that AI is used for the benefit of our citizens and to strengthen European democratic values in the global market”, said Axel Voss MEP, who negotiated the law on behalf of the EPP Group in the LIBE Committee. “However, some people seem to have a fear-driven approach to AI, and this stifles the opportunities the new technology offers. The EPP Group wants a harmonised and flexible regulatory environment that takes all needs into account and prevents unnecessary administrative burdens for SMEs and start-ups. The EU must create a framework that boosts innovation. I want this law to also strengthen Europe as an industrial location for new technologies. So far, our industry has not had the chance it needs to keep up with the USA or China”, Voss added.

Deirdre Clune MEP, who negotiated the law on behalf of the EPP Group in the IMCO Committee, highlighted: “This is a world first and a ground-breaking piece of legislation. It could become the de facto global standard for regulating Artificial Intelligence, ensuring that such technology is developed and used in a responsible, ethical manner while also supporting innovation and economic growth. The EU will require that high-risk AI meets technical fairness and safety requirements. AI uses that pose an unacceptable risk, such as social scoring, will be prohibited.”

“From the start, the EPP Group wanted to address the challenges and potential risks of so-called foundation models upon which AI systems such as ChatGPT are based by providing clear rules and creating a framework for the sharing of necessary information along the AI value chain. I am extremely pleased that our proposal to address these models was included in the final text”, Clune said.

“As the EPP Group, we would like to preserve the possibility for law enforcement to use biometric recognition when searching for victims of crime such as missing children, preventing imminent threats such as terrorist attacks, or conducting criminal investigations”, Voss emphasised.