EU lays out plans to regulate AI development


The European Union (EU) has published a proposal for new artificial intelligence (AI) regulations to address trust in AI and protect the privacy of EU citizens.

A draft of the document was leaked a week ago, but announcing the official release of the proposed regulations, the EU said it wanted to achieve “proportionate and flexible rules” to address the specific risks posed by AI systems and set the highest standard worldwide.

The EU has identified two types of system that require regulation – those deemed to pose an unacceptable risk and those it considers to be high risk.

Dan Whitehead, senior associate at Hogan Lovells, said the regulations could have a major impact on companies that develop and make use of AI in their products and services.

“The European Commission’s proposed regulation on artificial intelligence is a bold and ground-breaking attempt at regulating the future of our digital and physical worlds,” he said. “This framework promises to have the same profound impact on the use of AI as the GDPR [General Data Protection Regulation] has had on personal data, with both the developers of high-risk AI technologies, along with the organisations that use them, facing a range of new obligations.”

AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned. These include AI systems or applications that manipulate human behaviour to circumvent users’ free will and systems that enable “social scoring” by governments.

The EU proposal lists eight applications of AI deemed to be high risk. Broadly speaking, these cover critical infrastructure, systems for managing crime and the judicial process, and any system whose decision-making may have a negative impact on an EU citizen’s life, health or livelihood. They include AI used to deny access to education or training, worker management, credit scoring, the prioritisation of access to private and public services, and border control.

Thierry Breton, EU commissioner for internal market, said: “AI is a means, not an end. It has been around for decades, but has reached new capacities fuelled by computing power. This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks.

“Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

The EU’s proposed AI regulations put remote biometric identification systems in the high-risk category, which means their deployment is subject to strict requirements. Under the proposals, the use of AI-based biometric identification systems in publicly accessible spaces for law enforcement purposes is prohibited in principle. The EU said any exceptions will be subject to authorisation by a judicial or other independent body, and to appropriate limits in time, geographic reach and the databases searched.

Margrethe Vestager, executive vice-president for a Europe Fit for the Digital Age, said: “On AI, trust is a must, not a nice-to-have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed – when the safety and fundamental rights of EU citizens are at stake.”

Organisations that break the EU’s rules on AI will be subject to a hefty fine. Herbert Swaniker, tech lawyer at Clifford Chance, said: “The Commission’s draft AI rules will mean fines of up to 6% of global turnover for some breaches. This elevates the EU’s fining power to a level much closer to the realm of EU competition-style sanctions for the most serious breaches.”

Swaniker said the EU AI rules would solidify many of the ethical AI principles that organisations have been producing in anticipation of the legislation. “High-risk AI systems are to be trained with ‘high quality’ data sets, and processes will need to be in place to ensure that these data sets do not incorporate any intentional or unintentional biases,” he said.

“To comply, tech and legal teams will need to invest in and uplift their existing data governance and management practices. That will not be a simple feat, and will require action similar to what we saw with the GDPR efforts.”

Along with the draft regulations, the EU has also updated its 2018 Coordinated Plan, which aims to align AI initiatives around the European Strategy on AI and the European Green Deal, while taking into account new challenges brought by the coronavirus pandemic. The EU said the updated plan will use funding allocated through the Digital Europe and Horizon Europe programmes, as well as the Recovery and Resilience Facility, which foresees a 20% digital expenditure target, and Cohesion Policy programmes.

Among the goals set out in the plan are fostering AI excellence and creating enabling conditions for AI’s development and uptake through the exchange of policy insights, data sharing and investment in critical computing capacities.

The Coordinated Plan also aims to encourage the development of AI as a force for good in society by putting the EU at the forefront of the development and deployment of trustworthy AI, and by nurturing talent and skills through support for traineeships, doctoral networks and postdoctoral fellowships in digital areas.

The final focus area of the plan is strategic leadership in the use of AI for sustainable production and healthcare.

Source: ComputerWeekly.com
