EU set to tilt AI balance in favour of citizen rights

Source: ComputerWeekly.com

In a leaked draft of proposed regulations on artificial intelligence (AI) in Europe, the European Union (EU) has set out plans to establish a central database of high-risk AI systems.

The draft, posted on Google Drive, also lists several uses of AI that would be prohibited in the EU. The plans, which aim to protect the rights of EU citizens, have far-reaching implications, affecting any system that makes decisions about individuals.

The draft regulation bans AI systems that manipulate human behaviour, as well as those used for indiscriminate surveillance and social scoring. It stipulates that infringements would be subject to administrative fines of up to €20m or 4% of the offender’s total worldwide annual turnover for the preceding financial year.

The rules set out in the document cover the application of AI. According to the draft, providers of AI systems will need to be validated and will be required to supply information on the data models, algorithms and test datasets used to verify their systems.

The document lists a number of AI implementations deemed high risk, including systems used to prioritise the dispatch of emergency first-response services and to assign people to educational and vocational training institutions, a number of systems for crime detection, and systems used by judges. Other areas identified as high risk include recruitment, assessing the creditworthiness of individuals and individual risk assessments.

The draft document stipulates that rules for AI available in the EU market or otherwise affecting EU citizens should “put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights”.

To minimise bias, the EU document states that training and testing datasets should be sufficiently relevant, representative, free of errors and complete in view of the intended purpose and should have the appropriate statistical properties.

The regulations would require suppliers of AI systems to give EU regulators information about the conceptual design of their systems and the algorithms they use, including design choices and the assumptions underpinning those algorithms.

The EU also appears to want suppliers of high-risk AI systems to provide detailed information about the functioning of the validated system, including a description of its capabilities and limitations, anticipated inputs and outputs, and its expected accuracy and margin of error.

The draft also calls on providers of high-risk AI systems to disclose the limitations of their systems, including known biases, foreseeable unintended consequences, and sources of risk to safety and fundamental rights.

One commentator on Twitter wrote: “The definition of AI seems to be crazy broad, covering software based only on conditional logic like conversational bots or contract generation wizards.”
