Human Rights Watch and 149 other civil society organizations are urging European Union (EU) institutions to strengthen the protection of people’s fundamental rights in the bloc’s upcoming Artificial Intelligence Act (AIA).
In May 2023, the committees of the European Parliament passed a series of amendments to the AIA – including a number of bans on “intrusive and discriminatory” systems as well as measures to improve the accountability and transparency of AI deployers – which were subsequently adopted by the whole of Parliament in a plenary vote in June.
However, the amendments represent only a “draft negotiating mandate” for the European Parliament, with behind-closed-doors trilogue negotiations between the Council, Parliament and Commission set to begin at the end of July 2023 – all of which have adopted different positions on a range of subjects.
The Council’s position, for example, is to implement greater secrecy around police deployments of AI, while simultaneously attempting to expand exemptions that would allow the technology to be deployed more easily in the contexts of law enforcement and migration.
Parliament, on the other hand, opted for a total ban on predictive policing systems, and pushed to expand the scope of the AIA’s publicly viewable database of high-risk systems to include those deployed by government agencies.
Ahead of the secret negotiations, Human Rights Watch, Amnesty International, Access Now, European Digital Rights (EDRi), Fair Trials and dozens of other civil society groups urged the EU to prohibit a number of harmful, discriminatory or abusive AI applications; to mandate fundamental rights impact assessments throughout the lifecycle of an AI system; and to provide effective remedies for people adversely affected by AI, among a number of other safeguards.
“In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future crime, facilitate violations of the right to seek asylum, predict our emotions and categorize us, and make critical decisions that determine our access to public services, welfare, education and employment,” they wrote in a statement.
“Without strong regulation, businesses and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralized power of big tech companies, irresponsible public decision-making, and environmental damage.
“We call on EU institutions to ensure that the development and use of AI is accountable and transparent to the public, and that people are empowered to challenge harm.”
National Security and Military Exemptions
For the signatories of the declaration, a major point of contention around the AIA as it stands is that national security and military uses of AI are fully exempted from its provisions, while law enforcement uses are partially exempted.
The groups therefore call on EU institutions to set clear limits on the use of AI by national security, law enforcement and migration authorities, particularly when it comes to “prejudicial and discriminatory” surveillance practices.
They say these limits must include a full ban on real-time and retrospective “remote biometric identification” technologies in publicly accessible spaces, by all actors and without exception; a ban on all forms of predictive policing; the removal of all loopholes and exemptions for law enforcement and migration control; and a complete ban on emotion recognition systems.
They added that the EU should also reject the Council’s attempt to include a blanket exemption for systems developed or deployed for national security purposes; and prohibit the use of AI in migration contexts to perform individualized risk assessments, or to “prohibit, restrict and prevent” migration.
The groups also call on the EU to empower members of the public to understand and challenge the use of AI systems, noting that it is “crucial” that the AIA establishes an effective framework of accountability, transparency, accessibility and redress.
This should include requiring all AI deployers to conduct and publish fundamental rights impact assessments before each deployment of a high-risk AI system; to register their use of AI in the publicly viewable EU database before deployment; and to ensure that people are informed and have the right to seek information when they are affected by AI systems.
All of this should be underpinned by meaningful engagement with civil society and those affected by AI, who should also be entitled to effective remedies for violations of their rights.
Big Tech Lobbying
Finally, the undersigned groups call on the EU to push back on big tech lobbying, noting that negotiators “must not give in to the lobbying efforts of big tech companies seeking to circumvent regulation for financial gain.”
In 2021, a report by Corporate Europe Observatory and LobbyControl revealed that big tech companies now spend more than €97m a year lobbying the EU, making tech Europe’s biggest lobbying sector, ahead of pharmaceuticals, fossil fuels and finance.
The report found that despite a wide variety of active players, tech sector lobbying efforts are dominated by a handful of companies, with just 10 companies responsible for almost a third of total tech lobby spending. This includes, in ascending order, Vodafone, Qualcomm, Intel, IBM, Amazon, Huawei, Apple, Microsoft, Facebook and Google, who have collectively spent over €32 million making their voices heard in the EU.
Given the influence of private tech companies on EU processes, the groups said the EU should therefore “remove the extra layer added to the risk classification process in Article 6 [in order to] restore the clear and objective risk classification process described in the initial position of the European Commission.”
Speaking ahead of Parliament’s plenary vote in June, Daniel Leufer, senior policy analyst at Access Now, told Computer Weekly that Article 6 had been amended by the Council to exempt from the high-risk list (contained in Annex Three of the AIA) systems that are “purely accessory”, which would essentially allow AI providers to opt out of the regulation based on a self-assessment of whether their applications are high-risk or not.
“I don’t know who is selling an AI system that does any of the Annex Three things, but where it’s purely accessory to the decision-making or outcome,” he said. “The big danger is that if you leave it up to the provider to decide whether their system is ‘purely accessory’ or not, they have a huge incentive to say it is and just not follow the regulations.”
Leufer added that Parliament’s text now includes “something much worse… which is to allow providers to do a self-assessment to see if they actually pose a significant risk”.