Civil society groups call on EU to put human rights at center of AI law

July 13, 2023

Human Rights Watch and 149 other civil society organizations are urging European Union (EU) institutions to strengthen the protection of people’s fundamental rights in the upcoming Artificial Intelligence Act (AIA).

In May 2023, the European Parliament’s committees passed a series of amendments to the AIA – including a number of bans on “intrusive and discriminatory” systems, as well as measures to improve the accountability and transparency of AI deployers – which were subsequently adopted by the full Parliament in a plenary vote in June.

However, the amendments only represent a “draft negotiating mandate” for the European Parliament, with behind-closed-doors trilogue negotiations between the European Council, Parliament and Commission set to begin at the end of July 2023 – each of which has adopted a different position on a range of issues.

The Council’s position, for example, is to implement greater secrecy around police deployments of AI, while simultaneously attempting to expand the exemptions that would allow the technology to be more easily deployed in the contexts of law enforcement and migration.

Parliament, on the other hand, has opted for a total ban on predictive policing systems, and supports expanding the scope of the AIA’s publicly viewable database of high-risk systems to also include those deployed by government agencies.

Ahead of the closed-door negotiations, Human Rights Watch, Amnesty International, Access Now, European Digital Rights (EDRi), Fair Trials and dozens of other civil society groups urged the EU to ban a number of harmful, discriminatory or abusive AI applications; to mandate fundamental rights impact assessments throughout the lifecycle of an AI system; and to provide effective remedies for those adversely affected by AI, among a number of other safeguards.

“In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to seek asylum, predict our emotions and categorize us, and make critical decisions that determine our access to public services, welfare, education and employment,” they wrote in a statement.

“Without strong regulation, businesses and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralized power of big tech companies, irresponsible public decision-making, and environmental damage.

“We call on EU institutions to ensure that the development and use of AI is accountable, transparent to the public and that people are empowered to address harm.”

National Security and Military Exemptions

For the signatories of the declaration, a major point of contention around the AIA as it stands is that national security and military uses of AI are fully exempted from its provisions, while law enforcement uses are partially exempted.

The groups are therefore calling on EU institutions to set clear limits on the use of AI by national security, law enforcement and migration authorities, particularly when it comes to “harmful and discriminatory” surveillance practices.

They say these limits must include a full ban on real-time and retrospective “remote biometric identification” technologies in publicly accessible spaces, by all actors and without exception; a ban on all forms of predictive policing; the closure of all loopholes and exemptions for law enforcement and migration control; and a complete ban on emotion recognition systems.

They added that the EU should also reject the Council’s attempt to include a blanket exemption for systems developed or deployed for national security purposes; and prohibit the use of AI in migration contexts to perform individualized risk assessments, or to “prohibit, restrict and prevent” migration.

The groups are also calling on the EU to empower members of the public to understand and challenge the use of AI systems, noting that it is “crucial” that the AIA develops an effective framework for accountability, transparency, accessibility and redress.

This should include requiring all deployers of AI to conduct and publish fundamental rights impact assessments before each deployment of a high-risk AI system; to register their use of AI in the publicly available EU database before deployment; and to ensure that people are informed, and have the right to seek information, when they are affected by AI systems.

All of this should be underpinned by meaningful engagement with civil society and those affected by AI, who should also be entitled to effective remedies for violations of their rights.

Big Tech Lobbying

Finally, the undersigned groups call on the EU to push back on big tech lobbying, noting that negotiators “must not give in to the lobbying efforts of big tech companies seeking to circumvent regulation for financial gain.”

In 2021, a report by Corporate Europe Observatory and LobbyControl revealed that big tech companies collectively spend more than €97m a year lobbying the EU, making tech the biggest lobbying sector in Europe, ahead of pharmaceuticals, fossil fuels and finance.

The report found that despite a wide variety of active players, tech sector lobbying efforts are dominated by a handful of companies, with just 10 companies responsible for almost a third of total tech lobby spending. This includes, in ascending order, Vodafone, Qualcomm, Intel, IBM, Amazon, Huawei, Apple, Microsoft, Facebook and Google, who have collectively spent over €32 million making their voices heard in the EU.

Given the influence of private tech companies over EU processes, the groups said the EU should therefore “remove the extra layer added to the risk classification process in Article 6 [in order to] restore the clear and objective risk classification process described in the initial position of the European Commission.”

Speaking ahead of Parliament’s plenary vote in June, Daniel Leufer, senior policy analyst at Access Now, told Computer Weekly that Article 6 had been amended by the European Council to exempt from the high-risk list (contained in Annex Three of the AIA) any systems that are “purely ancillary”, which would essentially allow AI providers to opt out of the regulation based on a self-assessment of whether their applications are high-risk or not.

“I don’t know who is selling an AI system that does any of the Annex Three things where it’s purely ancillary to decision-making or outcomes,” he said at the time. “The big danger is that if you leave it up to a provider to decide whether their system is ‘purely ancillary’ or not, they have a huge incentive to say it is and simply refuse to follow the regulations.”

Leufer added that Parliament’s text now includes “something much worse… which is to allow providers to do a self-assessment to see if they actually pose a significant risk”.
