
Artificial Intelligence Regulation Coming to European Union

The EU is spearheading an ambitious effort to regulate artificial intelligence technology. The European Commission unveiled a comprehensive set of proposals in mid-April. Among other goals, the framework seeks to limit police use of facial recognition. It would also prevent artificial intelligence from being deployed in ways that cause psychological harm or exploitation.

It marks one of the largest efforts by any Western government to set standards on the development and use of artificial intelligence. Companies that violate the rules governing high-risk uses of artificial intelligence could face steep fines. For the most severe violations, fines could total up to 6% of a company's annual worldwide revenue.

“Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being. Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the center (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights,” the European Commission stated in its draft regulations.

The high-risk applications of artificial intelligence encompassed by the law are far-ranging. They include uses that exploit children's vulnerabilities or employ subliminal techniques — for instance, a toy that manipulates a child into carrying out dangerous activities. They also include prohibitions on certain biometric and mass surveillance techniques. For example, the rules would ban using live facial recognition technology to pick out a person from a crowd. An exception would allow its use for a specific law enforcement purpose, such as tracking a terrorist.

The EU's recent regulatory proposals to address the harmful uses of artificial intelligence have parallels to the GDPR data privacy legislation it rolled out a few years ago. The GDPR, or General Data Protection Regulation, was the first comprehensive data privacy framework set forth by a major government. Its objective is to enforce digital content moderation, address privacy concerns, and subject big tech to restrictions. It created a template that other state and national governments have used to craft their own privacy rules, some of which have since been adopted.

As with the GDPR for data privacy, the EU aspires to be the first major government body to implement comprehensive artificial intelligence legislation. It hopes to provide model laws that others outside the EU can draw on in developing their own frameworks for regulating risky uses of artificial intelligence.

“The European Commission once again has stepped out in a bold fashion to address emerging technology, just as it did with data privacy through the GDPR,” says Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley.

The EU has increasingly been taking on the role of pushing for tougher oversight of big tech. In recent years, the EU has sought to hold technology companies responsible for their data practices and anti-competitive behaviors. The newly proposed artificial intelligence regulations come in the midst of these other sweeping regulations and regulatory proposals.

Artificial Intelligence Supercomputer
Image credit: Piqsels

Margrethe Vestager, Executive Vice President of the European Commission for a Europe Fit for the Digital Age, stated: “With these landmark rules, the EU is spearheading the development of new global norms to make sure artificial intelligence can be trusted. By setting the standards, we can pave the way for ethical technology worldwide and ensure that the EU remains competitive along the way.” Ms. Vestager previously served as European Commissioner for Competition.

It could take years for the European Union to pass the new artificial intelligence proposals. The European Union has three legislative bodies — the European Commission, the European Council, and the European Parliament. In addition to having the backing of the European Commission, the proposed laws would require approval by the European Council, a body composed of the bloc's 27 national governments. The European Parliament, a democratically elected body, would also have to pass the laws.

The proposed EU regulations on artificial intelligence have attracted a number of criticisms. Some argue that the burdensome rules will give other nations, such as China, an advantage, since their developers wouldn't face such restrictions on artificial intelligence development.

Benjamin Mueller, a senior policy analyst at the Center for Data Innovation, commented that, “It’s going to make it prohibitively expensive or even technologically infeasible to build artificial intelligence in Europe. The U.S. and China are going to look on with amusement as the EU kneecaps its own startups.”

Others argue that the regulations are too vague, making them susceptible to loopholes. Sarah Chander, a senior policy adviser at European Digital Rights, a network of nongovernmental organizations, states: “The list of exemptions is incredibly wide. Such a list kind of defeats the purpose for claiming something is a ban.”

Another concern is that the regulations define artificial intelligence very broadly, and that definition is evolving rapidly. As a result, the regulations risk becoming obsolete quickly.

Ryan Carpenter serves as Attorney and Managing Director of Carpenter Wellington. Ryan advises clients across a broad set of corporate and commercial matters.
