
The EU AI Act: What you need to know

May 12, 2022 | Hi-network.com

It's been almost one year since the European Commission unveiled the draft of what may well be one of the most influential legal frameworks in the world: the EU AI Act. According to the Mozilla Foundation, the framework is still a work in progress, and now is the time to actively engage in the effort to shape its direction.

Mozilla Foundation's stated mission is to work to ensure the internet remains a public resource that is open and accessible to everyone. Since 2019, Mozilla Foundation has focused a significant portion of its internet health movement-building programs on AI.

We met with Mozilla Foundation's Executive Director Mark Surman and Senior Policy Researcher Maximilian Gahntz to discuss Mozilla's focus and stance on AI, key facts about the EU AI Act and how it will work in practice, Mozilla's recommendations for improving it, and ways for everyone to be involved in the process.

The EU AI Act is on its way, and it's a big deal even if you're not based in the EU


In 2019, Mozilla identified AI as a new challenge to the health of the internet. The rationale is that AI makes decisions for us and about us, but not always with us: it can tell us what news we read, what ads we see, or whether we qualify for a loan.

The decisions AI makes have the potential to help humanity but also harm us, Mozilla notes. AI can amplify historical bias and discrimination, prioritize engagement over user well-being, and further cement the power of Big Tech and marginalize individuals.

"Trustworthy AI has been a key thing for us in the last few years because data and machine learning and what we call today AI are such a central technical and social business fabric to what the Internet is and how the Internet intersects with society and all of our lives", Surman noted.

As AI is increasingly permeating our lives, Mozilla agrees with the EU that change is necessary in the norms and rules governing AI, writes Gahntz in Mozilla's reaction to the EU AI Act.

The first thing to note about the EU AI Act is that it does not apply exclusively to EU-based organizations or citizens. Its ripple effects may be felt around the world, much as the GDPR's were.

The EU AI Act applies to users and providers of AI systems located within the EU; to providers established outside the EU who place AI systems on the market or put them into service within the EU; and to providers and users of AI systems established outside the EU when the output produced by those systems is used in the EU.

That means that organizations developing and deploying AI systems will have to either comply with the EU AI Act or pull out of the EU entirely. That said, there are some ways in which the EU AI Act is different from GDPR -- but more on that later.

Like all regulation, the EU AI Act walks a fine line between business and research needs and citizen concerns


Another key point about the EU AI Act is that it's still a work in progress, and it will take a while before it becomes effective. Its lifecycle started with the formation of a high-level expert group, which, as Surman noted, coincided with Mozilla's focus on Trustworthy AI. Mozilla has been keeping a close eye on the EU AI Act since 2019.

As Gahntz noted, since the first draft of the EU AI Act was published in April 2021, everyone involved in this process has been preparing to engage. The EU Parliament had to decide which committees and which people in those committees would work on it, and civil society organizations had the chance to read the text and develop their positions.

The point we're at right now is where the exciting part starts, as Gahntz put it. This is when the EU Parliament is developing its position, considering input it receives from designated committees as well as from third parties. Once the European Parliament has consolidated its understanding of the term Trustworthy AI, it will submit its proposed changes to the initial draft.

The EU Member States will do the same thing, and then there will be a final round of negotiations between the Parliament, the Commission, and the Member States, and that's when the EU AI Act will be passed into law. It's a long and winding road, and according to Gahntz, we're looking at a one-year horizon at a minimum, plus a transitional period between being passed into law and actually taking effect.

For the GDPR, the transitional period was two years, so the EU AI Act probably won't become effective before 2025 at the earliest.

Defining and categorizing AI systems

Before going into the specifics of the EU AI Act, we should stop and ask what exactly it applies to. There is no widely agreed-upon definition of AI, so the EU AI Act provides an Annex that defines the techniques and approaches falling within its scope.

As noted by the Montreal AI Ethics Institute, the European Commission has chosen a broad and neutral definition of AI systems, designating them as software "that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".

The techniques mentioned in the EU AI Act's Annex include both machine learning approaches and logic- and knowledge-based approaches. They are wide-ranging, to the point of drawing criticism for "proposing to regulate the use of Bayesian estimation". While any regulation in this space must walk a fine line between business and research needs and citizen concerns, such claims don't grasp the gist of the proposed legislation's philosophy: the so-called risk-based approach.

In the EU AI Act, AI systems are classified into four categories according to the perceived risk they pose: unacceptable-risk systems are banned entirely (although some exceptions apply); high-risk systems are subject to rules of traceability, transparency, and robustness; limited-risk systems require transparency on the part of the supplier; and minimal-risk systems face no requirements.

So it's not a matter of regulating certain techniques but rather of regulating the use of those techniques in certain applications, in accordance with the risk those applications pose. As far as techniques go, the proposed framework notes that adaptations over time may be necessary to keep up with the evolution of the domain.
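
To make the risk-based approach concrete, here is a minimal sketch in Python (entirely hypothetical; the Act prescribes legal obligations, not code, and all names below are illustrative) of how an organization might map application domains, rather than techniques, to the four risk tiers and the obligations attached to each:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # banned outright (narrow exceptions apply)
        HIGH = "high"                  # traceability, transparency, robustness rules
        LIMITED = "limited"            # transparency obligations (e.g. chatbots)
        MINIMAL = "minimal"            # no requirements under the Act

    # Hypothetical mapping from an application domain to its risk tier,
    # loosely following the examples discussed in this article.
    TIER_BY_DOMAIN = {
        "social_scoring": RiskTier.UNACCEPTABLE,
        "behavioral_manipulation": RiskTier.UNACCEPTABLE,
        "biometric_identification": RiskTier.HIGH,
        "critical_infrastructure": RiskTier.HIGH,
        "credit_scoring": RiskTier.HIGH,
        "chatbot": RiskTier.LIMITED,
        "spam_filter": RiskTier.MINIMAL,
    }

    def obligations(domain: str) -> str:
        """Return a one-line summary of the obligations for a given domain."""
        tier = TIER_BY_DOMAIN.get(domain, RiskTier.MINIMAL)
        summaries = {
            RiskTier.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
            RiskTier.HIGH: "allowed, subject to traceability, transparency and robustness requirements",
            RiskTier.LIMITED: "allowed, if users are told they are interacting with an AI system",
            RiskTier.MINIMAL: "no obligations under the EU AI Act",
        }
        return summaries[tier]

    print(obligations("credit_scoring"))
    # -> allowed, subject to traceability, transparency and robustness requirements

Note that the lookup is keyed on the application domain, not the underlying technique: the same Bayesian estimator would land in different tiers depending on whether it filters spam or scores loan applications.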

Excluded from the scope of the EU AI Act are AI systems developed or used exclusively for military purposes. Public authorities of third countries and international organisations using AI systems in the framework of international law enforcement and judicial cooperation agreements with the EU or with one or more of its members are also exempt from the EU AI Act.

In the EU AI Act, AI systems are classified into four categories according to the perceived risk they pose


AI applications that manipulate human behavior to deprive users of their free will and systems that allow social scoring by the EU Member States are classified as posing an unacceptable risk and are outright banned.

High-risk AI systems include biometric identification, management of critical infrastructure (water, energy, etc.), AI systems used in educational institutions or for human resources management, AI applications for access to essential services (bank credit, public services, social benefits, justice, etc.), use in law enforcement, and migration management and border control.

However, several exceptions apply to biometric identification, such as the search for a missing child or the location of suspects in cases of terrorism, human trafficking, or child pornography. The EU AI Act dictates that high-risk AI systems should be recorded in a database maintained by the European Commission.

Limited-risk systems mostly include various kinds of bots. For those, the key requirement is transparency. For example, if users are interacting with a chatbot, they must be informed of this fact so they can make an informed decision about whether or not to proceed.
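
As an illustration of that transparency requirement, here is a minimal sketch in Python (hypothetical, not an official compliance mechanism) of a chat loop that discloses the bot's automated nature before the first exchange:

    DISCLOSURE = (
        "You are chatting with an automated assistant, not a human. "
        "Type 'quit' to leave at any time."
    )

    def run_chat_session(reply_fn):
        """Run a console chat loop that discloses the bot's nature up front."""
        # The disclosure is printed before the first exchange, so the user
        # can make an informed decision about whether to proceed.
        print(DISCLOSURE)
        while True:
            user_input = input("> ").strip()
            if user_input.lower() == "quit":
                break
            print(reply_fn(user_input))

    # Example usage with a trivial echo "model":
    # run_chat_session(lambda text: f"You said: {text}")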

Finally, according to the Commission, AI systems that do not pose a risk to citizens' rights, such as spam filters or games, are exempt from the regulatory obligation.

The EU AI Act as a way to get to Trustworthy AI

The main idea behind this risk-based approach to AI regulation is somewhat reminiscent of the EU's approach to labeling household electrical devices based on their energy efficiency. Devices are categorized based on their energy efficiency characteristics and assigned labels ranging from A (best) to G (worst).

But there are also some important differences. Most prominently, while energy labels are meant to be seen and taken into account by consumers, the risk assessment of AI systems is not designed with the same goal in mind. However, if Mozilla has its way, that may change by the time the EU AI Act becomes effective.

Drawing analogies is always interesting, but what's really important here is that the risk-based approach is trying to minimize the impact of the regulation on those who develop and deploy AI systems that are of little to no concern, said Gahntz.

"The idea is to focus attention on the bits where it gets tricky, where risk is introduced to people's safety, rights and privacy, and so on. That's also the part that we want to focus on because regulation is not an end in and of itself.

We want to accomplish with our recommendations and our advocacy work around this. The parts of the regulation that focus on mitigating or preventing risks from materializing are strengthened in the final EU AI Act.

There are a lot of analogies to be drawn to other risk-based approaches that we see in European law and regulation elsewhere. But it's also important to look at the risks that are specific to each use case. That basically means answering the question of how we can make sure that AI is trustworthy", said Gahntz.

Gahntz and Surman emphasized that Mozilla's recommendations have been developed with care and with the due diligence needed to make sure that no one is harmed and that AI ends up being a net benefit for all.

Part 2 of this article will elaborate on Mozilla's recommendations for improving the EU AI Act, the underlying philosophy of Trustworthy AI, the AI Theory of Change, and ways to get involved in the conversation.
