The EU AI Act could help get to Trustworthy AI, according to the Mozilla Foundation

May 20, 2022 Hi-network.com

One year after the first draft was introduced, details about the EU AI Act remain few and far between. Although this regulatory framework is still not finalized -- or rather, precisely because of that -- now is the time to learn more about it.

Previously, we covered some key facts about the EU AI Act: who it applies to, when it will be enacted, and what it's about. We embarked on this exploration alongside Mozilla Foundation's Executive Director Mark Surman and Senior Policy Researcher Maximilian Gahntz.

As Surman shared, Mozilla's focus on AI came about around the same time the EU AI Act started its lifecycle -- circa 2019. Mozilla has worked with people around the world to map out a theory of how to make AI more trustworthy, focusing on two long-term outcomes: agency and accountability.

Today we pick up the conversation with Surman and Gahntz. We discuss Mozilla's recommendations for improving the EU AI Act, how people can get involved, and Mozilla's AI Theory of Change.

The EU AI Act is a work in progress

The EU AI Act is coming: it's expected to take effect around 2025, and its impact on AI could be similar to the impact GDPR had on data privacy.

The EU AI Act applies to providers and users of AI systems located within the EU; to providers established outside the EU who place AI systems on the market or put them into service within the EU; and to providers and users of AI systems established outside the EU when the results generated by those systems are used in the EU.
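
To make that scope concrete, here is a minimal sketch of the three applicability conditions expressed as a simple check. This is purely illustrative: the field and function names are invented for this example, and the legal text is considerably more nuanced.

    # Purely illustrative: a simplified encoding of the draft Act's
    # territorial scope. Field names are invented for this sketch;
    # the legal text is considerably more nuanced.
    from dataclasses import dataclass

    @dataclass
    class AISystemContext:
        operator_in_eu: bool       # provider or user located within the EU
        placed_on_eu_market: bool  # provider outside the EU placing the system
                                   # on the EU market or into service in the EU
        output_used_in_eu: bool    # provider/user outside the EU, but the
                                   # system's results are used in the EU

    def act_applies(ctx: AISystemContext) -> bool:
        # Any one of the three conditions brings a system into scope.
        return ctx.operator_in_eu or ctx.placed_on_eu_market or ctx.output_used_in_eu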

Its approach is based on a four-level categorization of AI systems according to the perceived risk they pose: unacceptable-risk systems are banned entirely (although some exceptions apply); high-risk systems are subject to rules on traceability, transparency, and robustness; low-risk systems require transparency on the part of the supplier; and minimal-risk systems face no requirements.
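
The four tiers can be summarized schematically, as in the sketch below. This is an illustrative simplification of the categorization just described, not a rendering of the legal text; the names and wording are this example's own.

    # Illustrative only: the four risk tiers and the obligations the
    # draft attaches to each, as summarized above.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned entirely (with some exceptions)"
        HIGH = "rules on traceability, transparency, and robustness"
        LOW = "transparency obligations for the supplier"
        MINIMAL = "no requirements"

    for tier in RiskTier:
        print(f"{tier.name}: {tier.value}")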

At this point, the EU Parliament is developing its position, considering input it receives from designated committees as well as third parties. Once the Parliament has consolidated its understanding of what Trustworthy AI means, it will submit its proposed changes to the initial draft. A final round of negotiations between the Parliament, the Commission, and the member states will follow, and that's when the EU AI Act will be passed into law.

To influence the direction of the EU AI Act, now is the time to act. As stated in Mozilla's 2020 paper Creating Trustworthy AI: "AI has immense potential to improve our quality of life. But integrating AI into the platforms and products we use every day can equally compromise our security, safety, and privacy. [...] Unless critical steps are taken to make these systems more trustworthy, AI runs the risk of deepening existing inequalities."

Mozilla believes that effective and forward-looking regulation is needed if we want AI to be more trustworthy. This is why it welcomed the European Commission's ambitions in its White Paper on Artificial Intelligence two years ago. Mozilla's position is that the EU AI Act is a step in the right direction, but it also leaves room for improvement.

The improvements suggested by Mozilla have been laid out in a blog post. They are focused on three points: 

  1. Ensuring accountability
  2. Creating systemic transparency
  3. Giving individuals and communities a stronger voice

The three focal points

Accountability is really about figuring out who should be responsible for what along the AI supply chain, as Gahntz explained. Risks should be addressed where they come up, whether that's in the technical design stage or in the deployment stage, he added.

In its current form, the EU AI Act would place most obligations on those developing and marketing high-risk AI systems. While there are good reasons for that, Gahntz believes that the risks associated with an AI system also depend on its exact purpose and the context in which it is used. Who deploys the system? What is the organizational setting of deployment? Who could be affected by its use? These are all relevant questions.

To contextualize this, let's consider the case of a large language model like GPT-3. It could be used to summarize a short story (low risk) or to assess student essays (high risk). The potential consequences here differ vastly, and deployers should be held accountable for the way in which they use AI systems, but without introducing obligations they cannot effectively comply with, Mozilla argues.
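
The sketch below illustrates the point. The llm_complete function is a hypothetical stand-in for any hosted large-language-model completion service (a GPT-3-style API), not a real library call; what matters is that the same model invocation carries very different stakes depending on the deployment context.

    # Hypothetical sketch: `llm_complete` stands in for any hosted
    # large-language-model completion endpoint; it is not a real API.
    def llm_complete(prompt: str) -> str:
        """Placeholder for a call to a GPT-3-style completion service."""
        return "<model output>"

    story_text = "Once upon a time..."        # placeholder inputs
    essay_text = "The causes of the war were..."

    # Same model, same kind of call -- but very different consequences
    # for the people affected:
    summary = llm_complete("Summarize this short story: " + story_text)           # low risk
    grade = llm_complete("Grade this student essay from 1 to 10: " + essay_text)  # high risk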

Systemic transparency goes beyond user-facing transparency. While it's good for users to know when they're interacting with an AI system, what we also need at a higher level is for journalists, researchers, and regulators to be able to scrutinize systems and how these are affecting people and communities on the ground, Gahntz said.

The draft EU AI Act includes a potentially powerful mechanism for ensuring systemic transparency: a public database for high-risk AI systems, created and maintained by the Commission, where developers register and provide information about these systems before they can be deployed.

Mozilla's recommendation here is threefold. First, the mechanism should be extended to apply to all deployers of high-risk AI systems. Second, it should report additional information, such as descriptions of an AI system's design, general logic, and performance. Third, it should include information about serious incidents and malfunctions, which developers would already have to report to national regulators under the AI Act.
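
If those three recommendations were adopted, a registry entry might carry fields along the lines sketched below. This is a hypothetical illustration; the field names are invented for this example and come neither from the draft Act nor from Mozilla's proposal.

    # Hypothetical sketch of a registry entry under Mozilla's proposals;
    # all field names are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class HighRiskSystemEntry:
        provider: str             # current draft: developers register systems
        deployer: str             # recommendation 1: deployers register too
        design_description: str   # recommendation 2: design, general logic,
        performance_summary: str  # and performance information
        incidents: list[str] = field(default_factory=list)  # recommendation 3:
                                  # serious incidents and malfunctions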

Mozilla's engagement with the EU AI Act is in line with its AI Theory of Change, which includes shifting industry norms, building new tech and products, generating demand, and creating regulations and incentives. (Image: Mozilla Foundation)

Giving individuals and communities a stronger voice is something that's missing from the original draft of the EU AI Act, Gahntz said. As it stands now, only EU regulators would be permitted to hold companies accountable for the impacts of AI-enabled products and services.

However, Mozilla believes it is also critical for individuals to be able to hold companies to account. Furthermore, other organizations -- like consumer protection organizations or labor unions -- need to have the ability to bring complaints on behalf of individuals or the public interest.

Therefore, Mozilla supports a proposal to add a bottom-up complaint mechanism that lets affected individuals and groups of individuals file formal complaints with a national supervisory authority, acting as a single point of contact in each EU member state.

Mozilla also notes that there are several additional ways in which the AI Act can be strengthened before it is adopted: for instance, by future-proofing the mechanism for designating what constitutes high-risk AI, and by ensuring that a breadth of perspectives is considered in operationalizing the requirements that high-risk AI systems will have to meet.

Getting involved in the AI Theory of Change

You may agree with Mozilla's recommendations and want to lend your support. You may want to add to them, or you may want to propose your own set of recommendations. However, as Mozilla's people noted, the process of getting involved is a bit like leading your own campaign -- there's no such thing as "this is the form you need to fill in".

"The way to get involved is really the normal democratic process. You have elected officials looking at these questions, you also have people inside the public service asking these questions, and then you have an industry in the public having a debate about these questions.

I think there's a particular mechanism; certainly, people like us are going to weigh in with specific recommendations. And by weighing in with us, you help amplify those. 

But I think that the open democratic conversation -- being in public, making allies and connecting to people whose ideas you agree with, wrestling with and surfacing the hard topics.That's what's going to make a difference, and it's certainly where we're focused", Surman said.

At this point, what it's really about is swaying public opinion and the opinion of people in a position to make decisions, according to Gahntz. That means parliamentarians, EU member state officials, and officials within the European Commission, he added.

At a more grassroots level, what people can do is the same as always, Gahntz opined. You can write to your local MEP; you can be active on social media and try to amplify voices you agree with; you can sign petitions, and so on. Mozilla has a long history of being involved in shaping public policy.

"The questions of agency and accountability are our focus, and we think that the EU AI Act is a really good backdrop where they can have global ripple effects to push things in the right direction on these topics", Surman said.

Agency and accountability are desired long-term outcomes in Mozilla's AI Theory of Change, developed in 2019 by spending 12 months talking with experts, reading, and piloting AI-themed campaigns and projects. This exploration honed Mozilla's thinking on trustworthy AI by reinforcing several challenge areas, including monopolies and centralization, data governance and privacy, bias and discrimination, and transparency and accountability.

Mozilla's AI Theory of Change identifies a number of short-term outcomes (1-3 years), grouped into four medium-term outcomes (3-5 years): shifting industry norms, building new tech and products, generating demand, and creating regulations and incentives. The envisioned long-term impact would be "a world of AI [where] consumer technology enriches the lives of human beings".

"Regulation is an enabler, but without people building different technology in a different way and people wanting to use that technology, the law is a piece of paper", as Surman put it.

If we look at the precedent of GDPR, sometimes we've gotten really interesting new companies and new software products that keep privacy in mind, and sometimes we've just gotten annoying popup reminders about cookies and data collection, he went on to add.

"Making sure that a law like this drives real change and real value for people is a tricky matter. This why right now, the focus should be on what are the practical things that the industry and developers and deployers can do to make AI more trustworthy. We need to make sure that the regulations actually reflect and incentivize that kind of action and not just sit up in the cloud", Surman concluded.
