
Reflections on AI governance at IGF 2023

Oct 10, 2023 | Hi-network.com

The Main Session on Artificial Intelligence at the IGF 2023 brought together prominent experts and thought leaders to discuss crucial aspects of AI governance, ethics, and the global landscape of AI technology. Participants highlighted various dimensions of AI governance, including transparency, risk assessment, inclusivity, and a human-centric approach. Their insights shed light on the complexities of governing AI in a rapidly evolving technological landscape.

Arisa Ema, Associate Professor at the Institute for Future Initiatives, The University of Tokyo, emphasised the need to consider different models and structures in global AI governance discussions. She highlighted the importance of transparency in AI governance and the significance of framework interoperability, as noted in the G7 communiqué. Ema stressed the value of risk-based assessments in AI, considering different usage scenarios and their associated risks. She also underlined the need to include physically challenged individuals in forums like the IGF and promoted a human-centric approach in AI discussions. Ema called for democratic principles to ensure the involvement of all stakeholders in shaping AI governance policies.

Clara Neppel, Senior Director of IEEE European Business Operations, discussed IEEE's initiative to promote responsible AI governance through ethical standards and value-based design. She highlighted IEEE's collaboration with regulatory bodies such as the Council of Europe and the OECD to align technical standards with responsible AI governance. Neppel emphasised the importance of capacity development, both in technical expertise and in understanding the social and legal dimensions of AI implementation, and acknowledged the role of certification bodies in supporting it. She also highlighted efforts to protect vulnerable communities online and noted the effectiveness of voluntary standards in AI governance. Neppel stressed the need for cooperation, feedback mechanisms, standardised reporting, and benchmarking and testing facilities for the global governance of AI.

James Hairston, Head of International Policy and Partnerships at OpenAI, discussed the company's commitment to AI safety and its collaboration with stakeholders across the public sector, civil society, and academia. He noted the importance of standardised language and definitions in AI conversations to facilitate practical discussions, and highlighted OpenAI's focus on safeguarding technology use by vulnerable groups and ensuring fair labour practices in AI production. Hairston acknowledged the challenges posed by jurisdictional differences in AI governance and the significance of involving international institutions such as the IGF. He emphasised the importance of human involvement in AI development and testing, as well as the use of synthetic data sets to address AI bias. He recognised the role of standards bodies, research institutions, and government security testers in AI governance, and stressed the importance of public-private collaboration for the safety of digital tools.

Seth Center, Deputy Envoy for Critical and Emerging Technology at the US Department of State, compared AI's transformative potential to that of electricity and emphasised the need for prompt governance frameworks. He highlighted the US government's multistakeholder approach to developing AI principles and governance. Center discussed the importance of accountability in AI governance, through both law and voluntary frameworks, while acknowledging scepticism about voluntary governance. He stressed the value of ongoing discussions in shaping responsible AI governance and the role of the multistakeholder community in guiding developers toward societal benefits. Center also emphasised the need for safeguards, such as red teaming, cybersecurity, third-party audits, and public reporting, to ensure AI safety. He called for careful yet swift action in AI governance and recognised the complexity of jurisdictional differences. Center concluded by highlighting the importance of transparency in understanding how AI is used and the need for trustworthiness and openness in private sector contributions to responsible AI governance.

Thobekile Matimbe, Senior Manager at Paradigm Initiative, focused on the Global South's efforts in AI regulation and governance. She called for inclusive processes and accessible platforms at IGFs to ensure participation from marginalised and vulnerable groups, and highlighted the challenges the Global South faces with AI-related surveillance and discrimination, stressing the importance of protecting human rights defenders from surveillance. Matimbe advocated a victim-centred approach in AI discussions and the need to understand global asymmetries and contexts, while also noting the agency of individuals in safeguarding their rights in the digital age. She touched on children's rights, women's rights, and environmental rights in AI discussions, underlining the broader societal impact of AI beyond technical considerations.

Maria Paz Canales Lobel, one of the session's moderators, passionately advocated a human-centric approach to AI governance, emphasising the need to align AI technologies with international human rights principles. Her vision called for a risk-based approach, transparency, and inclusivity in AI design and development, and she underscored the need for multistakeholder conversations and global cooperation to ensure ethical and responsible AI governance.

The audience introduced crucial dimensions to the AI discourse. They highlighted disparities in AI labour, standards, and regulation between the Global South and the Western world; the challenge of countering AI-generated disinformation in developing countries; the imperative for developed economies to support inclusivity in the digital ecosystem; and the absence of a global consensus on AI and data regulation. They also emphasised the importance of considering children's rights and the interests of future generations in AI policy-making.

The session fostered a rich dialogue among the speakers, moderators, and an engaged audience. Together they highlighted the paramount importance of transparency, interoperability, and risk-based assessments in global AI governance discussions. Inclusivity and a human-centric approach emerged as essential guiding principles for ensuring that AI technologies align with human values and needs. While democratic principles underscored the importance of involving all stakeholders, the ongoing nature of these discussions pointed to the need for a shared philosophy and continued collaboration. The session illuminated the multifaceted nature of AI governance and the imperative of addressing it comprehensively in a rapidly evolving technological landscape.

Tags: Artificial Intelligence, cybersecurity, children's rights
