
Industry leaders partner to promote responsible AI development

Jul 26, 2023 | Hi-network.com

Anthropic, Google, Microsoft, and OpenAI, four of the most influential AI companies, have joined forces to establish the Frontier Model Forum, an initiative that will pursue cutting-edge AI research and set guidelines for responsible AI development across the industry. The forum's mission is to ensure the safe, human-controlled advancement of AI technology, with membership reserved for companies actively developing large-scale machine-learning models that surpass existing capabilities. Its focus will therefore be on addressing the risks posed by highly capable AI rather than on current regulatory concerns such as copyright and privacy. The group's primary goals include:

  1. Advancing AI safety research: The forum aims to conduct extensive research to identify potential risks associated with powerful AI systems. By understanding these risks, it can work towards minimising potential harms and ensuring the safe deployment of AI technology.
  2. Setting best practices for AI development: The group seeks to define and promote industry-wide best practices for developing large-scale machine-learning models, fostering innovation while adhering to ethical and responsible development standards.
  3. Collaborating with governments and stakeholders: The Frontier Model Forum recognises the importance of involving policymakers, governments, and other stakeholders in the AI development process. The forum will firmly support and actively participate in government and multilateral initiatives, including the G7 Hiroshima process, the OECD's work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council.
  4. Sharing research findings and establishing ethical guidelines: The forum intends to share its research findings with the broader AI industry and society as a whole, contributing to ethical guidelines that promote the responsible use of AI.

Why does it matter? By formulating their own guidelines, big tech aims to shape the development and deployment of AI through industry decisions rather than strict regulatory measures. This approach aligns with historical practice in the US, where industry self-regulation has been favoured over stringent government regulation when dealing with new technologies and big tech.

Nevertheless, sceptics contend that these companies might be leveraging this initiative to evade more rigorous oversight. They instead advocate for stricter regulations akin to those proposed by the EU, arguing that such measures are necessary to ensure close scrutiny of AI development.

Hot tags: Artificial Intelligence, development
