Meta to restrict high-risk AI development

Feb 05, 2025 | Hi-network.com

Meta has introduced a new policy framework outlining when it may restrict the release of its AI systems due to security concerns. The Frontier AI Framework categorises AI models into 'high-risk' and 'critical-risk' groups, with the latter referring to those capable of aiding catastrophic cyber or biological attacks. If an AI system is classified as a critical risk, Meta will suspend its development until safety measures can be implemented.

The company's evaluation process does not rely solely on empirical testing but also considers input from internal and external researchers. This approach reflects Meta's belief that existing evaluation methods are not yet robust enough to provide definitive risk assessments. Despite its historically open approach to AI development, the company acknowledges that some models could pose unacceptable dangers if released.

By outlining this framework, Meta aims to demonstrate its commitment to responsible AI development while distinguishing its approach from other firms with fewer safeguards. The policy comes amid growing scrutiny of AI's potential misuse, especially as open-source models gain wider adoption.

Copyright © 2014-2024 Hi-Network.com | HAILIAN TECHNOLOGY CO., LIMITED | All Rights Reserved.
Our company's operations and information are independent of the manufacturers' positions, and we are not affiliated with any listed trademark holders.