Meta has introduced a new policy framework outlining when it may restrict the release of its AI systems on safety grounds. The Frontier AI Framework categorises AI models as 'high-risk' or 'critical-risk', with the latter referring to systems capable of aiding catastrophic cyber or biological attacks. If an AI system is classified as critical-risk, Meta will suspend its development until adequate safeguards can be put in place.
The company's evaluation process does not rely solely on empirical testing but also considers input from internal and external researchers. This approach reflects Meta's belief that existing evaluation methods are not yet robust enough to provide definitive risk assessments. Despite its historically open approach to AI development, the company acknowledges that some models could pose unacceptable dangers if released.
By outlining this framework, Meta aims to demonstrate its commitment to responsible AI development while distinguishing its approach from other firms with fewer safeguards. The policy comes amid growing scrutiny of AI's potential misuse, especially as open-source models gain wider adoption.