Meta's global policy head, Sir Nick Clegg, has warned governments against 'fragmented' regulation of AI, advocating instead for an international agency to guide oversight of the technology.
Meta is looking to set 'an early benchmark' for transparency and safety mitigations with this week's release of Llama 2, its large language model developed with Microsoft. The rapid spread of services like this has prompted scrutiny of the ethical and legal concerns surrounding the technology, including copyright, misinformation and online safety. Indeed, Meta says it is encouraging tech companies to set their own guidelines on transparency, accountability and safety while governments work out official regulations.
On more immediate concerns, Clegg dismissed worries about the unfair use of data and copyrighted material to train the models. He also downplayed suggestions that content creators such as artists or news outlets, whose work is scraped to train chatbots and generative AI, should be paid, since, he argues, the information is available under fair use arrangements.