In the days leading up to OpenAI CEO Sam Altman's temporary departure, several staff researchers sent the board of directors a letter warning of a significant AI discovery that they said could have far-reaching implications for humanity, two people familiar with the matter told Reuters.
The letter describes a powerful AI project called Q* (pronounced Q-Star), which could mark a breakthrough in the pursuit of artificial general intelligence (AGI), a form of AI intended to surpass human capabilities in economically valuable tasks.
While generative AI has excelled at language-related tasks, the ability to solve complex mathematics would point to a more human-like capacity for reasoning. The algorithm's proficiency with such problems has sparked optimism about its prospects and its potential to reshape the landscape of AI development. Researchers foresee applications of Q* in novel scientific research, envisioning AI that can generalise, learn, and comprehend in a manner reminiscent of human intelligence.
Concerns about Q* were not the sole trigger for Altman's removal; rather, they sat within a broader spectrum of worries, chief among them the potential hazards of commercializing AI advances without a full understanding of their consequences. Sources affirm that the board's anxieties and the letter's contents played a pivotal role in the abrupt decision to part ways with Altman.