Sam Altman kicked off 2025 with a bold claim: OpenAI now knows how to build artificial general intelligence (AGI), meaning AI systems that can understand, learn, and perform any intellectual task a human can.
In a reflective blog post published over the weekend, Altman also mentioned that the first wave of AI agents could enter the workforce this year, marking a pivotal moment in technological history.
Altman described OpenAI’s journey from a quiet research lab to a company on the verge of creating AGI.
The timeline may seem ambitious, especially considering that ChatGPT, OpenAI’s chatbot, marked its second anniversary only a month ago. Nevertheless, Altman suggests that the next wave of AI models capable of complex reasoning has already arrived.
The focus now, he says, is on integrating AI into society as it approaches and eventually surpasses human capabilities.
However, Altman’s explanation of AGI remains vague, and his timeline predictions have raised concerns among AI researchers and industry experts.
Altman wrote, “We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
The lack of a standardized definition for AGI contributes to the vagueness of Altman’s statements. As AI models have grown more powerful, the goalposts have shifted, and passing traditional benchmarks such as the Turing test is no longer taken as evidence of general intelligence, let alone sentience.
Altman’s optimism contrasts with the expert consensus, leading to questions about what he means by “AGI.” The idea of AI agents joining the workforce in 2025 sounds more like advanced automation than true artificial general intelligence.
Altman’s claims of imminent breakthroughs may help maintain investor interest, considering OpenAI’s substantial operating costs.
Nevertheless, some experts support Altman’s claims. According to Harrison Seletsky, director of business development at SPACE ID, if Altman’s statements are true and technology continues to evolve, “broadly intelligent AI agents” may be only a year or two away.
Altman also hinted that AGI is not the ultimate goal for OpenAI, as the company aims for artificial superintelligence (ASI), where AI models surpass human capabilities at all tasks.
Altman did not provide a timeframe for ASI. Some forecasts place the point at which robots could replace humans entirely as far out as 2116, while experts from the Forecasting Institute put a 50% probability on ASI arriving no earlier than 2060.
Knowing how to achieve AGI does not guarantee its realization. Limitations in training techniques and hardware pose significant challenges.
Eliezer Yudkowsky, an influential AI researcher and philosopher, suggests that Altman’s bold predictions may be a short-term strategy to benefit OpenAI.
AI agents, systems that can autonomously plan and execute multi-step tasks on a user’s behalf, are improving in quality and versatility faster than expected.
Frameworks like Crew AI, Autogen, and LangChain have enabled the creation of AI agent systems with various capabilities, including collaborative work with users.
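The pattern these frameworks build on can be sketched independently of any particular library: a loop in which a model chooses a tool, observes the result, and repeats until it can answer. The sketch below is a toy illustration in plain Python; the decision step is a stub, and names like `choose_action` and `run_agent` are illustrative, not the real API of Crew AI, Autogen, or LangChain.

```python
# Toy sketch of the tool-calling loop behind agent frameworks.
# A real agent would replace choose_action with an LLM call.

def calculator(expression: str) -> str:
    """A tool the agent can invoke (toy arithmetic only)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def choose_action(task: str, observations: list[str]) -> tuple[str, str]:
    """Stub for the model's decision step (illustrative, not a real API).
    Returns ("tool_name", tool_input) or ("finish", final_answer)."""
    if not observations:
        return ("calculator", task)      # first step: call a tool
    return ("finish", observations[-1])  # then: answer from its output

def run_agent(task: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):           # cap steps to avoid infinite loops
        action, payload = choose_action(task, observations)
        if action == "finish":
            return payload
        observations.append(TOOLS[action](payload))  # observe tool result
    return "gave up"

print(run_agent("2 + 3 * 4"))  # → 14
```

The essential ideas the real frameworks add on top of this loop are richer tool schemas, memory across steps, and multi-agent collaboration.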
Experts are not overly concerned about the impact of AI agents on human workers. While demand for human labor on repetitive tasks may shrink, the likely pattern is that AI absorbs increasingly sophisticated repetitive work while decision-making remains with humans.
Experts also note that AI decision-making lacks the “humanity” of human judgment. Precisely because it strips out emotional bias, however, objective, data-driven AI can be a useful aid in areas such as financial decisions.
The adoption of AI agents raises social implications, and research suggests that collaboration between AI and humans is crucial for societal growth.
Although companies have started replacing human workers with AI agents, there is still a need for human intervention due to hallucinations, training limitations, and lack of context understanding.
Some CEOs are excited about the prospect of commanding fleets of tireless digital agents, while some experts argue the disruption could reach the top as well: AI agents could potentially outperform CEOs themselves on about 80% of their tasks.
In conclusion, Altman’s claims about AGI and AI agent integration in 2025 have generated mixed reactions. The lack of a standardized definition for AGI and the challenges associated with achieving true artificial general intelligence contribute to the skepticism surrounding these claims.