Google CEO Sundar Pichai suggested at the New York Times’ annual Dealbook Summit on Wednesday that the days of “low-hanging fruit,” when AI developers could make easy gains by training on large datasets scraped from the internet, may be over.
“In the current generation of LLM models, roughly a few companies have converged at the top, but I think we’re all working on our next versions too,” Pichai said. “I think the progress is going to get harder.”
Pichai’s comments come as researchers report that the pace of improvement in AI models has slowed compared with two years ago, when ChatGPT launched to the public.
Others raising concerns about the diminishing returns of AI models include Ethereum co-founder Vitalik Buterin, a16z’s Marc Andreessen and Ben Horowitz, and OpenAI co-founder and former chief scientist Ilya Sutskever, who has pointed out that scaling AI models trained on massive amounts of unlabeled data has hit a plateau.
“When I look at 2025 the low-hanging fruit is gone, the curve, the hill is steeper,” Pichai added. “I think the elite teams will stand out in 2025, so I think it’s an exciting year from that perspective.”
Pichai’s remarks reflect a growing sentiment in the AI world that the days of simply building bigger models and throwing more data at them to achieve better results might be coming to an end.
Adding to these concerns is the risk of an “AI ouroboros” effect, where models train on data generated by other AIs rather than human-created content.
Named after the ancient image of a serpent eating its own tail, the ouroboros effect in AI occurs when one AI trains on data generated by another, creating a feedback loop. Over time, this can lead to repetitive or distorted outputs as the system comes to rely on AI-generated rather than human-created data, effectively hitting a wall.
While some developers worry about AI models reaching a plateau, Pichai remains optimistic, suggesting the field will see significant advancements in the coming year.
“I expect a lot of progress in 2025, so I don’t fully subscribe to the wall notion,” Pichai said. “But when you start out quickly scaling up, you can throw more compute, and you can make a lot of progress, but you’re definitely going to need deeper breakthroughs as we go to the next stage.”