This post will not contain original insight or argumentation and is intended more as a ‘public service announcement’ for people who are not following developments in AI. I do not consider these quotes persuasive in and of themselves, but I do put weight on them.
If you know other recent quotes about the pace of AGI (on either side of the discussion), please let me know!
What is AGI?
‘AGI’ stands for Artificial General Intelligence. There is no agreed-upon definition, but roughly, an AGI is an artificial system that is as capable as humans at the vast majority of tasks.
Quotes
These are quotes from various figures in the AI space about how soon AGI could reasonably arrive.
Dario Amodei, CEO of Anthropic. From 11 Nov 2024, via Lex Fridman Podcast.
If you extrapolate the curves that we’ve had so far, if you say, “We’re starting to get to PhD level, and last year we were at undergraduate level and the year before we were at the level of a high school student.” […] if you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027.
You can quibble about what tasks and that we’re still missing modalities, but those are being added. Computer use was added, image and video generation has been added.
This is totally unscientific, lots of things could derail it. We could run out of data. We might not be able to scale clusters as much as we want. Maybe Taiwan gets blown up or something, and then we can’t produce as many GPUs. So I don’t fully believe the straight line extrapolation, but if you believe the straight line extrapolation, we’ll get there in 2026 or 2027.
I think the most likely is that there is some mild delay relative to that, but I don’t know what that delay is. I think it could happen on schedule. I think there are still worlds where it doesn’t happen in a hundred years. The number of those worlds is rapidly decreasing.
We are rapidly running out of truly convincing blockers, truly compelling reasons why this will not happen in the next few years. There were a lot more in 2020, although my hunch at that time was that we’ll make it through all those blockers. So sitting as someone who has seen most of the blockers cleared out of the way, I suspect that the rest of them will not block us.
I don’t want to represent this as a scientific prediction. People call them scaling laws. That’s a misnomer. Like Moore’s law is a misnomer. Moore’s laws, scaling laws, they’re not laws of the universe. They’re empirical regularities. I am going to bet in favor of them continuing, but I’m not certain of that.
Jack Clark, Head of Policy at Anthropic. From 9th Dec 2024, via the 6th Athens Roundtable.
My colleague and co-founder Dario recently wrote that by 2026 we expect that powerful AI systems will be capable of Nobel prize winning performance on many different scientific benchmarks and they will be able to carry out large, long-scale tasks for us in almost arbitrary domains. He compared it to a country of geniuses inside a data centre. This is not marketing. It’s what we believe.
Miles Brundage, full-time AI Policy Researcher, previously OpenAI’s Senior Advisor for AGI Readiness. From 20th Dec 2024, via his reaction to the announcement of o3.
AI that exceeds human performance in nearly every cognitive domain is almost certain to be built and deployed in the next few years.
[…] There is no secret insight that frontier AI companies have which explains why people who work there are so bullish about AI capabilities improving rapidly in the next few years. The evidence is now all in the open. It may be harder for outsiders to fully process this truth without living it day in and day out, as frontier company employees do, but you have to try anyway, since everyone’s future depends on a shared understanding of this new reality.
[…]
it is not just researchers but also the CEOs of these companies who are saying that this rate of progress will continue (or accelerate). I know some people think that this is hype, but please, please trust me — it’s not.
Nathan Labenz, host of the Cognitive Revolution Podcast, who follows AI development broadly as a full-time occupation. From 22 Dec 2024, on the Future of Life Institute podcast on the State of AI and Progress since GPT-4. Note this is not as bullish as the above. I can recommend this podcast episode as a good review of AI development over the past year or so.
All this stuff is a kind of a Rorschach test. People are looking at the exact same evidence and coming to very different conclusions. I think it is pretty objectively clear at this point that the frontier models are closing in on expert performance on routine tasks. That is my standard way of framing it. ‘Routine’ meaning that there are data or examples for the model to learn from. ‘Task’ meaning relatively finite in scope and not yet something that is job-sized. With the latest o1 models [this interview was pre-o3] you might say that they’re starting to exceed expert performance in some of those routine tasks. And we’re seeing flashes of these eureka moments where models occasionally figure something out that people don’t necessarily know coming in. For example, AI helping materials scientists.
[…] We also did not get clarity on almost any of the big-picture questions. All the lab leaders are saying we are going to have AGI in the next couple of years. There’s also a certain discourse that everything is flattening off, and you should expect this if we are running out of human data. I suspect that [flattening off] is not going to hold. I do think we’re going to find other ways to teach these things other than just imitating us. Anywhere you can get actual real feedback, e.g. coding. […] the more amenable something is to immediate programmatic evaluation, the faster you should expect superhuman stuff to emerge. The slower or more resource-intensive it is to get ground truth for feedback, the slower development will be, but it should still be possible. And then for some things, where it’s inherently a human aesthetic, it may be a different story. There just might not be such a thing as superhuman poetry.
Arvind Narayanan, Benedikt Ströbl, and Sayash Kapoor, researchers at Princeton University. From their post on whether AI progress is slowing down. They do not comment on AGI directly, but they do comment on scaling and the trustworthiness of industry insiders. Note this was posted before o3 was announced.
In this essay, we look at the evidence on this question [of scaling], and make four main points:
Declaring the death of model scaling is premature.
Regardless of whether model scaling will continue, industry leaders’ flip-flopping on this issue shows the folly of trusting their forecasts. They are not significantly better informed than the rest of us, and their narratives are heavily influenced by their vested interests.
Inference scaling is real, and there is a lot of low-hanging fruit, which could lead to rapid capability increases in the short term. But in general, capability improvements from inference scaling will likely be both unpredictable and unevenly distributed among domains.
The connection between capability improvements and AI’s social or economic impacts is extremely weak. The bottlenecks for impact are the pace of product development and the rate of adoption, not AI capabilities.