I listened to an interview recently which made me want to throw my phone into traffic.
President Biden’s former special AI Advisor, Ben Buchanan, spoke with Ezra Klein at the New York Times. The conversation is ominously titled ‘The government knows AGI is coming’.
Buchanan dislikes the term AGI; he prefers ‘strong AI’. But whatever you call it, the definition is what matters: an AI system capable of doing essentially any cognitive task as well as, if not better than, a human.
The interview starts with the obvious question for someone with his access to the halls of power: when can we expect this kind of AI to arrive?
His timeline is similar to one that much of the AI world is now coalescing around—approximately 24 months. It could be sooner, maybe slightly longer, but within President Trump’s current term.
He’s a measured, soft-spoken person, but his calm tone was starkly at odds with the weight of what he was saying. I had to rewind and listen twice just to make sure I was hearing him right.
Imagine hearing Ashley Bloomfield give a speech about protecting ourselves from future viruses, in which he casually slips in that the most virulent new flu will arrive with the aliens landing just after our next election.
And aliens aren’t a bad metaphor for what’s coming. New minds, superior to ours, suddenly appearing in our midst. As Dario Amodei from Anthropic says, it’ll be like a new nation suddenly appearing on Earth, one populated exclusively by super geniuses.
AGI’s invention will, and I really can’t stress this enough, be the most important thing that’s ever happened.
If that sounds like ludicrous hyperbole to you, I get it. But think closely about the social and economic consequences of having a system capable of doing almost everything better than almost any human. What does that do to every facet of social and professional life?
When asked what he and the Biden White House did with the knowledge that the world would probably be changing forever in very short order, Buchanan points to some executive orders on safety and restrictions on exports of advanced chips to China, meant to ensure America was leading the charge on the technology.
Klein, a very measured journalist, seems bewildered after listening to Buchanan’s list.
Frustrated, he asks the question I was all but shouting into my phone: if something this “f***ing big” is coming, how is the biggest action we can point to in preparation a set of trade restrictions on China?
Buchanan merely mumbles something about how he agrees governments should act with more urgency.
Paying attention to AI progress right now feels like being in the world of ‘Don’t Look Up’.
In the Leonardo DiCaprio-led black comedy, society’s response to climate change is embodied by a comet heading directly for Earth, one we simply refuse to acknowledge or do anything useful about until it’s too late.
It’s a good metaphor for climate change, but it fits artificial intelligence even better. Climate change is real and immensely consequential, but it is slow-moving.
Yes, we’re seeing the consequences all around us, but the worst effects will still unfold over decades and centuries. According to the former White House special advisor on AI and a large portion of Silicon Valley, AGI could arrive within President Trump’s second term.
At this point, most people are using ChatGPT, Claude or one of a dozen other models to sharpen up some writing, read boring research reports or generally assist their work in meaningful but not yet transformative ways.
It’s easy to see how most people might grasp that AI is going to be important, or even harbour a nebulous fear that it will one day replace some element of their job, without feeling a whole lot of urgency.
And I can already hear someone on X calling me an AI doomer, so, sure, this could all be hot air. I think believing that requires some wishful thinking, but bubbles are real, and technologists have promised us the moon before.
I personally think that if you’re on the other side of an argument with multiple Nobel laureates, the former White House AI special advisor, world-leading academics and the most highly paid researchers at the most valuable companies to ever exist, you’d better be damn sure of your position.
I think the argument now is only about when, not if, we reach generally intelligent AI.
But yes, AI progress could stall or fall off a cliff completely in the next few years. If that happens, it’s still terrible news, considering how much of the world economy is now dependent on it.
Apple, Microsoft, Nvidia: all of the world’s most valuable companies have essentially bet the farm on building AGI. If it proves impossible, we will have a crash for the ages.
I think some of the doubt stems from most people coming to terms with AI’s capability as it was 12 months ago, not as it is today, and not truly grappling with how quickly it’s advancing.
There’s a line from author William Gibson that’s popular with technologists right now: “The future is already here; it’s just not evenly distributed.”
Perhaps some people reading this have used OpenAI’s most recent product, ‘Deep Research’, which can provide a deeply thoughtful, well-cited and detailed analysis of virtually any topic.
I’ve used it to prepare briefings in minutes on topics that would still take me hours, if not a whole day, to prepare on my own. Talking to GPT-4.5 in voice mode is almost indistinguishable from talking to a person.
The capabilities for AI to replace many jobs are already there; it’s just that institutions and businesses are slow to adopt them even as those capabilities are increasing exponentially.
I think the key threshold we’re about to cross is the tools acting on their own. OpenAI and all the other major AI companies have some version of an ‘operator’ model, which they’re bringing to market over the next 12 months.
Once we can automate most emails, spreadsheets, and PowerPoints, many knowledge workers will suddenly start grappling with this problem more acutely.
We’re in an entirely different world once we have systems capable of brainstorming, planning and executing an idea.
There’s a quote from E.O. Wilson that might be the most useful thing I’ve heard for understanding why everything is so, well, frightening right now:
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.”
Look at social media as an example. We invented a godlike form of technology, the ability to connect with the thoughts and impressions of all of humanity at the click of a button. That activated all those lizard parts of our brains attracted to negativity, rage bait and polarisation.
It eroded trust in our government, universities and institutions, the very bodies we have to rely on to regulate godlike technology. This created a terrible feedback loop that we’re still struggling with today.
But social media was a very basic form of AI, essentially an algorithm that tries to guess what users are most likely to click on next.
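For the technically curious, here’s a deliberately toy sketch in Python of what ‘guessing the next click’ means mechanically. The features and weights are entirely invented for illustration; real feed-ranking systems are vastly more complex, but the core loop of scoring posts by predicted engagement and sorting the feed is the same basic idea.

```python
import math

# Toy click-through model: probability = sigmoid(weighted feature sum).
# Features per post: [outrage_bait, from_friend, has_image]
# These weights are made up for illustration; a real system would
# learn them from billions of past clicks.
WEIGHTS = [2.1, 0.8, 0.4]
BIAS = -1.0

def click_probability(features):
    """Predicted probability the user clicks, via a simple logistic model."""
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-score))

posts = {
    "calm policy explainer":   [0, 1, 0],
    "rage-bait hot take":      [1, 0, 1],
    "friend's holiday photos": [0, 1, 1],
}

# Rank the feed by predicted clicks; note which post floats to the top.
for title, feats in sorted(posts.items(),
                           key=lambda p: click_probability(p[1]),
                           reverse=True):
    print(f"{click_probability(feats):.2f}  {title}")
```

Run that and the rage bait tops the feed. Optimise for that one number at planetary scale and the lizard brain wins every time; the godlike part was never the maths, it was the reach.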
Tristan Harris, co-founder of the Center for Humane Technology, argues social media was essentially our test run for what AI will do to us.
If social media was first contact with aliens, AGI is the aliens actually landing on Earth.
And if aliens really are watching Earth right now, they wouldn’t even want to take a bathroom break. It feels like they’re about to see humanity’s series finale.