Last month, tech outlet The Information reported that OpenAI and its competitors are switching techniques as the rate of improvement from scaling has slowed dramatically. For a long time, you've been able to make AI systems dramatically better at a wide range of tasks just by making them bigger.
Why does this matter? All kinds of problems once believed to require elaborate custom solutions crumbled in the face of greater scale. We have applications like OpenAI's ChatGPT because of scaling laws. If that no longer holds true, then the future of AI development will look a lot different than the past, and potentially a lot less optimistic.
This reporting was greeted with a chorus of "I told you so" from AI skeptics. (I'm not inclined to give them too much credit, since many of them have surely predicted 20 of the last two AI slowdowns.) But it was harder to gauge how AI researchers felt about it.
Over the past few weeks, I've pressed AI researchers in academia and industry on whether they thought The Information's story reflected a real slowdown, and if so, how it would change the future of AI.
The overall answer I've heard is that we should probably expect the impact of AI to grow, not shrink, over the next few years, regardless of whether naive scaling actually slows down. That's because when it comes to AI, a huge amount of impact is already waiting to happen.
Powerful systems that can do many commercially valuable tasks are already available; it's just that no one has figured out many of those commercial applications yet, let alone implemented them.
It took decades from the birth of the internet for it to transform the world, and it may take decades for AI too (though many people at the cutting edge of this field stubbornly insist that in just a few years, our world will be unrecognizable).
Bottom line: If greater scale doesn't give us greater returns, that's a big deal for how the AI revolution will play out, but it's not a reason to declare the AI revolution off.
Most people hate AI while underrating it
What those in the artificial intelligence bubble may not realize: AI is not a beloved hot new technology, and it's actually getting less popular over time.
I've written that I think it poses extreme risks, and many Americans agree with me. But many people dislike it in a much more mundane way.
By far its most visible consequences are unpleasant and depressing. Google Images results are full of terrible low-quality AI slop instead of the cool and varied artwork they used to display. Teachers can't really assign take-home essays anymore because AI-written work is so pervasive, while many students have been incorrectly accused of using AI by unreliable AI detection tools, which is awful. Artists and writers are angry that their work was used to train models that will then imitate it.
Much of this frustration is justified. But I think there's an unfortunate tendency to conflate "AI sucks" with "AI isn't that effective." The question "What is AI good for?" is a popular one, but the de facto answer is that AI is already good at a large number of things, and new applications are being developed at a breathtaking pace.
I think our frustration with the omnipresent AI slop and the carelessness with which AI is developed and deployed may lead us to underrate AI as a whole. Many people eagerly seized on the news that OpenAI and its competitors are struggling to improve next-generation models, taking it as proof that the AI wave was all hype and that bitter disappointment will follow.
Two weeks later, OpenAI announced its latest generation of models, and sure enough, they are better than ever. (A caveat: it's hard to tell how much of the improvement came from scale as opposed to other possible sources of improvement, so this doesn't mean the initial report was wrong.)
Don't let the AI slowdown fool you
It's fine to dislike AI. But underrating it is a bad idea. And it's a bad habit to take every hiccup, setback, limitation, or engineering challenge as a reason to expect our world's AI transformation to stall, or even slow down.
Instead, I think a better way to look at it is that, at this point, an AI-driven transformation of our world is bound to happen. Even if no models larger than today's are ever trained, the existing technology is sufficient for large-scale disruptive change. And reasonably often, when a constraint does arise, it's prematurely declared completely intractable ... and then resolved in short order.
After a few go-rounds of this particular dynamic, I'd like to see if we can head it off at the pass. Yes, the various technical challenges and limitations are real, and they are prompting strategic changes at the big AI labs and shaping how progress will be made in the future. No, the latest such challenge doesn't mean the AI wave is over.
AI is here to stay, and our reaction to it has to move past wishing it would go away.
A version of this story originally appeared in the Future Perfect Newsletter. Sign up here!