Here's why AI is slowing down
If your bubble is similar to my bubble, you’ve been informed that AI has reached a plateau.
Ilya Sutskever says scaling is coming to an end, and OpenAI is reportedly shifting strategy. Here's what's up with AI slowing down: if you believe your technology is growing exponentially, you have to expect a plateau. It's exactly what the scaling laws would predict, and it's what we see in nature all the time.
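To see why the scaling laws themselves point at this, here's a minimal sketch. The power-law form follows Kaplan et al.; the constants are invented for illustration, not fitted to any real model:

```python
# Kaplan-style scaling law: loss falls as a power law in compute,
# loss(C) = (C_c / C) ** alpha. Constants are made up for this sketch.
C_c, alpha = 1.0, 0.05

def loss(compute: float) -> float:
    return (C_c / compute) ** alpha

for doublings in range(0, 41, 10):
    c = 2.0 ** doublings
    print(f"{doublings:2d} doublings of compute -> loss {loss(c):.3f}")

# Every doubling of compute buys the same ~3% relative improvement
# (a factor of 2**-alpha), so each further gain costs exponentially
# more compute: a power law has the feeling of a plateau built in.
```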
I wrote about this a year ago in my Zeit Online newsletter (with less of a math focus), and it has been a frequent audience question after talks and panels in recent months, so I'll give a brief summary here:
Growth is not free and innovation is not free. Exponential growth is a thing, but it always eventually plateaus as some underlying resource depletes. For AI this could be data, or compute hardware, or the energy required to keep that hardware running. Or the funding you need to pay for all that energy. For an exploding rabbit population it might be food or space. For a virus, it’s the number of hosts that don't have immunity yet. You know this intuitively. You see it all the time.
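If you want to see that intuition as math, the textbook model is logistic growth: the same equation produces both the exponential takeoff and the plateau. A minimal sketch, where K and r are arbitrary numbers chosen only to make the S-curve visible:

```python
# Resource-limited (logistic) growth: exponential at first, flat at the end.
K, r = 1_000.0, 0.5   # carrying capacity (the finite resource) and growth rate
x = 1.0               # initial population / capability / adoption level

for step in range(30):
    x += r * x * (1 - x / K)  # discrete logistic update
    if step % 5 == 4:
        print(f"step {step + 1:2d}: x = {x:7.1f}")

# While x << K, the factor (1 - x/K) is ~1 and growth is effectively
# exponential; as x approaches K, the very same equation flattens out.
```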
I wonder whether AI people forget this, because most of what they think about every day is mathematics, which is the only discipline where real exponentials exist. John von Neumann reportedly said:
"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"
I think that’s true. For a mathematician like von Neumann, that is. From a theoretical standpoint, even 76 years later, I cannot tell you a single skill that machines will never be able to do. The limits are physical, not mathematical: you need time, you need engineers, you need money.
The reason people get mixed up is that we see innovation speeding up all the time. This is also intuitive: technological progress feels exponential. But this is growth in many different interlaced areas, humans building on other humans, connecting new results with ancient observations. Even though on a grand scale it might feel like it, it's not one smooth graph. Innovation speeds up and peters out, then sometimes it crystallizes around a breakthrough discovery, and we invest in whatever we've learned there until we hit some plateau again.
The more people invest (time and money and attention), the steeper the growth feels, because human time and money and attention compound in this way. That's where the exponentiality comes from.
All this of course also means that this is not the end of growth for AI. There was just never any "free growth forever". We're hitting plateaus with data and energy, so progress might next come from models getting smaller and more efficient. Transformers/LLMs have reached the plateau of appearing to become smarter with scale, so research will have to focus on other areas. Maybe we'll see a shift to specialized applications, maybe symbolic approaches will make a comeback, maybe test-time training will help. (I don't believe that improving inference the way o1 does is "reasoning", or that it will be significant; that's just brute-forcing your way around the limits of next-token prediction. But that's something for another post.)

Final intuitive example: Yes, we can build a machine that writes emails for you. But it will not be a magic device that appears overnight in some model that we've scaled up. It will be a piece of software that people have painstakingly put together using their understanding of the problem. Combining different approaches, using time and money and energy.
IGF (infinite growth forever)
Marie
PS: I’m also posting this on LinkedIn and would be happy to read your comments. Expect the next edition of this newsletter to arrive at an exponentially faster rate.
Further Reading
My November 2023 text on growth predictions (in German; Zeit Online: Natürlich intelligent)
A long post with believers’ vs. skeptics’ arguments on scaling (Dwarkesh Patel, Dec 23)
In other news
Meta-analysis of how and when humans best “collaborate” with AI (Nature)
Google, Microsoft, and Perplexity Are Promoting Scientific Racism in Search Results (Wired)
Yann LeCun on how machines could reach human-level intelligence (YouTube lecture)
Try this
I’ve started using Readwise to collect articles (my previous tool Omnivore is shutting down) and I’ve been loving the integration with Obsidian and the SmartConnections Plugin. This setup allows me to highlight and annotate anything I’m reading or watching online, save my notes locally as .md files (forever!) and use AI to chat with my notes or find connections. Let me know if you’d like detailed instructions.
Suno v4 is launching soon. A collection of examples. (Tom’s Guide)
SillyTavern, a locally installed frontend for AI power users