My take on the AI bubble
Timeline
- End of 2022 – Release of ChatGPT, which reached 100 million users within two months and brought LLMs into the mainstream.
- Starting with GPT-4, a pivot toward more “efficient” models (achieving better results from the same amount of training data).
- 2024 – Training data recognized as a bottleneck (“training data is the fossil fuel of AI” – Ilya Sutskever).
- Continued scaling of model size and compute, along with new developments in training techniques.
- Late 2024 / Early 2025 – Adoption of “thinking models” (OpenAI o1, DeepSeek R1).
- Problem: Reasoning tokens make each prompt significantly more expensive to serve, a considerable issue for prompt-heavy applications such as so-called “vibe coding” apps.
- GPT-5 performs below expectations.
- Primary AI providers (xAI, OpenAI, Anthropic) start pivoting to the secondary AI market.
- Valuations of secondary-market companies become massively inflated.
But:
- It seems people are recognizing that the “early adopters” (developers trying to apply AI in areas they know nothing about) maybe shouldn’t be the ones driving adoption.
- Instead, it should be people with deep industry knowledge trying to adopt AI.
- This will take longer but will undoubtedly bring revenue to the primary providers.
- So it’s basically a waiting game until different sectors figure out exactly how AI can help them grow revenue.
- What didn’t happen: AI didn’t suddenly turn the world upside down and revolutionize every economic sector.
- Instead, ChatGPT rapidly took on the role of the world’s internet search engine, information gatherer, essay writer, and coding assistant, but it saw little rapid adoption anywhere else.