Predictions for the AI Race

There are a lot of different perspectives on the AI race and on what would lead to the best or worst outcomes. Rather than argue for one of them, I'm going to predict how things will turn out based on what the incentives suggest, drawing on ideas from game theory.

No one will choose to slow down

There's no world where everyone unanimously agrees to pause AI, even though many people see a pause as valuable for society. Each party would prefer that everyone else pause or slow down while it continues its own progress. Even if everyone did agree to pause, it would only take one party breaking the agreement to make everyone else follow suit. The defection doesn't even have to be explicit progress on AI; it could be something in a gray area, like performance optimizations or incremental improvements to deployed systems.

This dynamic is strongest between nations. The US and China have no mechanism to credibly commit to slowing down, and each side has strong incentives not to fall behind. A pause agreement among companies within one country just hands the advantage to the other.
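To make the incentive structure concrete, here's a minimal sketch of the pause-vs-continue decision as a two-player game. The payoff numbers are made-up illustrative assumptions, not estimates; only their ordering matters, and that ordering is what makes continuing the dominant strategy for each side.

```python
# A toy payoff matrix for the pause-vs-continue decision. The numbers are
# illustrative assumptions; only their ordering matters.
PAYOFFS = {
    # (my_choice, their_choice): (my_payoff, their_payoff)
    ("pause", "pause"):       (3, 3),  # both sides get time to manage risks
    ("pause", "continue"):    (0, 5),  # I fall behind, they take the lead
    ("continue", "pause"):    (5, 0),  # I take the lead
    ("continue", "continue"): (1, 1),  # the race goes on, risks compound
}

def best_response(their_choice: str) -> str:
    """Return my payoff-maximizing choice given what the other side does."""
    return max(("pause", "continue"),
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

for theirs in ("pause", "continue"):
    print(f"If they {theirs}, my best response is to {best_response(theirs)}.")
# Both lines print "continue": pausing is never the individually rational
# move, even though mutual pausing beats mutual racing.
```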

Innovations don't stay secret

No single entity will come up with a significant AI innovation that no one else replicates. Once a game-changer becomes publicly known to exist, everyone else will try to figure it out for themselves and may even collaborate to do so. Secrecy only erodes over time: knowledge moves in one direction, toward becoming public, and it only takes a single person to share it.

A hypothetical world where some frontier lab keeps such an innovation secret for any extended period would require everyone with knowledge of it to never leave the lab and never accidentally leak it, and for no one else to rediscover it independently. OpenAI's reasoning models are a good example: it only took DeepSeek a few months to release their own reasoning models, and details like reasoning traces gave away the approach OpenAI had used to create them.
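As a rough back-of-the-envelope model of this, suppose every person who knows the secret has some small independent chance of leaking it each month. The headcount and leak rate below are assumptions chosen purely to illustrate how quickly the odds compound, not estimates about any particular lab, and the model ignores independent rediscovery, which only makes secrecy harder.

```python
# Probability that a secret survives, assuming each of `people` insiders
# independently leaks with probability `monthly_leak_prob` in any month.
# All numbers are illustrative assumptions.

def p_secret_survives(people: int, monthly_leak_prob: float, months: int) -> float:
    """Probability that no one leaks within the given number of months."""
    return (1 - monthly_leak_prob) ** (people * months)

for months in (3, 6, 12, 24):
    p = p_secret_survives(people=50, monthly_leak_prob=0.005, months=months)
    print(f"{months:>2} months: {p:.1%} chance the secret survives")
# Roughly 47%, 22%, 5%, and 0.2%: even tiny per-person leak probabilities
# compound quickly once dozens of people are in the know.
```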

Models will be commoditized

Both open-source and closed-source models are steadily improving along multiple axes, with closed-source models still ahead. Models continue to get bigger, more capable, and more data-efficient: at a fixed model size, capabilities are improving, and at a fixed capability level, the required model size is shrinking.

The only real lasting advantage frontier labs have is compute capacity in data centers. Models will only become more commoditized over time. Beyond a certain capability threshold, most use cases see diminishing returns from further scaling. This is analogous to how phones and laptops are the dominant computing devices these days, even though more powerful machines exist. I believe that eventually, ordinary computers will be perfectly sufficient for running personal AGI models.
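Here's a toy illustration of the diminishing-returns point. Assume, purely for the sake of the sketch, that some error metric falls as a power law in compute; the exponent and units are made up and are not a fitted scaling law, but the shape is what matters: each additional 10x of compute buys a smaller absolute improvement than the last.

```python
# A toy diminishing-returns curve: error falls as compute ** -0.1.
# The exponent is an assumption for illustration, not a fitted scaling law.

def error(compute: float, exponent: float = 0.1) -> float:
    return compute ** -exponent

prev = error(1.0)
for power in range(1, 6):
    cur = error(10.0 ** power)
    print(f"10^{power}x compute: error {cur:.3f} "
          f"(gain of {prev - cur:.3f} over the previous 10x)")
    prev = cur
# The absolute gain shrinks with every extra order of magnitude of compute,
# which is why "good enough" local hardware can eventually cover most use cases.
```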

AI creates and mitigates its own risks

Regarding AI alignment, the biggest risk is a single misaligned AI with too much power and control over critical systems. Given the commoditization of models, the most practical form of alignment is having different models act as checks on each other, as long as they're trained independently of each other. Different training data and procedures would naturally give models different failure modes, making correlated failures less likely. I don't believe there's such a thing as a single universally good AI that could be trusted with everything. The upside of competition in AI is that no one wants to lose the race, so a single dominant AI is unlikely.
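A quick sketch of why independence matters: if each model fails on a given input with some probability and those failures really are independent, the chance that every model in a panel fails at once shrinks multiplicatively. The failure rates below are assumptions for illustration; real models share data and techniques, so their failures correlate and the true number sits somewhere between the two extremes.

```python
# Probability that every model in a panel fails at once, assuming the
# failures are fully independent. The 5% failure rate is an illustrative
# assumption, not a measured number.

def all_checks_fail(per_model_failure: float, num_models: int) -> float:
    return per_model_failure ** num_models

single = all_checks_fail(0.05, num_models=1)  # one model acting alone
panel = all_checks_fail(0.05, num_models=3)   # three independently trained models

print(f"single model fails:     {single:.3%}")
print(f"all three fail at once: {panel:.5%}")
# 5% drops to 0.0125% when three independent models must all fail together.
# Shared training data or procedures push the real number back up toward 5%.
```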

In other areas like cybersecurity, the increased risk of AI being used to exploit systems is likely counterbalanced by the increased defensive capabilities AI can offer. More cause for concern comes from subtler risks like misinformation and control of information: obvious consequences prompt active mitigation, whereas subtle consequences can go unchecked for long periods of time.

Wealth won't redistribute itself

I doubt the economic value from increased AI and automation will spread broadly. Money and resources will increasingly concentrate among the winners of the AI race, and nothing in the race's dynamics incentivizes those winners to redistribute their wealth. The same logic from earlier applies here: any individual company or nation benefits more from capturing value than from sharing it. Regulation could change this, but regulation requires coordination, and coordination is exactly what the incentives work against, at least in the short term.


On the whole, I'm predicting that AI will revolutionize certain areas like the sciences and automation, yet most people's daily lives will feel much the same.