
Why the Smartest People in AI Disagree

What their disagreement reveals about how organizations should prepare for what comes next



The Question Behind the Question

Some of the smartest people working on AI disagree about where it is going. Not just on timelines, but on fundamentals. Some argue that progress will slow because we are hitting physical limits. Others believe that new breakthroughs will unlock systems far more general than anything we have today. Still others argue that the very idea of “superintelligence” is a distraction.

For a long time, I found this disagreement confusing. These are people with access to the same research, the same models, and often the same data. If anyone should agree about the future of AI, it should be them.

Over time, I started to suspect that the disagreement wasn’t really about technology.

It was about what counts as success.

When people talk about AI “winning,” they often mean different things. Sometimes they mean being first. Sometimes they mean being most capable. Sometimes they mean building something that looks impressive on a benchmark or in a demo. These goals are easy to measure, and they dominate public discussion.

They are also insufficient.

For me, advanced AI is only a success if three things are true:

  • its benefits are broadly shared rather than concentrated,
  • it makes human work more rewarding instead of hollowing it out,
  • and it can exist within real physical and ecological limits.

Once I started looking at the AI debate through this lens, many disagreements made more sense. People weren’t talking past each other because they misunderstood the technology. They were optimizing for different outcomes.

This essay is an attempt to understand those differences—not to predict who will be right about AGI, but to ask a more practical question: how should organizations act when the technology is powerful, the future is uncertain, and the consequences are unevenly distributed?


How Different Views of Success Shape Different Strategies

Once you define success this way, the disagreement around AI becomes easier to interpret. The question is no longer who is right about the future, but what each group is trying to optimize for.

This becomes especially clear when you look at a small number of influential voices—not as prophets, but as representatives of distinct strategies. Each is responding to the same technological reality. Each sees real risks. Where they diverge is in what they believe should be maximized, and what they believe must be constrained.

Understanding these differences matters, because organizations often copy the assumptions of the loudest or most successful players without realizing it. Before adopting their tools or their rhetoric, it is worth understanding the world they are implicitly trying to build.


Dan Wang: Speed, Scale, and the Logic of Competition

Dan Wang approaches the future of AI from a geopolitical angle. In his book Breakneck: China’s Quest to Engineer the Future, the central question is not what intelligence is, but how technological capability translates into national power.

Wang’s core observation is simple: China and the United States are locked in a competition where speed matters. Not just speed of invention, but speed of deployment. The advantage does not necessarily go to whoever builds the most elegant system, but to whoever can turn new capabilities into real-world infrastructure fastest.

China, as Wang describes it, excels at this. Once a technology is deemed strategically important, it can be rolled out at scale, embedded into institutions, and iterated on quickly. The United States, by contrast, tends to lead in early research but often struggles with coordination and diffusion.

What matters is that Wang’s argument does not depend on AGI arriving soon—or at all. Even narrow or imperfect AI systems can have enormous impact if they are widely deployed and tightly integrated into society.

This leads to a very specific definition of success: whoever aligns technology, institutions, and incentives most effectively will win.

From this perspective, questions about distribution, meaningful work, or sustainability are secondary. They may matter socially or politically, but they are not the primary drivers of the strategy.

Wang describes the world as it is. And it is against this reality—of competition, pressure, and uneven incentives—that the other perspectives react.


Tim Dettmers: The Case for Physical and Economic Limits

If Wang represents speed, Tim Dettmers represents constraint.

In his essay Why AGI Will Not Happen, Dettmers argues that the current AI strategy—relentless scaling through more compute, more energy, and more capital—runs into physical and economic limits much sooner than most narratives admit.

Computation is not abstract. It happens on chips that consume power, generate heat, and depend on complex supply chains. For a long time, progress felt almost free. Bigger models reliably worked better. Hardware improved predictably. Capital was abundant.

Dettmers argues that this era is ending. Linear gains now require exponential resources, and that trajectory cannot continue indefinitely.
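
One way to make that claim concrete: empirical scaling laws are usually summarized as a power law relating loss to compute. The toy sketch below assumes such a relationship with an invented exponent (nothing here is taken from Dettmers’s essay); it only illustrates the shape of the argument, namely that asking for steady, linear improvements forces compute to grow multiplicatively.

```python
# Toy illustration (invented numbers, not from Dettmers): assume an idealized
# power-law scaling relationship loss = compute**(-alpha). Then every further
# fixed reduction in loss requires multiplying the compute budget yet again,
# i.e. roughly linear gains demand roughly exponential resources.

alpha = 0.05        # hypothetical scaling exponent
compute = 1.0       # compute budget, arbitrary units
loss = compute ** -alpha

for _ in range(5):
    target = loss - 0.01                # ask for a fixed, "linear" gain
    needed = target ** (-1 / alpha)     # compute required to reach that loss
    print(f"loss {loss:.3f} -> {target:.3f}: {needed / compute:.2f}x more compute")
    compute, loss = needed, target
```

The exact numbers are meaningless; the point is that the multiplier never goes away, which is why “just scale further” eventually collides with energy, capital, and supply-chain limits.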

This matters because “speed at all costs” assumes scaling is always available as an option. Dettmers challenges that assumption. If compute, energy, and money become binding constraints, then racing faster becomes a gamble rather than a strategy.

There is also an implicit sustainability argument here. Even if massive scaling were technically possible, it raises questions about environmental impact and opportunity cost.

Dettmers does not claim that AI development will stop. His point is more uncomfortable: the easiest path forward is narrowing, and organizations built on assumptions of unlimited growth may find themselves brittle.


Ilya Sutskever: Limits Matter, but Curves Can Still Bend

Where Dettmers sees constraints, Ilya Sutskever sees a fork in the road.

Sutskever has openly stated that the era of effortless scaling is ending, but he does not conclude that progress must therefore stall. Instead, he argues that limits signal the need for conceptual breakthroughs.

Past progress in AI has not come from scaling alone. Backpropagation, convolutional networks, transformers—each reshaped what scaling even meant. In hindsight they look obvious. At the time, they were not.

This belief explains his focus on long-term research and safety, most recently through Safe Superintelligence Inc.

What distinguishes this view is its combination of ambition and restraint. Sutskever takes the possibility of extremely powerful systems seriously—and precisely because of that, treats alignment and safety as prerequisites rather than afterthoughts.

For organizations, this suggests a different posture toward uncertainty: build the capacity to adapt, rather than optimizing prematurely for today’s dominant paradigm.


Yann LeCun: Questioning the Curve Itself

If Sutskever believes the curve can bend, Yann LeCun questions whether there is a single curve at all.

LeCun has long argued that the AGI and superintelligence debate rests on a flawed abstraction: the idea that intelligence is a single scalar quantity that can be increased and extrapolated.

In reality, intelligence is multi-dimensional. Systems can excel in some areas while remaining weak in others. Asking whether one system is “more intelligent” than another is often as misleading as asking whether a hammer is smarter than a screwdriver.

LeCun is particularly skeptical that scaling language models leads naturally to world understanding. Language, he argues, is a surface phenomenon. Much of human intelligence is grounded in perception and interaction with the physical world.

This reframing dissolves both runaway optimism and hard ceilings. If intelligence is not one-dimensional, there is no single curve to race along.

For organizations, the implication is quiet but radical: there is no finish line—only choices.


Synthesis: Strategy When the Future Is Powerful but Unclear

Taken together, these perspectives do not converge on a single prediction. They converge on something more useful: a way to think about action under uncertainty.

  • Wang reminds us that technology is deployed as soon as it exists.
  • Dettmers reminds us that scaling faces real limits.
  • Sutskever argues that breakthroughs can change the curve.
  • LeCun questions whether the curve metaphor even applies.

What unites them is this: the future will not be linear.

If the unequal distribution of AI’s benefits and burdens is shaped by organizations and institutions, then the most important choices are not technical. They are structural.

Three principles follow:

  1. Avoid irreversible bets.
  2. Preserve human agency where values are involved.
  3. Invest in understanding, not just usage.

These principles work whether progress accelerates, slows, or fragments.


Conclusion: What We Owe the Future

It is tempting to ask who will win the AI race. That question is simple, and it feels urgent. It is also the wrong one.

The systems we are building will be powerful whether or not we ever agree on what AGI means. What matters is not how impressive they become, but how they are woven into institutions, work, and daily life.

This essay itself was written with the help of AI. Not as a substitute for judgment, but as a tool for thinking. The responsibility for the conclusions—and for their consequences—remains human.

Used this way, AI does not diminish meaningful work. It supports it.

The future will not ask whether we were clever enough to build powerful machines.
It will ask whether we were wise enough to use them well.

About Vibe Coding

Impact of vibe coding on product design

Software development is changing rapidly, almost overnight. Consider a single week in May 2025:

  1. OpenAI announced Codex on May 16, 2025, for its $200/month Pro users (https://openai.com/index/introducing-codex/).
  2. Microsoft’s GitHub Copilot released its new coding agent on May 19, 2025 (https://bsky.app/profile/github.com/post/3lpjxvgje7s2k).
  3. Google announced a tool called Jules (jules.google.com) on May 20, 2025, making it available for free.
  4. Mistral released Devstral, an open-source model for coding agents, on May 21, 2025 (https://mistral.ai/news/devstral).

These new coding agents—along with Cursor, Lovable, Windsurf, V0, Bolt.new, and others—are all tools that support some form of “vibe coding” (a term coined by Andrej Karpathy for AI-assisted coding).

This gives rise to a lot of FUD (fear, uncertainty, and doubt) from the corporate gatekeepers. The short-term opportunity is this: in a design thinking approach, a “research prototype” that checks the basic hypotheses (who is this product for, what problem does the product solve) can be developed much faster using vibe coding.

Even with the expected “valley of disappointment” that may follow (users tend to overreact to the initial prototype, which will likely need to be rewritten from scratch), the chance of building a product that resonates with users is much higher, and it will be ready sooner, provided the same good old software process is followed: from prototype to Minimum Viable Product (MVP) to a Version 1 accepted by users.

The future AI ecosystem will be open


I was just reading the article “The walled garden cracks: Nadella bets Microsoft’s Copilots—and Azure’s next act—on A2A/MCP interoperability”, and this is how I see what’s happening in the AI landscape:

  • Anthropic: best user experience, and defined MCP (the Model Context Protocol)
  • Google: best model on all leaderboards with Gemini, and defined A2A (Agent-to-Agent)
  • Microsoft: let’s build an open AI ecosystem with MCP and A2A; released support for both A2A and MCP in VS Code (see the sketch after this list)
  • DeepSeek: after pulling off DeepSeek V3, DeepSeek released the open model DeepSeek-Prover-V2, which tackles advanced theorem proving, achieving an 88.9% pass rate on the MiniF2F-test benchmark (Olympiad/AIME-level theorems) and solving 49 out of 658 problems on the new PutnamBench. This means DeepSeek is cracking the reasoning part of LLMs.
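
To make the interoperability point slightly more concrete, here is a schematic sketch of what MCP traffic looks like. MCP is built on JSON-RPC 2.0, so any client that can emit messages of this shape can, in principle, talk to any MCP server. The tool name “search_docs” and its arguments below are invented for illustration, not part of the spec.

```python
import json

# Schematic MCP-style messages (JSON-RPC 2.0). The tool "search_docs" and its
# "query" argument are hypothetical; a real server advertises its own tools.

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",           # ask the server which tools it exposes
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",           # invoke one of the advertised tools
    "params": {
        "name": "search_docs",        # hypothetical tool name
        "arguments": {"query": "A2A and MCP interoperability"},
    },
}

print(json.dumps(list_tools, indent=2))
print(json.dumps(call_tool, indent=2))
```

The point of the sketch is simply that the protocol surface is small and vendor-neutral, which is what makes the cross-vendor support described above plausible.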

And OpenAI:

Seeing all those signals, we conclude: the AI future will be open!