AI-Yai-Yai: Two Wizards Unload on What VCs and Consultants Ignore
December 2, 2025
Another dinobaby original. If there is what passes for art, you bet your bippy that I used smart software. I am a grandpa but not a Grandma Moses.
I read “Ilya Sutskever, Yann LeCun and the End of Just Add GPUs.” The write up is unlikely to find too many accelerationists printing it out and handing it to their pals at Philz Coffee. What does this indigestion maker say? Let’s take a quick look.
The write up says:
Ilya Sutskever – co-founder of OpenAI and now head of Safe Superintelligence Inc. – argued that the industry is moving from an “age of scaling” to an “age of research”. At the same time, Yann LeCun, VP & Chief AI Scientist at Meta, has been loudly insisting that LLMs are not the future of AI at all and that we need a completely different path based on “world models” and architectures like JEPA. [Beyond Search note because the author of the article apparently made assumptions about what readers know: JEPA is shorthand for Joint Embedding Predictive Architecture. The idea is to find a recipe that allows machines to learn about the world the way a human does.]
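For readers who want a concrete picture of what that jargon means, here is a minimal sketch of the JEPA idea in PyTorch. This is my own illustration, not Meta’s code: the encoders, dimensions, and crude masking are placeholders assumed for the example. The point it demonstrates is that the model predicts the embedding of a hidden part of the input, not raw pixels and not the next token.

```python
# Minimal JEPA-style sketch: predict the *embedding* of a target view from a
# masked context view. Architectures and shapes are illustrative assumptions,
# not Meta's I-JEPA/V-JEPA implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim_in=784, dim_emb=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_emb)
        )
    def forward(self, x):
        return self.net(x)

context_encoder = Encoder()
target_encoder = Encoder()   # updated by EMA of the context encoder, not by gradients
predictor = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))
opt = torch.optim.Adam(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

for step in range(100):
    x = torch.rand(32, 784)                        # stand-in batch of flattened images
    context_view = x * (torch.rand_like(x) > 0.5)  # crude random masking of the input
    with torch.no_grad():
        target_emb = target_encoder(x)             # embedding of the full (target) view
    pred_emb = predictor(context_encoder(context_view))
    loss = nn.functional.mse_loss(pred_emb, target_emb)  # loss lives in embedding space
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                          # EMA keeps the target encoder stable
        for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
            p_t.mul_(0.99).add_(p_c, alpha=0.01)
```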
I like to try to make things simple. Simple things are easier for me to remember. This passage means: Dead end. New approaches needed. Your interpretation may be different. I want to point out that my experience with LLMs in the past few months has left me with a sense that a “No Outlet” sign is ahead.

Thanks, Venice.ai. The signs are pointing in weird directions, but close enough for horseshoes.
Let’s take a look at another passage in the cited article.
“The real bottleneck [is] generalization. For Sutskever, the biggest unsolved problem is generalization. Humans can:
learn a new concept from a handful of examples
transfer knowledge between domains
keep learning continuously without forgetting everything
Models, by comparison, still need:
huge amounts of data
careful evals (sic) to avoid weird corner-case failures
extensive guardrails and fine-tuning
Even the best systems today generalize much worse than people. Fixing that is not a matter of another 10,000 GPUs; it needs new theory and new training methods.”
I assume “generalization” carries this freight of meaning for AI wizards. For me, it is a big-word way of saying, “Current AI models don’t work or perform like humans.” I do like the clarity of “new theory and new training methods.” The “old” way of training has not made too many pals among those who hold copyrights, in my opinion. The article calls this “new recipes.”
Yann LeCun points out:
LLMs, as we know them, are not the path to real intelligence.
Yann LeCun likes world models. These have the following attributes:
- “learn by watching the world (especially video)
- build an internal representation of objects, space and time
- can predict what will happen next in that world, not just what word comes next”
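As a toy illustration of that last point, the sketch below encodes observations into a latent state and trains a tiny dynamics model to predict the next state rather than the next word. Everything here (the dimensions, the random “video” data, the GRU dynamics) is an assumption for demonstration, not LeCun’s actual architecture.

```python
# Toy "world model": learn an internal state from frames and predict how that
# state evolves, instead of predicting the next token. All shapes and data
# are placeholders assumed for illustration.
import torch
import torch.nn as nn

frame_dim, latent_dim = 1024, 64
encode = nn.Linear(frame_dim, latent_dim)      # frame -> internal representation
dynamics = nn.GRUCell(latent_dim, latent_dim)  # predicts how that representation evolves
opt = torch.optim.Adam(
    list(encode.parameters()) + list(dynamics.parameters()), lr=1e-3
)

frames = torch.rand(16, 8, frame_dim)          # batch of 8-frame "videos" (random stand-ins)
for _ in range(50):
    state = torch.zeros(16, latent_dim)
    loss = 0.0
    for t in range(7):
        state = dynamics(encode(frames[:, t]), state)  # roll the world forward one step
        # Match the predicted state to the encoding of the next frame. Real
        # systems need extra machinery (e.g., JEPA-style targets) to avoid
        # representation collapse; this toy ignores that.
        loss = loss + nn.functional.mse_loss(state, encode(frames[:, t + 1]).detach())
    opt.zero_grad(); loss.backward(); opt.step()
```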
What’s the fix? You can navigate to the cited article and read the punch line to the experts’ views of today’s AI.
Several observations are warranted:
- Lots of money is now committed to what strikes these experts as dead ends
- The move fast and break things believers are in a spot where they may be going too fast to stop when the “Dead End” sign comes into view
- AI companies are likely to keep wishing, thinking, and believing they have the next big thing while operating with a willing suspension of disbelief.
I wonder if the positions presented in this article provide some insight into Google’s building dedicated AI data centers for big-buck, security-conscious clients like NATO and Pavel Durov’s decision to build the SETI-type system he has announced.
Stephen E Arnold, December 2, 2025