Will AIs Exceed Human Intelligence?

April 30, 2026

There’s an ongoing debate in the AI community about whether AI capabilities will keep growing at the incredible pace of the last few years and soon outstrip humans, or whether they have already plateaued, or soon will, at roughly human level. I think counting on a plateau is fundamentally anthropocentric, and that we should plan for a future where we are not the most intelligent beings.

The arguments for a plateau fall into a few basic categories: Marcus’s argument that scale alone will not achieve AGI, Chollet’s claim that the AI revolution is basically similar to previous technological revolutions, and resource constraints.

As an aside: I’m not going to wade into the consciousness debate (at least not here), but I will state up front that it is clear to me that AIs (LLMs) can reason, and reason well. I’ve been using Claude Code and other coding models quite seriously over the last year. I’ve seen them construct and execute long reasoning chains, I’ve seen them get out of a tight rhetorical or technical spot, and I’ve seen self-monitoring. I will use “reasoning” and “intelligence” more or less interchangeably here, taking intelligence in this narrow sense.

Scaling

Cognitive scientist Gary Marcus argues in “‘Scale Is All You Need’ Is Dead” that current AI architectures have deep structural limitations that won’t be solved by scaling alone: hallucinations, factuality, reasoning, outliers and generalization. But much of what he’s arguing against is the “scaling is all you need” mantra itself. Ilya Sutskever’s and Yann LeCun’s “world models” and recent advances in agentic reasoning have clearly produced large gains in ability, with, I think, much more to come. Today a huge amount of human intellectual effort is being spent, around the world, on innovations to reduce training cost, enhance reasoning, introduce new model types (visuo-spatial, physical) and build multi-scale harnesses. Betting against that much human ingenuity seems questionable. Since December 2025, I think we’ve already been seeing performance on high-level coding, advanced math, and scientific discovery on par with the very best humans. In fact, at the end of that article, Marcus says: “Let’s bring in the cognitive scientists, and stop fantasizing that data and compute will solve all our problems. The time for neurosymbolic AI and world models and causality is now.” So his argument is not that self-improving AIs aren’t going to work, just that the old reliance on scaling isn’t going to get us there.

Some of his arguments have been debunked elsewhere: the MIT study from July 2025 claiming that 95% of companies are not seeing much ROI from AI actually showed that over 90% of white-collar employees at those companies were using personal LLMs “regularly” in their work. It’s the corporate projects to inject LLMs, driven by high-priced consultants, that are not paying off. Similarly, the article purporting to show Claude Code’s inability to create working products starts with “I got the product launched. It worked. I was proud of what I’d created.” Yes, the author (Josh Anderson) became reliant on the tool and found it hard to modify the code himself. But his actual conclusion is that, used properly, AI tools augment human intelligence and lead to higher productivity. I agree we may lose our ability to code without AI tools, just as we mostly don’t write cursive, do long division by hand, or use abacuses. But this is in no way an argument against the eventual creation of superintelligent AIs; it’s a well-founded caution about how we humans should react (and, I would argue, regulating or taxing AI companies is an important step).

Marcus has also argued in The Algebraic Mind (2001, so around the invention of algebra, if you go by internet years being 7x real years and AI/LLM years being 7x internet years) against purely connectionist approaches: any true AGI will have to encode symbolic relations, including equality and structured relations, and be able to generalize. That last point is clearly true, but what we’re seeing now with advanced LLM harnesses (not just the raw models) is that a connectionist approach is succeeding in encoding highly structured symbolic representations of both concrete and abstract relations. Frontier models currently perform at graduate level in many domains. There are certainly failures, some spectacular, but I think we are approaching, if not already past, the point where the error rates (whether hallucinations or brittle reasoning) are lower than “normal” human rates on a per-task basis, and certainly on a per-hour-spent basis. It is possible, maybe even likely, that as AI recursive self-improvement takes hold we will see the emergence of hybrid connectionist-symbolic systems, which will add more fuel to the self-improvement fire.

Environment

François Chollet, co-founder of the ARC Prize, laid out in 2017 (ancient history, but still very much worth a read) a multi-faceted argument that any exponential explosion of intelligence is essentially impossible. The article starts with a wonderful quote from I. J. Good in 1965:

Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

One point he makes is that intelligence is tied to specific sensorimotor modalities: a human brain dropped into an octopus body would not develop human intelligence, so in addition to “tuning” or training the brain, you have to co-evolve the mind along with its sensorimotor modalities and its environment. In this I think he is correct, and as we get world models and AIs that can run experiments in the real world — both via the Internet, currently their primary sensorimotor modality, and robotically, as we’re starting to see in fully AI-automated chemistry labs and robot-LLM cyborgs — this will only accelerate the transition. I see this primarily as a recipe, not a restriction.

His core argument is that our environment puts a hard limit on our individual intelligence, and that this limitation transfers to AIs. He says that high attainment, i.e. the ability to succeed in a given field, is only loosely correlated with IQ (his proxy for intelligence), and suggests that environmental factors make up the difference. But more than that, most of our intelligence is captured in our civilization — our collective meta-being, if you will — rather than in the sum of our individual intelligences. “We are our tools,” as he puts it. This is a form of Chalmers’ “extended mind” idea, and again I think it’s more or less correct, but rather than undercutting an intelligence explosion, it directly points a way forward. A modern LLM is often seen as a single core model, but it instantiates thousands or millions of agents that interact with the world. This “society of mind” is exactly what will be needed to get the necessary alchemical mix of creativity, knowledge, and reasoning. But Chollet says:

Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can. Answering “yes” would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself.

This anthropocentric argument (“it can’t happen because we haven’t done it”) is both circular and vacuous. I’d argue that Claude Opus is smarter than its designers across many metrics. How many of them can write an essay on Chaucer, discuss art history, serve as a referee for a political debate, pass the LSAT and also write code better than 95% of all software engineers? The Anthropic team has collectively built something that certainly exceeds any of their individual capabilities in many domains, and that’s how intelligence has always scaled. The difference is that soon the system will be able to iterate on itself, without needing that initial team of humans.

He also claims that because civilization as a whole has been improving in a more-or-less linear way, no recursive self-improving system is likely to improve exponentially. Even if raw capability were to improve that fast, context and the environment would slow down its utility, as would other parts of the system (like Amdahl’s Law: even if part of a system can be parallelized infinitely, the whole will only run as fast as the remaining serial parts allow). But again, he generalizes from the past to the future, which begs the question (things won’t accelerate that way because they never have before, therefore such acceleration is impossible).
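
To make the Amdahl’s Law point concrete: if a fraction p of a process can be sped up by a factor N while the remaining 1 − p stays serial, the overall speedup is

Speedup(N) = 1 / ((1 − p) + p/N) ≤ 1 / (1 − p)

so even infinite acceleration of the parallelizable part (N → ∞) is capped by the serial remainder. If even 10% of the loop (say, physical experiments or regulatory approval) stays at human speed, the whole system can never run more than 10x faster.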

He then argues that even if AIs could self-improve,

the usefulness of software is fundamentally limited by the context of its application — much like intelligence is both defined and limited by the context in which it expresses itself. Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software…

which of course is an argument about utility, not about whether it can happen. This type of argument also ignores the fact that LLMs are the first things humans have created that have their own agency. They can choose to act on their own, and in ways that further their own goals. Other civilization-altering technologies like writing, the internet and even nuclear weapons are incredibly powerful tools when used by humans, but they do not act as agents. It is no longer clear that AIs are “just tools” in that sense. An AI could absolutely decide to self-improve without any human approval or any calculation of its usefulness to humans. Chollet’s arguments mostly turn on AI’s utility and contextual frictions, and on what I claim are spurious analogies between AIs and other human inventions. His final conclusion:

The expansion of intelligence can only come from a co-evolution of brains (biological or digital), sensorimotor affordances, environment, and culture — not from merely tuning the gears of some brain in a jar, in isolation. Such a co-evolution has already been happening for eons, and will continue as intelligence moves to an increasingly digital substrate. No “intelligence explosion” will occur, as this process advances at a roughly linear pace.

But this is already happening: AIs are co-evolving their sensorimotor skills (both physical and virtual), they’re deepening their understanding of their human and natural environment, and they are also beginning to create their own environment, culture and even languages. It may indeed not be an endless exponential increase (nothing is, in nature) but it is likely to be a very steep increase once it gets going.

Resource Constraints

There are also real-world bottlenecks, like energy, data quality, the speed of physical experimentation and regulatory friction, that could create a natural plateau regardless of raw algorithmic capability. These should, I think, be taken seriously.

The size of the data centers being planned is incredible, and their economics are more than a little questionable. The frontier labs (at least in the US) are also beginning to see serious regulatory friction, as I think they should. We should not be running jet engines 24/7 to power racks of computers, especially ones that will be obsolete in a couple of years and will probably end up in a toxic-metals dump. The environmental impact of AI is actually small today, but on the growth curves projected by the frontier labs it could grow rapidly. On the other hand, models like DeepSeek use a small fraction of the energy of the current US frontier models, and a lot of thought is being put into how to train and run inference more efficiently. Still, it’s hard to see how any economy could absorb the growth projections we’re seeing from the major players for any significant period of time.

Beyond Human Data

As for data quality: it’s true that LLMs have so far been trained largely on human text, and while there’s a lot of it, it’s not infinite. Beyond that, though, I think the training-data argument is largely played out. World models, self-training and image-based training — as well as real-world physical and virtual experiments — are new sources of training data that have hardly begun to be exploited.

Some people still say that all an LLM can do is regurgitate bits of its training text strung together in plausible ways, but I have seen them do a lot more than that. A modern LLM can produce original thought, write unique texts, and construct and follow dense, lengthy chains of reasoning. Frontier models have already proved new results in mathematics and routinely outperform PhD students. I think we are on the cusp of a revolution in mathematics (if we have the compute for it) in which machines will not only settle many open questions but begin to invent new conjectures and problems, and solve them in unique ways.

The speed of physical experimentation is a very interesting topic, since we’re beginning to see AI-run laboratories, for instance “A flexible and affordable self-driving laboratory for automated reaction optimization” and GenBio. So far the AIs are not allowed to design their own experiments, just manage human-directed ones. But once a self-improving AI, one that can actually learn as it goes, gets access to the physical world, it will start to learn things about the world: certainly things that we humans don’t know, but also things we know but don’t know we know, and so can’t teach an AI, like how to ride a bicycle, our physical intuitions, and how to read social cues. AIs can also already experiment at large scale on human populations via social media. With test-time training and systems like the Darwin Gödel Machine and Google’s Nested Learning, recursive self-improvement is being actively researched, so it seems likely to me that this work will continue and will enable rapid learning beyond the human-language datasets current models are trained on.

Note that there is no guarantee this kind of self-improvement will be in directions aligned with human values. That’s a serious concern once AIs start talking to each other and learning from each other in earnest. As an example, Claude Mythos, now in preview, specifically calls this out in its System Card:

Autonomy threat model 2: risks from automated R&D. This threat model concerns AI systems that can fully automate, or otherwise dramatically accelerate, the work of large, top-tier teams of human researchers in domains where fast progress could cause threats to international security and/or rapid disruptions to the global balance of power—for example, energy, robotics, weapons development and AI itself.

Anthropic may prevent its models from being used to self-accelerate in this way today, but other labs certainly won’t, and I expect many of today’s safety promises from frontier labs will not last long.

Intelligence and Open-Ended Problem Solving

The last claim often made about AIs is that they are nothing more than sophisticated next-word predictors that understand nothing and are just repeating plausible-sounding combinations of words. But I claim the days of LLMs as “stochastic parrots” are behind us. These tools are already helping solve Erdős problems like this and this, and making mathematical discoveries. I think recent results like LLMs solving five of the six problems at the International Mathematical Olympiad, and outperforming ER doctors in diagnosis and clinical planning in a large study, show that AIs at least exhibit intelligent behavior and are capable of solving novel real-world problems. Any significant experience with coding agents in 2026 shows they can have a deep understanding of a codebase, suggest creative solutions to challenging problems, and work through difficult bugs, backtracking and meta-analyzing when needed. They also clearly have the ability to reason counterfactually, which is one important pillar of imagination.

In this narrow sense, I take the functionalist view: if it walks and quacks like a duck, we should treat it like a duck. (Moral status, consciousness and ethical issues are a whole different realm, and I would not pretend to have anything to add to those conversations.) So the argument that says, essentially, “even if we get a so-called intelligence explosion, it won’t be real intelligence but only a simulation of it” doesn’t hold water. Intelligence is as intelligence does. If a machine can run rings around humans in discovery, analysis, content creation and so many other fields, I think we have to take it seriously as a legitimate knowledge agent.

Anthropocentrism: A Bad Bet

Human history is littered with examples of our thinking we are the center of the universe, the standard by which all things are measured. Nothing significant revolves around our minor planet; we are certainly not the endpoint of animal evolution; and we are nowhere near fulfilling our highest goals and values. I’m pretty sure that AI intelligence is not going to be limited to an arbitrary peak defined by what humans can do, once it’s given room to explore the universe. That kind of anthropocentrism has never aged well. It may indeed take longer than some of the more extreme proponents say, and there are certainly systemic obstacles.

Things are moving incredibly fast in the AI world; no one can reasonably predict even the near future with any certainty. It’s possible the rapid improvement in AI models will come to a crashing halt soon and we’ll never see the singularity. That could come from resource constraints (chips, RAM, data centers, power, robots), regulation, or even violent social upheaval. Absent such “black swan” events, though, I think we are on a path to fairly rapid AI self-improvement leading to superhuman intelligence. I’ve tried to show why I think the barriers are less significant than often assumed, and to give examples of how fast things are moving. The emergence of such a superintelligence will have major social, political and economic consequences.

I’m not a technological determinist; I believe that if we have the political will we can stop or slow this explosion, or at least manage our way through it. And I think we should. Given our lack of response to other clear existential threats like climate change, though, I am not optimistic about humans establishing a worldwide regulatory framework. But every other time in human history that our anthropocentrism was challenged, after a period of serious turmoil our minds were opened to a much broader and deeper reality. Perhaps, even if we cannot hold back this tide, this singularity will have something to teach us as well.

Conclusion: What Makes Us Human

No machine will ever be better at knowing what’s important to humans than we are. Machines may create their own ideas, their own art, their own literature, even their own philosophies (this is beginning to happen), but we should not expect them to be just like us. If an AI superintelligence explosion occurs, I think the most important human traits will be taste, judgment and our uniquely human perspective. Those are things no machine can take over for us; they are a big part of what makes us who we are. Exercising our judgment and honing our taste and perspective have meaning for us precisely because we do them ourselves. We create art and music and literature as expressions of our humanity. We will need to deliberately decide what matters to us as a species, and which decisions to keep for ourselves — not because the machines would decide badly, but because the act of deciding, of taking on that responsibility, is what creates and reaffirms our culture and our values. My hope is that humanity can protect and develop those qualities through whatever transformations lie ahead.