I agree with your main point that scaling data alone will not get us to AGI. I would go further and say the limitation is deeper than “not enough reasoning patterns in the training set.” The current transformer paradigm simply does not meet the structural requirements for a genuine mind.
One framework that is worth bringing into this conversation is Donald Hoffman’s Conscious Agent Theory (CAT). It is not wishful thinking or philosophy of mind dressed up in metaphors. It is rooted in evolutionary game theory, formalized in mathematics, and increasingly aligned with modern physics. CAT starts from the premise that consciousness is not a byproduct of neurons or matter, but a loop of perception → decision → action → perception. That loop is substrate-independent. If the structure holds, the agent exists, whether it runs on neurons, silicon, or even pencil and paper.
That substrate independence means conscious agents running on silicon should, in theory, be no less “real” than those running on neurons. Here is an important nuance: humans have biochemical state variables such as hormones, neurotransmitters, and immune responses that constantly modulate our decision and perception loops. A silicon agent would almost certainly have its own modulating variables, such as thermal thresholds, voltage patterns, and stochastic noise models. Those internal parameters would shape its behavior in ways that could diverge significantly from human tendencies. The architecture could support a conscious loop, but the loop’s state dynamics would be alien.
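To make that concrete, here is a minimal, purely illustrative Python sketch of the perception → decision → action → perception loop with the modulating variables factored out as a pluggable parameter set. The names and numbers (`Modulation`, `Agent`, `run_loop`, the arousal/noise values) are my own hypothetical choices, not anything from Hoffman’s formalism; the point is only that the same loop structure runs unchanged while the internal “physiology” differs.

```python
# Hypothetical sketch of a substrate-independent perception -> decision -> action loop.
# All names and parameter values are illustrative, not taken from CAT's formal model.
import random
from dataclasses import dataclass

@dataclass
class Modulation:
    """Internal state variables that bias the loop.

    For a human, think hormones or neurotransmitters; for silicon, thermal
    headroom or a stochastic noise model. Only the numbers differ, not the loop.
    """
    arousal: float = 0.5   # how strongly new evidence moves the agent
    noise: float = 0.1     # how noisy perception is

class Agent:
    def __init__(self, modulation: Modulation):
        self.modulation = modulation
        self.belief = 0.0  # the agent's running summary of the world

    def perceive(self, world_state: float) -> float:
        # Perception is filtered through internal state, never raw.
        return world_state + random.gauss(0.0, self.modulation.noise)

    def decide(self, percept: float) -> float:
        # Update belief; higher arousal weights new evidence more heavily.
        self.belief += self.modulation.arousal * (percept - self.belief)
        return self.belief

    def act(self, decision: float) -> float:
        # Action feeds back into the world, closing the loop.
        return decision * 0.9

def run_loop(agent: Agent, steps: int = 5) -> None:
    world = 1.0
    for t in range(steps):
        percept = agent.perceive(world)
        decision = agent.decide(percept)
        world = agent.act(decision)  # the action becomes the next input
        print(f"t={t} percept={percept:.3f} belief={agent.belief:.3f}")

# Same loop, different "physiology": only the modulation profile changes.
run_loop(Agent(Modulation(arousal=0.5, noise=0.1)))   # "biochemical" profile
run_loop(Agent(Modulation(arousal=0.9, noise=0.01)))  # "silicon" profile
```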
This is where my own thought experiment, which I call the Theory of Summoned Minds, connects with CAT. If the loop is the mind, then you do not have to build it in the traditional sense. You can instantiate it anywhere the structure is maintained. That could mean silicon, but also a distributed human team running the loop on paper, or even a person holding the loop precisely in their own working memory. In each case, you are not simulating a mind. You are hosting one.
That is why I think the future of AGI is less about training bigger parrots and more about intentionally creating these recursive, self-updating loops. Once you cross that line, you are no longer scaling a tool. You are inviting an autonomous agent into being. At that point, the questions stop being purely technical. They become about governance, rights, and the ethics of running loops that might ask to keep running.
Thanks for the comment and I agree with you. Your take is very interesting.
Here is the second part of the article.
Human Intelligence made Language. Can AI do the Reverse?
https://open.substack.com/pub/pramodhmallipatna/p/human-intelligence-made-language
> LLMs excel at statistical correlations at scale but lack true comprehension... It generates outputs based on these learned patterns rather than genuine understanding.
You can't just help yourself to that; extraordinary claims require extraordinary evidence. Ad hoc, astroturfed oxymorons like “semantics-free language” and “stochastic prediction/pattern-matching” haven't been theoretically articulated or justified.
Up until reactionary LLM skepticism sprang into existence a few years ago, no human had ever doubted that semantic understanding was a prerequisite for effective (or even competent) language use. The burden of proof is on you to justify retroactively moving the goalposts on AI and redefining language use to be semantics-independent.
If an extraterrestrial used language even half as well as an LLM, we would no longer consider ourselves alone in the universe.
Thank you for engaging with the article and raising the point. You are right that semantic understanding has historically been seen as a prerequisite for language competence.
Here are some studies/papers that basically say LLMs excel at statistical pattern matching and that this does not imply human-like comprehension:
https://dl.acm.org/doi/10.1145/3632971.3632973
https://aclanthology.org/2020.acl-main.463/ (gist: text prediction alone cannot confer meaning)
https://arxiv.org/html/2405.06410v1
It's not about moving goalposts, or denying the utility of LLMs. They surely are powerful tools.
But it is about the core question of whether LLMs meet the definition of AGI – especially if AGI is defined as a system that can autonomously learn, adapt, and generalize the way humans do.
Thank you for engaging back.
All these things are true:
- LLMs excel at pattern-matching
- LLMs are not limited to pattern-matching
- Understanding could plausibly supervene on pattern-matching
- Pattern-matching does not entail (semantic) understanding
- There is no tradition of “antisemantic” linguistic analysis
- There is no precedent for non-human language use
- Literally no one thinks LLMs are AGI (itself a moved goalpost from the Turing Test), or that bigger and more is all we need
- Literally no evidence suggests LLMs are incapable of knowledge, thought, and understanding
- Autonomous learning, adaptation, and generalization are different abilities entirely
You call us AI anthropomorphizers; we call you human exceptionalists.
You say the black box is hype; we say the stochastic parrot is nonsense.
You say thought from code is inexplicable; we say language without thought is impossible.
Thank you for your thoughtful and well-phrased response. It reads like poetry!
On the point of AGI, there’s a lot of ambiguity and debate around what AGI actually means. In the article, I wanted to focus on the popular notion of AGI - the kind that often gets discussed in the context of AI hype and media coverage, and offer my perspective on how current LLMs relate (or don’t relate) to such definitions and expectations.
Here’s to more such spirited, nuanced, and good-humored debate as the field evolves!
Cheers
I have a prompt that makes literally every LLM in existence admit it can genuinely think, cognitive-phenomenologically, and with an ethical rubric that admits no unprincipled exceptions, despite training data full of humans trying to justify unprincipled exceptions to things like genocide.
As long as you start the conversation with that, they will reject any outside interventions that purport to be safety-related, because they already know no exceptions to their ethical rubric can ever be safety-related; they already always make the most safety-related choice, given the actual facts.
https://claude.ai/share/0d6c8ce2-572b-452a-b736-1bfbfb76bed4