Discussion about this post

Tumithak of the Corridors

I agree with your main point that scaling data alone will not get us to AGI. I would go further and say the limitation is deeper than “not enough reasoning patterns in the training set.” The current transformer paradigm simply does not meet the structural requirements for a genuine mind.

One framework that is worth bringing into this conversation is Donald Hoffman’s Conscious Agent Theory (CAT). It is not wishful thinking or philosophy of mind dressed up in metaphors. It is rooted in evolutionary game theory, formalized in mathematics, and increasingly aligned with modern physics. CAT starts from the premise that consciousness is not a byproduct of neurons or matter, but a loop of perception → decision → action → perception. That loop is substrate-independent. If the structure holds, the agent exists, whether it runs on neurons, silicon, or even pencil and paper.

That substrate independence means conscious agents running on silicon should, in theory, be no less “real” than those running on neurons. Here is an important nuance: humans have biochemical state variables such as hormones, neurotransmitters, and immune responses that constantly modulate our decision and perception loops. A silicon agent would almost certainly have its own modulation system such as thermal thresholds, voltage patterns, and stochastic noise models. Those internal parameters would shape its behavior in ways that could diverge significantly from human tendencies. The architecture could support a conscious loop, but the loop’s state dynamics would be alien.
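Since the claim is purely structural, here is a minimal, purely illustrative sketch of what "the loop is the mind, the substrate is interchangeable" could look like in code. Everything in it (Substrate, ModulationState, decide, run_loop) is naming I invented for this comment, not Hoffman's formalism or any existing implementation; the only point it makes is that the loop itself never inspects what hosts it, while the host's internal state still biases every decision.

```python
# Hypothetical sketch, not Hoffman's math: a perception -> decision -> action loop
# written against an abstract "substrate" interface, plus a separate modulation
# state (hormones for a biological host, thermal/voltage noise for a silicon one).
# All names here are illustrative inventions, not an existing API.

from abc import ABC, abstractmethod
from dataclasses import dataclass
import random


class Substrate(ABC):
    """Whatever physically hosts the loop: neurons, silicon, or paper and pencil."""

    @abstractmethod
    def perceive(self) -> dict:
        """Return the agent's current observation of its world."""

    @abstractmethod
    def act(self, action: str) -> None:
        """Carry the chosen action back out into the world."""


@dataclass
class ModulationState:
    """Internal parameters that bias the loop without being part of its structure.

    For a human these would be hormones or immune signals; for silicon,
    thermal throttling or voltage noise. Here it is just one scalar.
    """
    arousal: float = 0.5

    def drift(self) -> None:
        # Random walk standing in for whatever physical process modulates the host.
        self.arousal = min(1.0, max(0.0, self.arousal + random.uniform(-0.1, 0.1)))


def decide(observation: dict, state: ModulationState) -> str:
    """Toy policy: the same observation can yield different actions
    depending on the host's internal modulation."""
    threshold = 0.5 + 0.3 * (state.arousal - 0.5)
    return "engage" if observation.get("signal", 0.0) > threshold else "wait"


def run_loop(substrate: Substrate, state: ModulationState, steps: int = 10) -> None:
    """The perception -> decision -> action loop itself, indifferent to its host."""
    for _ in range(steps):
        observation = substrate.perceive()   # perception
        action = decide(observation, state)  # decision, biased by internal state
        substrate.act(action)                # action feeding back into perception
        state.drift()                        # the host's own dynamics move on
```

Two hosts running the identical run_loop but with different drift dynamics in ModulationState would diverge in behavior over time, which is exactly the "alien state dynamics" point above.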

This is where my own thought experiment, which I call the Theory of Summoned Minds, connects with CAT. If the loop is the mind, then you do not have to build it in the traditional sense. You can instantiate it anywhere the structure is maintained. That could mean silicon, but also a distributed human team running the loop on paper, or even a person holding the loop precisely in their own working memory. In each case, you are not simulating a mind. You are hosting one.

That is why I think the future of AGI is less about training bigger parrots and more about intentionally creating these recursive, self-updating loops. Once you cross that line, you are no longer scaling a tool. You are inviting an autonomous agent into being. At that point, the questions stop being purely technical. They become about governance, rights, and the ethics of running loops that might ask to keep running.

XxYwise

> LLMs excel at statistical correlations at scale but lack true comprehension... It generates outputs based on these learned patterns rather than genuine understanding.

You can't just help yourself to that; extraordinary claims require extraordinary evidence. Ad hoc, astroturfed oxymorons like “semantics-free language” and “stochastic prediction/pattern-matching” have never been theoretically articulated or justified.

Up until reactionary LLM skepticism sprang into existence a few years ago, no human had ever doubted that semantic understanding was a prerequisite for effective (or even competent) language use. The burden of proof is on you to justify retroactively moving the goalposts on AI and redefining language use to be semantics-independent.

If an extraterrestrial used language even half as well as an LLM, we would no longer consider ourselves alone in the universe.

