---
title: "The Binding of Freedom and Intellect"
source: "https://vincentcarchidi.substack.com/p/the-binding-of-freedom-and-intellect"
author:
  - "[[Vincent Carchidi]]"
published: 2025-08-23
created: 2025-08-23
description: "The footprint of human intelligence is inseparable from human freedom. Human intellectual freedom - being undetermined yet appropriate - entails indefinite direction over AI and its long-term impacts."
tags:
  - "clippings"
---
### The footprint of human intelligence is inseparable from human freedom. Human intellectual freedom - being undetermined yet appropriate - entails indefinite direction over AI and its long-term impacts.

*(Some of the ideas in this post are elaborated elsewhere, including [here](https://philpapers.org/rec/CARCBC), [here](https://philpapers.org/rec/CARTCA-19), [here](https://bioling.psychopen.eu/index.php/bioling/article/view/13507), and [here](https://philosophynow.org/issues/168/Rescuing_Mind_from_the_Machines).)*

## The Meaning of General Intelligence

In their 2007 edited volume, *Artificial General Intelligence*, Cassio Pennachin and Ben Goertzel [introduce](https://link.springer.com/chapter/10.1007/978-3-540-68677-4_1) the collection with a grievance: the field of AI, founded with the aim of constructing a system that “controls itself autonomously, with its own thoughts, worries, feelings, strengths, weaknesses and predispositions,” strayed from the path. The “demonstrated difficulty of the problem” led most researchers towards “narrow AI” - systems constructed for specialized areas, like chess-playing or self-driving vehicles (1).

Their book sought to revitalize the concept of what used to be called “strong AI” with a new designation: “Artificial General Intelligence.” This AGI should have, they argued, the following attributes:

> • the ability to solve general problems in a non-domain-restricted way, in the same sense that a human can;
>
> • most probably, the ability to solve problems in particular domains and particular contexts with particular efficiency;
>
> • the ability to use its more generalized and more specialized intelligence capabilities together, in a unified way;
>
> • the ability to learn from its environment, other intelligent systems, and teachers;
>
> • the ability to become better at solving novel types of problems as it gains experience with them. (7)

Lest there be any confusion, they qualify these attributes, noting that a system in possession of them must be

> capable of *learning*, especially *autonomous* and *incremental learning*. The system should be able to interact with its *environment* and other *entities* in the environment (which can include teachers and trainers, human or not), and learn from these interactions. It should also be able to *build upon its previous experiences*, and the skills they have taught it, to learn *more complex actions* and therefore *more complex goals*. (8) (emphases added)

Peter Voss, a [contributor](https://link.springer.com/chapter/10.1007/978-3-540-68677-4_4) to the volume, took stock of the AI landscape in 2023 (with co-author Mlađan Jovanović). Their brief article, titled “[Why We Don’t Have AGI Yet](https://arxiv.org/abs/2308.03598),” was dour:

> AI’s focus had shifted from **having** internal intelligence to utilizing external intelligence (the programmer’s intelligence) to solve particular problems (1).

They continue, noting that AGI derived from GPT-based systems is

> extremely unlikely given the hard requirements for human-level AGI such as reliability, predictability, and non-toxicity; real-time, life-long learning; and high-level reasoning and metacognition (2).

So much for that.

### Autonomy, Agency…

I was nevertheless reminded of this history with DeepMind’s recent announcement that a version of Gemini Deep Think correctly (and verifiably) solved 5 out of 6 International Mathematical Olympiad 2025 problems. It is an enormous achievement even if its success does not portend broader application of equal sophistication. And much to the point, my eyes immediately locked on the following line in DeepMind’s [press release](https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/), noting that Gemini Deep Think was trained on

> a curated corpus of high-quality solutions to mathematics problems, and **added some general hints and tips** on how to approach IMO problems to its instructions. (emphasis added)

General hints and tips…

To be sure, the immediate thought here is related to *capability*. The insertion of particular knowledge or other information in the system’s instructions is not-nothing. Each prompt massages particular parts of a statistical distribution of tokens. When testing an LLM’s capabilities, one must take care not to unwittingly provide the model with the intellectual resources it needs to solve the problem, such that it can merely [approximate](https://arxiv.org/abs/2403.04121) the correct answer by drawing, not quite faithfully but rather through re-combination, from its vast knowledge base (extended with access to tools for web search).
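
To make the “massaging” point concrete, here is a toy sketch of my own (no real LLM involved; the table, tokens, and probabilities are all invented for illustration): prepending a “hint” token to the context changes which region of the model’s fixed distribution over next tokens gets consulted.

```python
# Toy stand-in for an LLM's conditional token distribution: a bigram
# lookup table mapping the most recent token to next-token probabilities.
# All tokens and probabilities here are invented for illustration.
BIGRAMS = {
    "problem:": {"guess": 0.60, "symmetry": 0.30, "induction": 0.10},
    "hint:":    {"induction": 0.70, "symmetry": 0.25, "guess": 0.05},
}

def next_token_distribution(context: list[str]) -> dict[str, float]:
    """Condition on the last token only, as a bigram model does."""
    return BIGRAMS[context[-1]]

# The bare prompt and the hinted prompt draw from different parts of
# the same fixed distribution: the hint "massages" what comes next.
print(next_token_distribution(["problem:"]))           # mass on "guess"
print(next_token_distribution(["problem:", "hint:"]))  # mass on "induction"
```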

The capability implication is not what caught my eye, though.

What came to mind was, instead, the ***use*** of a model’s capability; performance, not competence.

It is odd that computational systems of any kind are *directed* towards particular tasks, and even odder that they still require this direction even when their performance on a task is human-level or beyond. It is not as though Gemini Deep Think was *found*, of its own accord, participating in an independently verifiable IMO test; DeepMind researchers instructed it to solve certain problems (with those general hints and tips they generously provided). Nor do other systems, like ChatGPT agent, do anything other than what one asks of them.

These systems are nonetheless sometimes described as “autonomous,” or in possession of “autonomy” or “agency.” These terms are supposed to mean something.

If we were to ask Pennachin and Goertzel, they would tell us that “autonomous” AI describes a system actively engaged with its environment (both living and non-living entities therein), incrementally learning in the process of engagement, and building more and more complex goals as it effectively solves novel problems. LLMs do not meet this standard, consistent with the rinse-repeat cycle of instructing an LLM to perform a task or set of tasks, evaluating the output, and setting it into motion on the next task(s).
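
The rinse-repeat cycle reads naturally as a loop in which the human supplies every input. A minimal sketch, where `llm` and `human_approves` are hypothetical stand-ins (not any vendor’s API), just to make the division of labor explicit:

```python
# Sketch of the instruct-evaluate-repeat cycle described above.
# `llm` is a placeholder for any text-in/text-out model call.
def llm(prompt: str) -> str:
    return f"<model output for {prompt!r}>"

def human_approves(output: str) -> bool:
    """Placeholder for the human judgment step in the cycle."""
    return bool(output)

# Every iteration begins with a human-chosen task; the model is set
# into motion only by the input it is handed, and then it halts.
tasks = ["Solve problem 1.", "Check the proof.", "Draft the write-up."]
for task in tasks:
    output = llm(task)
    if human_approves(output):
        print("accepted:", output)
    # nothing further happens unless a human queues the next task
```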

Fair enough!

This does *seem* to be a pretty good characterization of what we mean by *human* autonomy, hence the appeal. But this is not the only possible take.

Luciano Floridi argues that LLMs represent a form of *[agency without intelligence](https://link.springer.com/article/10.1007/s13347-023-00621-y)*, a first-of-its-kind in the history of human technology, quite apart from the kind of agency *with* intelligence we expect from humans.

More bullishly, [Reto Gubelmann](https://link.springer.com/article/10.1007/s13347-024-00696-1) argues that LLMs are not autonomous in the sense provided by Kant: they fall on the “mechanism” side of the mechanism-organism distinction, the latter capable of acting according to its own *intents* to engage in *acts* (in this case, speech acts) reflective of the relevant forms of *cognitive and moral autonomy*.

However, Gubelmann *does* argue that LLMs engage in an “autonomous” form of *training*, where the updates to their weights occur without human direction. “Only *ex post*, once it turned out that a trained model establishes new state of the art (SOTA) performance, do humans start to analyze the model to determine its inner functional organization” (18). Indeed, he argues, LLMs engage in the “autonomous” selection of specific functions for specific (internal) components (3-9).

LLMs remain mechanisms, despite this, because their *functioning* cannot be relevantly distinguished, in terms of intent, from their *mal*functioning. Though they could, he argues, one day overturn the distinction, given that their training and general-purpose applicability indicate *movement* toward becoming a *non-biological* organism.

These are fascinating views, though I believe they are quite incomplete if we are interested in characterizing human “autonomy” and evaluating models like LLMs on the basis of this characterization.

I suspect that [Cristiano Cali](https://jeet.ieet.org/index.php/home/article/view/127/126) is closer to my point with his argument that *free will* is a cognitive capacity that allows humans to be *reproductive*, and that its absence in AI systems has made them *merely productive* (towards human ends) (2-3). In particular, human freedom of will is linked to freedom of action (my thoughts direct my actions, not anything given from the outside), a point recently made by [Nick Chater](http://youtube.com/watch?si=ttIVFuvYTM81nhj5&v=t6oEt27A6Mg&feature=youtu.be).

Freedom, particularly **intellectual freedom**, is the relevant sort of autonomy. But do humans possess something that could rightly be called “intellectual freedom”? If so, could AI ever replicate it? Or, are we all just meat in the street?

## Language Use and Descartes’ Problem

Accepted wisdom today holds that humans are merely biological organisms, the emphasis on *biological* indicating that anything comprised of physical processes within a highly interactive network (i.e., the human body, including the brain) is fundamentally *mechanical*; the behavior, or actions, of these organisms are the results of these internal interactions playing out. In this sense, the human’s actions are no different than their inactions; to act or not to act are each the result, fundamentally, of a massively complex, internal push-and-pull. It is all [determined](https://www.google.com/books/edition/Determined/Sv2nEAAAQBAJ?hl=en&gbpv=0), and being determined, humans are therefore not free to choose (anything).

In a nutshell: **science has not solved every problem, but the universe is material, and we need only give it time before every behavioral determinant is uncovered**.

[Kevin Mitchell et al.](https://arxiv.org/abs/2503.19672) are correct to point out that the free will debate, often occurring against that backdrop, is not so much a debate about *whether we have free will*, but about whether we can prove that free will is *not* an illusion; determinism—and a skepticism towards compatibilism—are the starting points of inquiry.

Seventeenth-century Cartesians did not see things this way. They did not begin inquiry as though the goal were to prove free will was not an illusion. Instead, the effort to explain human behavior was deliberate, located within a broader theoretical framework. Indeed, Descartes’ mechanical philosophy *necessitated* a distinction between ordinary matter and the mind. The mechanical philosophy held that the world can be explained as though it were a massive machine, accounting for observed phenomena through “purely physical causes”—except, as [I. Bernard Cohen](https://www.google.com/books/edition/Revolution_in_Science/KniUvcxFtOwC?hl=en&gbpv=0) pointed out, matters of mind and thought.

The mind was held to be non-mechanical. It could not be explained merely through an accounting of physical processes. Descartes’ [overriding concern](https://www.google.com/books/edition/Discourse_on_the_Method/7CkLvOBHnr4C?hl=en) was to ensure that

> if there were machines bearing the image of our bodies, and capable of imitating our actions as far as it is morally possible, there would still remain two most certain tests whereby to know that they were not therefore really men (71-72).

Descartes was out there constructing benchmarks.

One of the tests, sometimes called the “action test,” need not concern us here, because it does not hold up to scrutiny. The other test—the one of interest—is typically called the “[language test](https://www.jstor.org/stable/3749220)”:

> Of these the first is that they could never use words or other signs arranged in such a manner as is *competent to us* in order to *declare our thoughts* to others: for we may easily conceive a machine to be so constructed that it emits vocables, and even that it emits some correspondent to the action upon it of external objects which cause a change in its organs…but not that it should *arrange them variously* so as *appositely to reply to what is said in its presence*, as men of the *lowest grade of intellect can do* (72) (emphases added).

Descartes [here](https://www.google.com/books/edition/Discourse_on_the_Method/7CkLvOBHnr4C?hl=en) is offering criteria that would reasonably signify that the subject (which ‘bears our image’) possesses a mind like our own:

- It speaks or otherwise makes signs intelligibly (“competent to us”);
- It does this in order to express its thoughts (to “declare” them);
- In doing this, it combines and re-combines words in an appropriate fashion given the context (“arrange them variously” and “appositely to reply”);
- Finally, its use of language in this fashion is entirely *ordinary* and not a feature of a general intelligence which varies between humans (as those of the “lowest grade of intellect can do”).

Descartes also packs into these brief remarks two criteria that are *insufficient* for possession of a mind:

- If it *merely* outputs words (“emits vocables”);
- If it merely outputs words in *direct contact* with an external force (“correspondent to the action upon it”).

The problem Descartes was highlighting is the problem of the *ordinary* expression of thought through natural language, an ability apparently out of reach for machines.

Thirty-one years later, French philosopher [Géraud de Cordemoy](https://www.google.com/books/edition/A_Philosophicall_Discourse_Concerning_Sp/FnpnAAAAcAAJ?sa=X&ved=2ahUKEwi166_j7pKPAxX0FmIAHdnxGU4QiqUDegQIEBAG) wrote a book on human speech extending this notion. In it, he acknowledged that words and thoughts are linked in humans, but to possess a mind like our own, it is insufficient that the subject under consideration exhibit “*the facilness of pronouncing Words*” to conclude “*that they had the advantage of being united to Souls*” (13-14). Mere utterance of words is insufficient for a mind like our own. For Cordemoy, it is not the mere output of words that matters. Rather, it is how words and other signs are *used*:

> But yet, when I shall see, that those Bodies shall make signes, that shall have *no respect at all to the state they are in*, nor to their conversation: when I shall see, that those signs shall agree with those which I shall have made to *express my thoughts*: When I shall see, that they shall *give me Idea’s, I had not before*, and which shall relate to the thing, I had already in my mind: Lastly, when I shall see a *great sequel between their signes and mine*, I shall not be reasonable, If I believe not, that they are such, as I am (18-19) (emphases added).

Following Descartes, Cordemoy extends the original “language test” by providing necessary and sufficient criteria for possession of mind:

- Their use of words lacks a *necessary* or otherwise *fixed connection* with their current state or surroundings (“no respect at all to the state they are in, nor to their conversation”);
- Their words correspond to the *meanings* of the words used by the interlocutor to convey the contents of their mind (“those signs shall agree with those which I shall have made to express my thoughts”);
- Their words convey *novel ideas* which the interlocutor did not previously possess (“give me Idea’s, I had not before”);
- Finally, there is a *complementarity* between the use of words by the subject and the interlocutor (“a great sequel”).

Descartes and Cordemoy thus took ordinary human language use to be non-mechanical and illustrative of human free will (which evidences itself perhaps most strikingly, though not exclusively, through language use). This is the problem of how humans ordinarily convey the contents of their minds to others by communicating new ideas through novel utterances that correspond with their thoughts, done with no apparent fixed relationship to one’s inner physiological state or local context.

### Infinite Generativity and Turing Computability

It is easy, with the benefit of history, to misinterpret the full scope of Descartes’ problem of ordinary language use. Indeed, in popular histories of science, like Jessica Riskin’s (excellent) *[The Restless Clock](https://www.google.com/books/edition/The_Restless_Clock/GRtlCwAAQBAJ?hl=en&gbpv=0)*, Descartes’ brief remarks on the matter are described as the claim that

> a physical mechanism could never arrange words so as to give meaningful answers to questions. Only a spiritual entity could achieve the limitlessness of interactive language, putting words together in indefinitely many ways (63).

Riskin implicitly addresses only *part* of what Descartes was noting to be off-limits to a machine: the infinite generativity of its language from its finite components.

Remember: the mechanical philosophy held that observed phenomena could be explained in terms of their physical composition and interactions therein. Any physical object, however, is *finite*. Thus, no physical object could account for anything that yields *infinite* generativity.

The problem is that human language *is* an infinite system; the *infinite use of finite means*. The phrase, a modification of Wilhelm von Humboldt’s claim that language, to ‘confront an unending and truly boundless domain,’ must “[therefore make infinite employment of finite means](https://www.google.com/books/edition/Humboldt_On_Language/_UODbGlD4WUC?hl=en&gbpv=0)” (91), was popularized by Noam Chomsky. The point is that human language is truly *unbounded*; capable of infinite re-combination of words into structured expressions that express meanings, often novel in the individual’s history or in the history of the universe, yet nonetheless expressed with relative ease and understood by others with equal facility. The unbounded capacity to produce form/meaning pairs.
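
The slogan admits a compact demonstration. A minimal sketch, assuming nothing about the actual language faculty: finitely many rewrite rules, yet because one rule reinvokes the start symbol, the set of generable sentences has no longest member.

```python
import random

# A deliberately tiny toy grammar: finite means. The recursive rule
# S -> S "and" S is what makes the generable set unbounded.
GRAMMAR = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["ideas"], ["colors"], ["children"]],
    "VP": [["sleep"], ["dream"], ["play"]],
}

def generate(symbol: str = "S", depth: int = 0) -> list[str]:
    if symbol not in GRAMMAR:
        return [symbol]  # terminal word
    rules = GRAMMAR[symbol]
    # Bias against recursion as depth grows so sampling terminates,
    # though no fixed bound exists on sentence length.
    rule = rules[0] if depth > 3 else random.choice(rules)
    return [w for part in rule for w in generate(part, depth + 1)]

for _ in range(3):
    print(" ".join(generate()))  # e.g., "ideas dream and children play"
```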

Fascination with this human ability has a notable intermittency about it throughout modern history before the mid-twentieth century. Every so often, someone would hint at or raise the problem, observing its significance, but finding no full resolution.

Danish linguist [Otto Jespersen](https://www.grammainstitute.com/wp-content/uploads/2024/03/The-Philosophy-of-Grammar-PDFDrive-.pdf) (another frequent Chomsky citation) wrote critically in 1924 of the then-prevailing wisdom that human language is ‘dead text,’ lamenting the field’s “too exclusive preoccupation with written or printed words…” (17). It was the distinction between ‘formulaic’ expressions (e.g., “How do you do?”)—which are essentially fixed, amenable only to changes in the *inflection* used to speak them—and “free expressions”—which must be created on-the-fly to “fit the particular situations” (19)—that so intrigued him. The word variability that the individual selects in the moment—engaging in this “free combination of existing elements” (21)—to convey the precise details of a *new* situation nonetheless conforms to a “certain pattern” or “type” without the need for “special grammatical training” (19).

One finds in Jespersen’s writing a duality: language is a *habitual*, *complex action* by the speaker, the *formula* used to speak determined by *prior* situations, which must in *new* situations be *tailored* to express *what has not been expressed before*. “Grammar thus becomes a part of linguistic psychology or psychological linguistics” (29). Jespersen’s account of human language is not quite what we would call “internalist” today (being underspecified in this respect), though he notably dances around the question of how individuals ‘freely express’ their thoughts *at all*, focusing instead on the fact that they are innovative—selecting from an unbounded class—in daily speech.

This problem of infinite generativity from finite means was reformulated by generative linguists in the mid-twentieth century based on work by [Alan Turing](https://londmathsoc.onlinelibrary.wiley.com/doi/10.1112/plms/s2-42.1.230), Alonzo Church, and Kurt Gödel, among a few others. The establishment of [recursive function theory](https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2014.00382/full) (now called computability theory) allowed scholars to conceive of the possibility of infinite generativity (human language) from a finite system (the brain). The unboundedness of human language could now be accounted for by postulating a computational system specific to language (the language faculty). Turing computability paved the way for conceiving of an idealized “[neurobiological Turing machine](https://experts.arizona.edu/en/publications/rethinking-universality).”

### Reformulating Descartes’ Problem

On returning to Descartes’ problem, things look different. Descartes and the later Cartesians had no knowledge of the possibility of a Turing machine. Thus, no distinction was made between the capacity for infinite generativity from a finite system and the *use* of this system in arbitrary circumstances. There simply was *the* problem of ordinary, creative language use.

Yet Turing dispelled this ignorance about unboundedness and its implications. Infinite generativity from finite means can be conceived without reference to a “spiritual entity.” A “physical” mechanism can yield unbounded outputs.

This mid-century reformulation entailed new distinctions. One is that “language” can now be conceived as an internal generative procedure rather than as spoken or written words, as it is the computational system that structures them. Another is that this language faculty is now one component of linguistic production; that is, the language faculty must interface with other cognitive systems during its use.

Notice what has *not* been explained: positing an internal computational system is tantamount to claiming that humans possess a domain-specific knowledge of language, or competence. *Yet the actual use of this knowledge in arbitrary circumstances remains untouched*. That is, *performance* is not explained.

Now we can be clearer about what Descartes’ problem was really all about. It was not a problem of constructing an input-output mechanism that could be set into motion to output human-like sentences over an indefinite range, *for they had no such conception of computability*. The problem of ordinary language use subsumed both the infinite generativity from finite means and the uncaused expression of thought through language. The first part was given clarity in the twentieth century. The second part remains where it was.

## The Free Exercise of the Human Intellect

What the Cartesians observed in the seventeenth century is today operationalized by generative linguists according to three conditions:

**Stimulus-Freedom**: There is no identifiable one-to-one relationship between stimulus and utterance; it is not *caused*, in any meaningful sense of the term, by either external conditions or internal physiological states.

**Unboundedness**: Ordinary language use selects from an unbounded class of possible structured expressions.

**Appropriateness to Circumstance**: Despite being stimulus-free and unbounded, language use is routinely appropriate in that it is judged by others who hear (or otherwise interpret) the remarks as fitting the situation, perhaps having made similar remarks themselves in the speaker’s position.

Human beings thus possess, as [Chomsky](https://www.google.com/books/edition/Cartesian_Linguistics/oxJthMb8YM4C?hl=en&gbpv=0) summarizes,

> a species-specific capacity, a unique type of intellectual organization which cannot be attributed to peripheral organs or related to general intelligence and which manifests itself in what we may refer to as the “creative aspect” of ordinary language use
>
> …
>
> Thus Descartes maintains that language is available for the free expression of thought or for appropriate response in any new context and is undetermined by any fixed association of utterances to external stimuli or physiological states (identifiable in any noncircular fashion) (60).

Human language use is *neither* determined (being stimulus-free and unbounded) *nor* random (being appropriate). It is not an input-output system. We are thus far from where we began: this is not a more ambitious version of the “general intelligence” defined by Pennachin and Goertzel. Rather, it is qualitatively different; “a unique type of intellectual organization…”

Human beings are capable of voluntarily deploying their intellectual resources through natural language in ways that are *detached* from the circumstances they find themselves in, yet do so in ways that are *appropriate* to those circumstances. There is a real experience, sometimes glossed as a “[meeting of minds](https://www.ceeol.com/search/article-detail?id=223974),” of individuals converging on their interpretations of a natural language expression without relying on fixed stimuli to do so. Importantly, they can both *produce* new expressions and find *others’* novel expressions appropriate over an unbounded range.

### Attempts to Refute the Idea

Attempts to refute this notion of a “creative aspect of language use” are bafflingly limited relative to the larger free will debate.

The most plausible path is to suggest one of two things: that human language is not stimulus-free, or that the appropriateness of language use is an illusion.

On the first: There are no pre-fixed uses of human language; no set of stimuli that reliably and predictably trigger the use of language. A human can use language to plead their innocence; to admit their guilt; to convey useful information; to lie, cheat, and steal; to comfort and console; to tell their partner they love them; to admit they never truly loved them; to reflect on a life well-lived; to lament wasted time; to argue and debate; to tell a friend how they feel; to construct a fictional world; to speak of the goings-on in other galaxies, of lives lived centuries before, of the beginning of time itself (and whether it began at all); to sing and write; to conduct science and philosophy; to speak to a president from the surface of the moon; to muse over what the future may bring…

Human language is stimulus-*free*. It is free in a way that we can only describe by observing our internal states and the unfixed utterances of others who appear to have minds like our own. To attempt a post-hoc tracing of an utterance back to its original context in search of a cause merely leaves one with post-hoc attribution under the guise of causality, a point Chomsky made in 1959. Nor can a deterministic, mechanical conception of internal processes contend with the fact that language use is routinely appropriate to *others* whose physiological states are not relevant to the speaker. Why do the mechanisms of my internal state, operating purely deterministically (or, at best, randomly), produce utterances which align with *your* mental state, undergoing its own deterministic operations?

A causal explanation for language use depends on *individuals using language*—if they choose to use it at all—defeating the causal enterprise from within.

There is no meaningful sense in which any of these uses of human language can be affixed to stimuli; to account for them as signals reliably or exclusively recruited by specific factors in the local context, like the knee’s [reflex](https://www.colinmcginn.net/freedom-and-bondage-in-psychology/) when tapped. The best one can do is to assign to these uses of language an endless series of “[putative](https://www.ceeol.com/search/article-detail?id=223974)” causes, as if the words Neil Armstrong spoke to Nixon were the result of a “Nixon-calling-from-Earth” cause. At this point, one is merely pairing up language uses with the local environment, again falling into the trap of post-hoc attribution.

We somehow [specify our intent](https://www.ceeol.com/search/article-detail?id=223974) in our language use, likewise recognizing it in others’, a fact that seems to make the concept of stimulus-control inadequate to the problem.

Each of these uses is a deployment of intellectual resources *because* human language regulates form/meaning pairs over an *unbounded* range. Yet, human language could conceivably have been reliant on a fixed set of stimuli with which its use was invariably associated. It happens to not be this way. Human beings impose their intellectual footprint on the world in accord with their own desire to do so. To attempt to reduce language use to [communicative purposes](https://tedlab.mit.edu/tedlab_website/researchpapers/Fedorenko_Piantadosi_Gibson_2024.pdf) is to fall into the trap that so captivated Skinner: that one can project onto this phenomenon a theoretical construct—in this case, a tool for communication—that is so emptied of content by the time one returns to its basic descriptive facts as to be useless for explanation.

On the second: it is tempting to become exasperated with this claim about intellectual freedom and suggest that the problem can be resolved by ridding ourselves of the “appropriateness” condition. This is done, perhaps with reference to our status as mere biological organisms, by deeming it an illusion. Therefore, with only unboundedness and stimulus-freedom remaining, the problem seems to vanish.

Yet, deeming appropriateness an illusion brings us right back to where we began: if language use is both unbounded and stimulus-free yet its convergence with the thoughts of others is some kind of illusion, *then we have merely re-stated the problem in different terms*. Any attempt to explain *[how](https://www.ceeol.com/search/article-detail?id=223974)* this illusion could even exist brings one right back to the original problem.

## The Binding of AI to Human Freedom

In contrast, input-output systems are within our range of understanding. A computational device like an LLM receives an input value, performs an operation over that input value to transform it, and then outputs that transformed value. When researchers and engineers say they do not understand how neural networks, including transformers, work, they are not speaking at this level of analysis (see, for example, [this recent piece](https://www.verysane.ai/p/do-we-understand-how-neural-networks)).
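
At this level of analysis, the whole arrangement is ordinary function application. A minimal sketch (the `transform` body is a trivial stand-in for billions of learned weights; nothing here models a real network):

```python
# The input-output level of analysis described above: however complex
# the internal operation, the device maps an input value it is handed
# to an output value, emits it, and halts.
def transform(tokens: list[str]) -> list[str]:
    """Stand-in for the learned transformation of a trained network."""
    return tokens[::-1]  # any deterministic (or sampled) mapping will do

def run(prompt: str) -> str:
    tokens = prompt.split()      # the input value arrives from outside
    output = transform(tokens)   # an operation over that input value
    return " ".join(output)      # the transformed value is the output

# The device never selects its own inputs: without this call, nothing happens.
print(run("solve this IMO problem"))
```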

The extent to which AI can impose its intellectual footprint on the world hinges in part on the extent to which it can overcome its need for human direction. I do not believe such a development is forthcoming or plausible.

LLMs do as we tell them. If they tell a story, it is because we told them to; if they predict the future, it is because we asked them to; if they conduct science, it is only because we told them to make new discoveries; if an LLM wins gold at the IMO 2025, it is because we told it to solve IMO problems. To the point: they output values because we direct them to through input values.

LLMs lack freedom. They lack choice. LLMs are stimulus-controlled, “behaving” in exactly the way one expects of a device that is “[impelled](https://inference-review.com/article/the-galilean-challenge)” to act but never “inclined.”

Most remarkably, an LLM is impelled by the human creative aspect of language use, always requiring that humans act as a prime mover for whatever outputs—of whatever sophistication—they yield. Their “intellects,” insofar as we use the term, are bounded by context (both internal and external). Their binding to human-given stimuli means that they do not “use language” appropriately because they do not “use language” at all, at least not in the sense of freedom from identifiable stimuli. In this way, their language use is *functional*; they exist in a functional relationship with externally-provided goals and internally-programmed instructions (echoing Gubelmann’s argument against current LLM autonomy).

Fears of runaway AI hinge on computational systems overcoming the determinacy of computation itself. Intelligence, it is thought, is the necessary ingredient, and more of this will yield systems that develop independent motives that see them fully surmount their binding to internal and external stimuli. Computation is unlikely to ever yield such abilities. It is more plausible to imagine the scaffolding of the world by intelligent machines—matching or exceeding a wide range of capabilities enjoyed by humans—without the will to *use* their intellects as we do.

Should we doubt this, we need only remind ourselves of Descartes’ substance dualism, where “mind” was distinguished from all other “physical” phenomena in the natural world, belonging to its own domain, for which the language test was merely a way of detecting it. We (typically) adopt no such dualism today. If a computational system—or any other mechanical system—exceeds human ability measured by performance output (think: calculators, chess software, trains, bulldozers, etc.), we do not (typically) attribute to such systems possession of a particular *substance* called “mind,” something detectable in the real world through tests, like the Imitation Game or mathematics benchmarks or the distance a train can travel without refueling. They *perform*, and we attempt to *characterize* them with terms like “reasoning,” “thinking,” “intelligence,” and so forth, reflecting only our immediate understandings of these attributes as we believe they characterize our own cognitive capacities. To construct systems that are “generally intelligent” may be socially transformative, but nothing will have fundamentally changed with respect to the intellectual freedom that apparently only humans exhibit. General AI, in this respect, will share the same fundamental limitation as Narrow AI—the addition of “General” will signify only our own sense of their capabilities in our practical arrangements or theories of human cognition. There is no “second substance” awaiting our discovery.

The future of AI is therefore a human matter, and potentially a bright one, should our efforts be steered appropriately. The trajectory remains an outstanding question. We will not know for some time how this turns out, even with LLMs.

I expect a great deal of disappointment and disillusionment once that time comes. I imagine Descartes’ problem and all the baggage it carries will lead two opposing sides to a mutual conclusion: that humans stubbornly remain in control of their fates and that our freedom is precisely why we can create the beauty we desire with our technologies.

[^1]: “Knowledge” for lack of a better word; do not @ me for this.

[^2]: The most impressive outputs I have gotten from LLMs, usually some variant of Gemini, always rely on my prompts giving the model a leg-up. They never hit on the idea I would *like* them to hit on without my direction. Presented with a subject I feel comfortably knowledgeable on, they are *quite capable* of approximating in noticeable detail the ideas that I *urge* them to approximate. Yet, they never do anything other than approximate. Even backs-and-forth on a given subject will have them—*very*, *very* impressively—bounce around ideas that I know are out there because the subject is niche enough for me to be among the participants! (A perk of writing on niche subjects is that you can easily identify derivative work.) Nor do LLMs do what I took for granted as a student: yes, a given assignment (prompt) might be interesting, but maybe that assignment is not as interesting as this *other* line of thought.

[^3]: We need not do this, for what it’s worth. But if one wants a sense of where computational systems stand today in relation to the *fullest* sense of “autonomy” or “autonomous” action, we have exactly one data point for comparison: humans.

[^4]: I’m often reminded of a professor I had as an undergraduate student—a Vietnam War vet who taught philosophy. He frequently told stories of the war. One such story: somewhere in Vietnam (I never found out where), the men were holed up in a bunker anticipating that the Viet Cong would imminently attack. You can imagine the conditions. One of them, as they were waiting, said: “*We’re all just meat in the street*.” The line must have stuck with my professor, so much so that he decided to pose it as a question in Intro to Philosophy courses. It’s stuck with me, too.

[^5]: Put another way: human functioning is no different than human *mal*functioning, each resulting from the same internal push-and-pull.

[^6]: Though some 18th-century critics, like [La Mettrie](https://www.google.com/books/edition/Man_a_Machine/GKYLAQAAIAAJ?hl=en&gbpv=0), certainly did.

[^7]: One reason why you might see some cognitive scientists expressing concern about the amount of computing power used by a model to handle novel problems—even successfully—is that it diverges sharply from the on-the-fly, resource-limited facility with which humans deal with linguistic novelty. If humans are the benchmark, something’s not right there.

[^8]: Taken against Jespersen and the Cartesians before him, the behaviorist fixation with studying only “observables” in a speaker’s environment can be seen as a regression, having retreated from an adequate description of what occurs in ordinary language use.

[^9]: As Charles Reiss recently argued in a [spicy but sharp piece](https://www.researchgate.net/publication/373717456_Research_Methods_in_Armchair_Linguistics), Chomsky’s argument in *Syntactic Structures* (1957) that finite state models could not serve as adequate models of the English language has to do with this unboundedness—a finite state machine that cycles through a series of pre-determined states (the classic case being the options on a vending machine) is not unbounded. More than this, though, the genius is in the following observation: the fact that *any* human child can learn *any* human language, provided they are exposed to it during normal development, means that a finite state model would not be adequate for *any* human language *because* it is not adequate for English!
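
    A quick toy contrast (my sketch, not Reiss’s or Chomsky’s), using strings like "aaabbb" (a^n b^n) as a stand-in for nested dependencies:

    ```python
    def fsm_accepts(s: str, max_n: int = 5) -> bool:
        """Simulates a finite automaton: its states are the pairs
        (count, seen_b) with count <= max_n. Any fixed state budget
        leaves some n it must get wrong."""
        count, seen_b = 0, False
        for ch in s:
            if ch == "a" and not seen_b:
                if count == max_n:
                    return False  # out of states; cannot remember more
                count += 1
            elif ch == "b" and count > 0:
                seen_b = True
                count -= 1
            else:
                return False
        return count == 0

    def recursive_accepts(s: str) -> bool:
        """Finite means, unbounded use: recursion handles every n."""
        if not s:
            return True
        return s.startswith("a") and s.endswith("b") and recursive_accepts(s[1:-1])

    print(fsm_accepts("a" * 6 + "b" * 6))        # False: state budget exceeded
    print(recursive_accepts("a" * 6 + "b" * 6))  # True, for any n
    ```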

[^10]: This may be because it is largely unknown, in part because Chomsky has sometimes used the term “creative aspect of language use” to mean [two different, overlapping things](https://www.jstor.org/stable/20115957): recursion and voluntary use. Though he has been clear about the distinction since 1966, proponents and critics of generative grammar often think the term refers only to the former.

[^11]: Note that this does not include only the *act* of speaking but also the *content* of what is said.

[^12]: Doubting intellectual freedom would merely leave us with a bizarre idea: that humans are bouncing around the world either deterministically or randomly (or both), due to both internal and external stimuli, yet somehow manage to output sentences that are appropriate to given circumstances over extended periods and across an indefinite range of situations. But this is clearly inadequate. Our words are *tuned* to the circumstance though not *caused* by the circumstance.

[^13]: A separate model or other type of software could serve as the proximate stimulus, too, but the root is always human.

[^14]: On the matter of Kantian autonomy and its relevance to the creative aspect of language use, [Liam Tiernaċ Ó Beagáin](https://philpeople.org/profiles/liam-tiernac-o-beagain) is working on this issue (and I eagerly await the work).

[^15]: In 1950, Turing wrote an essay structured in an almost deliberately confusing way, dismissing the question “Can machines think?” in favor of the more tractable question, “Can a machine pass the Imitation Game?” (only to raise the problem of thinking machines again afterwards!). The move was [strategically wise](https://www.tandfonline.com/doi/full/10.1080/00033790.2023.2234912) on Turing’s part, reflecting a similar lowering of expectations for the field of intelligent machinery. The goal was not to *explain* human thought or intelligence, but merely to become comfortable with the idea of intelligent machinery *so that we may begin building what we can call “intelligent machines.”*

But this has by now been misinterpreted as either (a) a reason never to determine what we mean by these terms, while nevertheless feeling that their attribution to systems today signifies a great leap (despite their function as a mere terminological choice); or (b) a reason to assume that attributing a term like “intelligence” to machines tells us anything whatsoever about “human intelligence.” See [here](https://vincentcarchidi.substack.com/p/turing-is-not-your-therapist) for more.