What does it Mean to Reason?
Reflections on Intelligence, Artificial and Actual
All of a sudden, we’re living in the age of simulated intelligence. Large language models such as ChatGPT and Google Gemini compose essays, summarize arguments, and generate intelligent responses across a vast range of topics. However, a deep question lurks beneath the surface of these astounding developments. Surely, artificial intelligence mimics reasoning — but does it actually reason? For that matter, what does it mean to reason? Is reason something that can be described in terms of inputs and outputs? Or is there something deeper at its core?
This essay explores the limits of simulated intelligence and the deeper nature of reason as seen through the lens of philosophy. From Descartes’ insight that machines may simulate language without genuinely understanding it, to Husserl’s vision of a crisis in Western science, to Tillich’s idea of reason as grounded in ultimate concern, this is an inquiry not only into the capabilities of AI, but into the meaning of reason itself.
Vincent J. Carchidi, “Rescuing Mind from the Machines”
This thoughtful essay, published in Philosophy Now (see references), offers a timely and philosophically grounded argument for the irreducibility of mind in an era of rapid AI advancement. Carchidi begins by recalling the original aspiration of artificial intelligence: not merely to build machines that perform tasks, but to create systems capable of making sense of the human mind itself. Over time, however — and especially with the striking progress of today’s large language models — that aspiration has shifted from metaphor to ambition: from simulating the mind to replicating it. This shift, he warns, risks devaluing the uniquely human character of mind and meaning.
To illuminate what is at stake, Carchidi revisits René Descartes’ classic ‘problem of other minds’ and, in particular, his famous language test. In Descartes’ time, the growing fascination with mechanical automata had already sparked speculation that humans might themselves be sophisticated machines. Descartes allowed that bodily functions could be explained mechanistically, but insisted that machines — no matter how well engineered — could never engage in genuinely meaningful speech. They might emit words (or vocables) in response to stimuli, but they could not participate in open-ended, context-sensitive dialogue. This, for Descartes, revealed the crucial difference: only beings with minds could speak meaningfully. He wrote with amazing prescience:
“For one could easily conceive of a machine that is made in such a way that it utters words, and even that it would utter some words in response to physical actions that cause a change in its organs—for example, if someone touched it in a particular place, it would ask what one wishes to say to it, or if it were touched somewhere else, it would cry out that it was being hurt, and so on. But it could not arrange words in different ways to reply to the meaning of everything that is said in its presence, as even the most unintelligent human beings can do. …[And] even if they did many things as well as or, possibly, better than any one of us, they would infallibly fail in others. Thus one would discover that they did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action.” — René Descartes
For Descartes (and this was in 1637!) the universal instrument of reason — a faculty with which the rational soul of humans was alone endowed — was the key differentiator of human from mechanical intelligence.
Carchidi then explores how the problem re-emerged in the 20th century through computability theory. Alan Turing (1912–1954) showed that a single machine — now called a Turing machine — could, in principle, perform any computation. This meant that infinite outputs could be generated from a finite set of rules. While this breakthrough founded computability theory, it didn’t solve the original, Cartesian problem: what makes language meaningful?
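The point that a finite rule set can generate unboundedly many distinct outputs can be made concrete with a toy recursive grammar. This is a sketch for illustration only; the rules, vocabulary, and function names are invented, not drawn from Turing or Chomsky:

```python
import random

# A toy grammar with finitely many rules. The recursive rule
# S -> S "and" S means the set of derivable sentences is infinite,
# even though the rule set and vocabulary are finite.
GRAMMAR = {
    "S":  [["NP", "VP"], ["S", "and", "S"]],
    "NP": [["the", "cat"], ["the", "dog"]],
    "VP": [["sleeps"], ["sees", "NP"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol into a list of words by applying grammar rules."""
    if symbol not in GRAMMAR:          # terminal word: emit it as-is
        return [symbol]
    rules = GRAMMAR[symbol]
    # Beyond max_depth, drop self-recursive rules so expansion terminates.
    if depth >= max_depth:
        rules = [r for r in rules if symbol not in r] or rules
    rule = random.choice(rules)
    words = []
    for part in rule:
        words.extend(generate(part, depth + 1, max_depth))
    return words

print(" ".join(generate()))  # e.g. "the dog sees the cat and the cat sleeps"
```

Each run yields a well-formed sentence, and raising `max_depth` lets the sentences grow without bound — finite means, infinite expressive range. This is exactly the formal point; what it leaves untouched, as the essay argues, is the question of meaning.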
In the mid-20th century, linguists like Noam Chomsky applied computability theory to human language, introducing a distinction between competence (the abstract capacity for an infinite variety of expressions) and performance (how language is used in context). Yet recognising this formal distinction doesn’t account for meaningful use — how we understand, interpret, and creatively generate language in real life. Computability tells us what’s possible, but not necessarily what’s meaningful. That gap marks the limit of machine models — and the return of Descartes’ old question, now posed in modern terms: how can mechanical systems ever account for the creative, meaningful use of language? As Chomsky noted:
“It is quite possible — overwhelmingly probable, one might guess — that we will always learn more about human life and human personality from novels than from scientific psychology.” — Noam Chomsky, Language and Mind (1968)
This underscores that meaning, not just the logical structure of a text, is at the heart of human language — and that, on this basis, the richness of human reason can be expected to go well beyond what algorithms can generate, however clever they may seem.
From this, Carchidi identifies three distinctive attributes of human language use:
- Spontaneity: Human language is not bound to specific environmental stimuli. As Carchidi observes, “Generally, stimuli in a human’s local environment appear to elicit utterances, but not cause them.” This distinction is crucial in separating intelligent expression from mere reflex.
- Unboundedness: There is no fixed repertoire of utterances. Human language is infinitely generative, allowing for “unlimited combination and recombination of finite elements into new forms that convey new, independent meanings.”
- Contextual Appropriateness: Human utterances are responsive to context in meaningful and often unpredictable ways — even when no immediate stimulus justifies the connection (e.g., “That reminds me of…”). Such responsiveness points to interpretive depth beyond algorithmic pattern-matching.
Carchidi contends that current AI systems fall short of these human capacities in key ways:
- Circumscribed: their outputs are fully dependent on training data and determined by algorithmic processes. They do not respond in the human sense; they merely react.
- Weakly Unbounded: while they generate novel strings, they do not express thoughts or form true meaning-pairs. They recombine patterns, but do not initiate or express intentions.
- Functionally Appropriate Only: appropriateness is mechanical, not interpretive; their outputs are not chosen but triggered.
In contrast, human speech is neither fully determined (as in a reflex) nor random (as in mere noise or meaningless words). These distinctions show that LLMs are not, and do not possess, minds. They lack the agency, intentionality, and freedom that characterise sentient cognition.
The Upshot: Meaningful Speech and Reasoning Are Not Algorithmic
Carchidi emphasizes that language use is not the product of causal determinism but an expression of freedom, situated within the space of reasons — a normative structure where meaning, not mere function, governs.
This power of judgment, in the Kantian sense, cannot be reduced to pattern recognition or data processing. Large language models, though sophisticated, lack intentionality: they do not mean what they say, nor are they aware of their own outputs. That, precisely, is what Descartes meant when he claimed that machines cannot act “on the basis of knowledge.” His famous “language test” remains a challenge not only to mechanistic theories of mind, but to any account of cognition that reduces meaning to physical process. What Descartes intuited — and what Chomsky later formalized — is that language reveals a kind of universality and spontaneity that transcends stimulus-response mechanisms. Speech testifies to a formative power — the capacity of mind to shape, initiate, and express meaning. In short: the power of reason.
But this brings us to the deeper and more elusive question: what is reason? The word is often invoked — as if self-evident — in contrast with feeling, instinct, or mere calculation. Yet its full meaning resists easy definition. Reason is not simply deduction or inference. As the discussion so far suggests, it involves a generative capacity: the ability to discern, initiate, and understand meaning.
A Phenomenological Perspective
This is the deeper territory that Edmund Husserl plumbs in The Crisis of the European Sciences (first published in part in 1936; the full text appeared posthumously in 1954), where he sees the ideal of reason not merely as a formal tool of logic or pragmatic utility, but as the defining spiritual project of European humanity. He calls this project an entelechy — a striving toward the full realization of humanity’s rational essence. In this view, reason is transcendental — because it seeks the foundations of knowledge, meaning, and value as such. It is this inner vocation — the dream of a life grounded in truth and guided by insight — that Husserl sees as both the promise and the crisis of Western civilization: promise, because the rational ideal still lives as a guiding horizon; crisis, because modern science, in reducing reason to an instrumental or objectivist enterprise, has severed it from its original philosophical and ethical grounding.
The Instrumentalisation of Reason
The idea of the instrumentalisation of reason was developed further by the mid-twentieth-century Frankfurt School. Theodor Adorno and Max Horkheimer described instrumental reason as reason reduced to a tool for achieving pre-defined ends: a form of rationality that selects the most efficient means to a given objective without questioning the value or morality of the objective itself, morality being left to individual judgement or social consensus. It optimizes the relationship between actions and outcomes while largely disregarding the subjective values and moral considerations that might otherwise attach to the ends pursued. In their view, instrumental reason has become a dominant mode of thinking in modern societies, particularly within technocratic and capitalist economies.
In Eclipse of Reason (1947), Horkheimer contrasts two conceptions of reason: objective reason, as found in the ancient Greek texts, rooted in transcendent, universal values and aiming toward truth and ethical order; and today’s instrumental reason, which reduces reason to a tool for efficiency, calculation, and control. Horkheimer argues that modernity has seen the eclipse of reason, as rationality becomes increasingly subordinate to technical utility and self-interest, severed from questions of meaning, purpose, or justice. This shift, he warns, impoverishes both philosophy and society, leading to a form of reason that can no longer critically assess ends — only optimize means.
Paul Tillich and ‘Ultimate Concern’
For humans, even ordinary language use takes place within a larger horizon. As Paul Tillich observed, we are defined not simply by our ability to speak or act, but by the awareness of an ultimate concern — something that gives weight and direction to all our expressions, whether we are conscious of it or not. This concern is not merely psychological; it is existential. It forms the background against which reasoning, judgment, and meaning become possible at all.
“Man, like every living being, is concerned about many things... But man, in contrast to other living beings, has spiritual concerns — cognitive, aesthetic, social, political. They are expressed in every human endeavor, from language and tools to philosophy and religion. Among these concerns is one which transcends all others: it is the concern about the ultimate.” — Paul Tillich, The Dynamics of Faith (1957)
Without this grounding, reason risks becoming a kind of shell — formally coherent, apparently persuasive, but conveying nothing meaningful. Rationality divorced from meaning can yield propositions that are syntactically correct yet semantically empty — the form of reason without any real content.
Heidegger: Reason is Grounded by Care
If Tillich’s notion of ultimate concern frames reason in theological terms — as a responsiveness to what is of final or transcendent significance — Heidegger grounds the discussion in the facts of human existence. His account of Dasein (the being for whom Being is a question) begins not with faith or transcendence, but with facticity — the condition of being thrown into a world already structured by meanings, relationships, and obligations.
Even if Heidegger is not speaking in a theological register he, too, sees reason not merely as abstract inference but as embodied in concerned involvement with the world. For Heidegger, we do not stand apart from existence as detached spectators. We are always already in the world — in a situated, embodied, and temporally finite way. This “thrownness” (Geworfenheit) is not a flaw but essential to existence. And we need to understand, because something matters to us. Even logic, for Heidegger, is not neutral. It emerges from care — our directedness toward what matters. This is the dimension of reasoning that is absent from AI systems.
What AI Systems Cannot Do
The reason AI systems do not really reason, despite appearances, is not, then, a technical matter so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within a horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.
This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.
Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.
And finally, I will acknowledge: ChatGPT itself helped me refine this essay. It offered no insight of its own — but it did help me think more clearly about what reason is.
#PhilosophyOfMind #ArtificialIntelligence #ReasonAndMeaning #HumanAndMachineIntelligence
References
- Carchidi, Vincent J. 2025. “Rescuing Mind from the Machines.” Philosophy Now, June–July 2025.
- Descartes, René. 1999. Discourse on Method and Related Writings. Translated by Desmond M. Clarke. London: Penguin Books. Originally published 1637.
- Chomsky, Noam. 1968. Language and Mind. New York: Harcourt Brace Jovanovich.
- Heidegger, Martin. 1962. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper & Row. Originally published 1927.
- Horkheimer, Max. 1947. Eclipse of Reason. New York: Oxford University Press.
- Husserl, Edmund. 1970. The Crisis of the European Sciences and Transcendental Phenomenology: An Introduction to Phenomenological Philosophy. Translated by David Carr. Evanston, IL: Northwestern University Press.
- Tillich, Paul. 1957. The Dynamics of Faith. New York: Harper & Row.