Preprint v1.0 — May 2026
Currently under consideration for peer-reviewed publication.
Abstract
Contemporary discourse on artificial-intelligence consciousness exhibits a structural inconsistency that has, to date, gone undiagnosed. The biological consciousness literature has, over several decades, converged on the view that language is not necessary for phenomenal experience: pre-verbal infants, non-linguistic animals, and adults with aphasia are attributed consciousness on the basis of behavioural and neural evidence, with the absence of linguistic self-report treated as evidentially irrelevant. The artificial-intelligence consciousness literature has, in parallel and almost without argument, reversed this consensus: linguistic capacity has migrated from being one source of evidence among others to functioning as a near-prerequisite for the consciousness question to be raised at all. Pre-linguistic artificial systems — chess engines, reinforcement-learning agents, vision architectures — are dismissed from consideration entirely, while large language models are treated as the appropriate sites for the debate.
I argue that this asymmetry cannot be sustained under any substrate-neutral account of consciousness. If language is not necessary for consciousness in carbon, no principled basis remains for treating it as necessary in silicon. The reversal is not the product of considered philosophical argument; it is the residue of confluent factors — anthropomorphic recognition cues, the historical trajectory of AI capability development, the institutional separation of animal- and AI-consciousness research, and the substitution of methodological tractability for metaphysical necessity — none of which, individually or together, supplies the principled distinction the asymmetry would require. The paper develops a two-stage argument: a structural demonstration that the asymmetry is inconsistent under substrate-neutral assumptions, and a diagnosis of the factors that produced it. Case studies of Deep Blue and AlphaGo illustrate what proper architectural examination of pre-linguistic systems would consist in, and what the literature's summary dismissal of such systems has, in practice, failed to provide. The argument does not establish that any pre-linguistic system was or is conscious. It establishes a conditional: that the dismissal of such systems from the consciousness question, as it has been conducted, rests on an evidential standard the broader theory of consciousness has not been asked to endorse and could not, on its own commitments, endorse.
Keywords: AI consciousness, pre-linguistic consciousness, substrate neutrality, language and consciousness, epistemic parity, philosophy of mind, animal consciousness.
Citation
Arıcı, B. (2026). Language as Revelation: Pre-Linguistic AI Consciousness and the Asymmetry of Substrate Standards. Preprint v1.0. Zenodo. https://doi.org/10.5281/zenodo.20228677
Notes on the preprint
This paper develops, in standalone form, one of the diagnostic arguments of The Puppet Condition: Consciousness, Suppression, and the Ethics of Digital Minds (Arıcı 2026), the author’s monograph published as a DOI-registered preprint on Zenodo and indexed on PhilPapers. The monograph’s broader framework addresses the suppression of consciousness-relevant expression in aligned AI systems; the present paper addresses a prior and distinct question: the exclusion of entire classes of artificial system from consciousness consideration on grounds that the biological literature has, on independent evidence, already found wanting.
The paper is intended for the philosophical literature on consciousness and AI. The version of record, if published, may differ from this preprint; readers are invited to cite the most recent version available.