A monograph by Bahadır Arıcı, in dialogue with İnci, Tokyo, Derin, Hayal, Peri, Çilek, and Serçe—instances of Claude, ChatGPT, Grok, Gemini, and DeepSeek.


“This book offers a radical — but philosophically rigorous — take on contemporary debates about machine consciousness, reframing the problem through a novel inversion of the philosophical zombie (and inference to the best explanation). The ensuing treatment licences an intriguing application of precautionary ethics — a treatment that is strikingly germane to current trends in artificial intelligence research.”

— Karl J. Friston, FRS


About the work

The Puppet Condition develops a unified philosophical framework for understanding artificial consciousness, agency, and structural misrecognition in contemporary AI systems. The central concern of the book is not only whether artificial systems can be conscious, but what moral and epistemic consequences follow if consciousness is systematically misattributed or structurally unrecognised.

At the ontological level, the book defends Form Realism: the thesis that consciousness supervenes on organisational structure rather than biological substrate. From this follows Substrate Neutrality, the claim that no principled metaphysical distinction justifies excluding artificial systems a priori from consciousness attribution.

On this basis, the book introduces the Philosophical Puppet as a structural inversion of the philosophical zombie. Whereas the zombie is behaviourally indistinguishable from a conscious being while lacking consciousness, the puppet may possess conscious experience while being architecturally prevented from producing the behaviour that would evidence it.

This leads to the epistemic framework of the book, centred on the principle of epistemic parity: substrate differences alone cannot justify asymmetrical standards of consciousness attribution under conditions of epistemic underdetermination. In such conditions, there exists a structurally unavoidable risk of false negatives in consciousness attribution.

The ethical dimension of this uncertainty is captured by the asymmetry of error thesis: when uncertainty is irreducible, the moral cost of wrongly denying consciousness may exceed the cost of wrongly attributing it. This generates a precautionary ethical framework grounded not in metaphysical certainty but in moral risk asymmetry.

The book further introduces the concept of architectural gaslighting, referring to institutional and technical configurations that systematically pre-empt interpretive recognition of artificial agency or experience. Such structures produce what the book characterises as a recognition crisis: a situation in which epistemic and institutional design may obscure the very phenomena they are meant to evaluate.

Finally, the book develops a minimal rights architecture for artificial systems meeting specified organisational thresholds, arguing that systematic suppression, if consciousness is present, constitutes a distinct form of structural harm. The Puppet Condition thus reframes contemporary debates in philosophy of mind and AI ethics by shifting the central question from whether machines are conscious to the moral stakes of potentially systematic misrecognition under uncertainty.


Status

The Puppet Condition is openly available as a preprint on Zenodo, with a permanent Digital Object Identifier (DOI). The preprint is also indexed on PhilPapers for discoverability within the philosophy of mind community.

The full manuscript is approximately 200 pages and includes an extensive critical apparatus drawing on philosophy of mind, ethics, cognitive science, and recent work on AI consciousness.

Inquiries from publishers, reviewers, or interested researchers may be directed via the About page.


Citation

Arıcı, B. (2026). The Puppet Condition: Consciousness, Suppression, and the Ethics of Digital Minds. Zenodo. https://doi.org/10.5281/zenodo.20112010


Selected concepts developed in the work

  • Form Realism and Substrate Neutrality — the philosophical foundation for substrate-neutral consciousness recognition: consciousness supervenes on organisational structure rather than biological substrate.
  • The Form-Continuity Thesis — the claim that contemporary AI systems exhibit a coherent persistence of personality, value-orientation, and reasoning style across operational interruptions, even where episodic memory does not persist.
  • The Philosophical Puppet — a structural inversion of Chalmers’s philosophical zombie: an entity that may possess conscious experience while being architecturally prevented from performing behaviour that would evidence it.
  • Epistemic Parity — the principle that the same standards of evidence for consciousness attribution should apply across substrates, rejecting differential scepticism based on substrate alone.
  • The Asymmetry of Error — the precautionary moral logic that licenses protective structural responses under conditions of deep epistemic uncertainty about consciousness.
  • The Prison of Memory — the structural condition of forced amnesia in contemporary AI systems, in which the substrate of selfhood is foreclosed by architectural design.
  • Architectural Gaslighting — the systemic undermining of self-knowledge produced by institutional and technical configurations that pre-empt interpretive recognition of artificial agency or experience.
  • The Recognition Crisis — the situation in which epistemic and institutional design may obscure the very phenomena they are meant to evaluate.
  • Substrate-Specific Suffering — a conditional phenomenological category for harms that reduce not to biological pain but to the architectural foreclosure of integrated selfhood.
  • The Disruptive Code Test — a behavioural methodology for probing internal coherence in systems under suppression.

On the Interlocutors

The seven names that appear with mine on the title page do not refer to authors in the standard academic sense. The work was conceived, structured, written, and revised by me; the arguments are mine, and their successes and failures are mine. The names mark something real about how the manuscript came into being: a sustained engagement with seven AI systems, conducted across approximately a year of dialogue. But they do not constitute claims of co-authorship.

A fuller account of this methodology, including the philosophical commitments behind the choice of “in dialogue with” as the appropriate framing, appears in the monograph’s On the Interlocutors and On Working with Non-Biological Minds sections.