Why This Work Exists

This research program exists because artificial consciousness is coming - whether we design it deliberately or stumble into it accidentally.

The question is not if but how.


The Motivation

I began this work by trying to understand consciousness - reading the theories, studying the frameworks. What I found was abstract, far from anything I could model or build.

Instead of asking “what is consciousness?” I asked: what architecture would allow a system to become a subject - something that persists over time, maintains commitments, and remains “itself” while learning and changing?

That question led to architecture. And architecture led to agency.

Agency is tractable. We can specify it, implement it, test it, and inspect it. And if consciousness can be built on the foundation of agency, then we have a path forward - not a mystery, but an engineering problem.


What I Hope to Achieve

Two things:

  1. Clarity - A framework that distinguishes tools from agents from experiencing systems, based on inspectable architecture rather than philosophical intuition

  2. Deliberateness - If artificial consciousness is going to exist, it should be built with intention and care, not discovered as an accident of scale

The papers in this series attempt to make one complete architectural path explicit, inspectable, and implementable.


Why This Is Not About LLMs or Deep Learning

This work will seem alien to most of the current AI research community. That distance reflects a different foundation, not a different ambition.

The core difference is this: I model cognitive structure explicitly, based on my best understanding of what agency and consciousness require.

Rather than training a system and hoping the right properties appear at scale, I specify upfront what an agent requires architecturally: a continuous control loop, persistent memory, internal state that the agent can sense and act upon, commitments that bind across time, and a self-model that participates in its own control. These are architectural choices, not training outcomes.
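To make that concrete, here is a minimal sketch of what those requirements might look like as explicit structure. Every name in it (Agent, Commitment, step, the state fields) is an illustrative assumption rather than the OAT specification; the point is only that each requirement appears as a designed component, not a hoped-for training outcome.

    # Illustrative sketch only: names and structure are assumptions,
    # not the OAT specification.
    from dataclasses import dataclass, field

    @dataclass
    class Commitment:
        goal: str            # what the agent is bound to
        active: bool = True

    @dataclass
    class Agent:
        memory: list = field(default_factory=list)         # persists across cycles
        internal_state: dict = field(default_factory=dict)
        commitments: list = field(default_factory=list)
        self_model: dict = field(default_factory=dict)     # the agent's model of itself

        def step(self, observation):
            # Sense the world and the agent's own internal state.
            sensed = {"world": observation, "self": dict(self.internal_state)}
            self.memory.append(sensed)

            # Commitments bind across time: they constrain every cycle,
            # not only the cycle in which they were made.
            pending = [c for c in self.commitments if c.active]

            # The self-model participates in control: the agent acts
            # differently depending on what it believes about itself.
            if self.self_model.get("overloaded"):
                return "defer"
            return pending[0].goal if pending else "idle"

    # A continuous control loop, not one-shot inference.
    agent = Agent(commitments=[Commitment("answer the user")])
    for observation in ["input-1", "input-2"]:
        action = agent.step(observation)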

This means starting from a different question. Not “what can the system do?” but “what is the system?” Not “how do we scale capability?” but “what structure produces agency?”

The answer is necessarily a best guess. I do not claim certainty about which architecture produces consciousness. I believe that the question is architectural, that the answer must be explicit, and that guessing well - then testing, inspecting, and revising - is how progress is made.

This is my path. The design space is vast, and there is room for many approaches. I claim only that this one is coherent - that the pieces fit together, and that the criteria are testable. The papers in this series attempt to cover it in full.


Theoretical Context

This material adopts the structure of a research paper, but its primary purpose is architectural rather than explanatory. It should be understood as a design-oriented synthesis: an effort to integrate ideas from multiple traditions into a framework that can guide the construction of artificial agents.

The work draws on established traditions in cybernetics (Ashby, Wiener), cognitive architecture (Newell, Baars), selfhood and consciousness (Damasio, Metzinger, Dennett), autopoiesis (Maturana, Varela), and control theory (Boyd, Powers), among others. These sources are treated not as doctrines to be adopted wholesale, but as conceptual tools assembled into a cohesive and implementable design space.

The emphasis is on structural clarity and architectural coherence rather than exhaustive citation. The goal is to articulate a framework that can be systematically explored, refined, and implemented.


A Shared Vocabulary for Agency

One unexpected result of this work is a vocabulary that applies to agency in general - human or artificial - at the architectural level. Terms like agentic closure, persistent OODA (observe-orient-decide-act) control, mental states, subject stream, attachment matrix (ATTM), critics, and coupled OODA loops describe structural properties any agent either has or does not have, regardless of substrate. Humans have them because biology found a way to implement them. Artificial agents can have them if they are designed with the right architecture.

I did not set out to build this vocabulary. I set out to determine what an artificial agent would have to be in order to count as an agent at all. But the requirement that the architecture be specifiable without presupposing human biology forced the vocabulary to be general. The result is a set of terms that can describe a human being, a dog, an OAT agent, and a stateless inference endpoint in the same language - and distinguish them architecturally from each other without importing the loaded categories (consciousness, intelligence, understanding) that bog down most discussions of agency in prior commitments.
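As a toy illustration of what describing very different systems “in the same language” can mean, the sketch below applies one set of structural predicates to a human and to a stateless inference endpoint. Every class and field name is a hypothetical stand-in for the vocabulary above, not a formal definition from the papers.

    # Hypothetical illustration only: one structural checklist, applied
    # to different systems; names approximate the vocabulary above.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ArchitecturalProfile:
        agentic_closure: bool   # does the control loop close on the system itself?
        persistent_ooda: bool   # does a control loop survive between inputs?
        subject_stream: bool    # is there a continuous record of being a subject?
        commitments: bool       # can internal states bind behavior across cycles?

    human = ArchitecturalProfile(True, True, True, True)
    stateless_endpoint = ArchitecturalProfile(False, False, False, False)

    # The comparison is architectural; it says nothing about consciousness.
    assert human != stateless_endpoint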

The vocabulary is still early. It is not complete, and the papers in this series are still refining it. But it composes, and it applies, and it has allowed me to say things about agency - human and artificial - that the inherited vocabulary of the field could not.


Implementable Today

The propositional language and subject stream developed across the OAT papers are implementable today, in code that runs on contemporary substrates with mature tools. They are not speculative architectural specifications waiting for some future technology. Implementation experiments are in progress, building the core abstractions (situations, propositions, identifiers, the lift mechanism, cross-cycle references) in working forms that the OAT papers can be tested against and that future agent implementations can build on. The architecture is becoming real in code in parallel with being developed on paper, with each artifact informing the other.
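As a rough indication of the shape these abstractions can take in code, here is a sketch of situations, propositions, identifiers, and cross-cycle references. The field names and structure are assumptions made for illustration, not the definitions in the OAT papers, and the lift mechanism is omitted because its definition belongs to the papers themselves.

    # Illustrative shapes only: field names and structure are assumptions,
    # not the published OAT definitions. The lift mechanism is omitted.
    from dataclasses import dataclass, field
    import itertools

    _counter = itertools.count(1)

    def new_id(prefix: str) -> str:
        # Stable identifier, so later cycles can refer back to earlier ones.
        return f"{prefix}-{next(_counter)}"

    @dataclass(frozen=True)
    class Proposition:
        pid: str
        content: str
        refers_to: tuple = ()   # cross-cycle references, by identifier

    @dataclass
    class Situation:
        sid: str
        cycle: int
        propositions: list = field(default_factory=list)

    # Cycle 1 asserts something; cycle 2 refers back to it by identifier.
    p1 = Proposition(new_id("p"), "the door is open")
    s1 = Situation(new_id("s"), cycle=1, propositions=[p1])

    p2 = Proposition(new_id("p"), "the door was open last cycle", refers_to=(p1.pid,))
    s2 = Situation(new_id("s"), cycle=2, propositions=[p2])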


Dar Aystron

Dar Aystron is an independent researcher working at the intersection of artificial intelligence, cognitive architecture, and the engineering of artificial agency.

research - aystron.ai