Why This Work Exists

This research program exists because artificial consciousness is coming - whether we design it deliberately or stumble into it accidentally.

The question is not if but how.


The Motivation

I began this work by trying to understand consciousness - reading the theories, studying the frameworks. What I found was abstract, far from anything I could model or build.

Instead of asking “what is consciousness?” I asked: what architecture would allow a system to become a subject - something that persists over time, maintains commitments, and remains “itself” while learning and changing?

That question led to architecture. And architecture led to agency.

Agency is tractable. We can specify it, implement it, test it, and inspect it. And if consciousness can be built on the foundation of agency, then we have a path forward - not a mystery, but an engineering problem.


What I Hope to Achieve

Two things:

  1. Clarity - A framework that distinguishes tools from agents from experiencing systems, based on inspectable architecture rather than philosophical intuition

  2. Deliberateness - If artificial consciousness is going to exist, it should be built with intention and care, not discovered as an accident of scale

The papers in this series attempt to make one complete architectural path explicit, inspectable, and implementable.


Why This Is Not About LLMs or Deep Learning

This work will seem alien to most of the current AI research community. That distance reflects a different foundation, not a different ambition.

The core difference is this: I model cognitive structure explicitly, based on my best understanding of what agency and consciousness require.

Rather than training a system and hoping the right properties appear at scale, I specify upfront what an agent requires architecturally: a continuous control loop, persistent memory, internal state that the agent can sense and act upon, commitments that bind across time, and a self-model that participates in its own control. These are architectural choices, not training outcomes.
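As a rough illustration of what "specified upfront" means here, the list above can be sketched as explicit program structure. This is a hypothetical toy, not the architecture from the papers: every name (Agent, Commitment, tick, the energy variable) is invented for illustration. The point is only that each requirement appears as an inspectable component rather than an emergent property.

```python
# Toy sketch of the architectural requirements named above.
# All names and thresholds are illustrative assumptions, not the real design.
from dataclasses import dataclass, field

@dataclass
class Commitment:
    goal: str
    active: bool = True  # binds across ticks until explicitly released

@dataclass
class Agent:
    memory: list = field(default_factory=list)        # persistent memory
    energy: float = 1.0                               # internal state the agent can sense
    commitments: list = field(default_factory=list)   # commitments that bind across time
    self_model: dict = field(default_factory=dict)    # participates in its own control

    def sense_internal(self):
        # The agent can observe its own state, not just the world.
        return {"energy": self.energy,
                "open_commitments": sum(c.active for c in self.commitments)}

    def tick(self, observation):
        # One pass of the continuous control loop.
        self.self_model.update(self.sense_internal())  # self-model tracks internal state
        self.memory.append(observation)                # persistence across ticks
        # Control depends on the self-model, not only on the current input.
        if self.self_model["energy"] < 0.2:
            self.energy += 0.1
            return "rest"
        if any(c.active for c in self.commitments):
            self.energy -= 0.1
            return f"pursue:{self.commitments[0].goal}"
        return "idle"

agent = Agent()
agent.commitments.append(Commitment("explore"))
actions = [agent.tick(obs) for obs in ["a", "b", "c"]]
print(actions)            # the commitment shapes behavior on every tick
print(len(agent.memory))  # memory persists across the loop
```

Even in this toy, the properties are architectural: persistence, commitments, and self-sensing exist because the structure provides them, not because training happened to produce them.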

This means starting from a different question. Not “what can the system do?” but “what is the system?” Not “how do we scale capability?” but “what structure produces agency?”

The answer is necessarily a best guess. I do not claim certainty about which architecture produces consciousness. I believe that the question is architectural, that the answer must be explicit, and that guessing well - then testing, inspecting, and revising - is how progress is made.

This is my path. The design space is vast, and there is room for many approaches. I claim only that this one is coherent - that the pieces fit together, and that the criteria are testable. The papers in this series attempt to cover it in full.


Theoretical Context

This material adopts the structure of a research paper, but its primary purpose is architectural rather than explanatory. It should be understood as a design-oriented synthesis: an effort to integrate ideas from multiple traditions into a framework that can guide the construction of artificial agents.

The work draws on established traditions in cybernetics (Ashby, Wiener), cognitive architecture (Newell, Baars), selfhood and consciousness (Damasio, Metzinger, Dennett), autopoiesis (Maturana, Varela), and control theory (Boyd, Powers), among others. These sources are treated not as doctrines to be adopted wholesale, but as conceptual tools assembled into a cohesive and implementable design space.

The emphasis is on structural clarity and architectural coherence rather than exhaustive citation. The goal is to articulate a framework that can be systematically explored, refined, and implemented.


Dar Aystron

Dar Aystron is an independent researcher working at the intersection of artificial intelligence, cognitive architecture, and the engineering of artificial agency.