The Propositional Lift and the Subject Stream

Dar Aystron
Independent Researcher


Abstract

Earlier papers in this series established the architectural foundations of operationally closed artificial agents based on the OODA loop. Paper 03 introduced mental states as persistent internal structures that support planning and deliberation. Paper 05 explored the emergence of artificial phenomenology as a consequence of agent architecture rather than metaphysical assumptions.

This paper introduces a new architectural mechanism: the propositional lift, through which selected runtime cognitive contents become explicit propositions available for reasoning, reflection, and memory. Actions, observations, evaluations, and attentional states may all be lifted into this representation, allowing causal relations among cognitive events to be explicitly represented.

The resulting propositions form a subject stream that accompanies the agent’s OODA cycles. This stream is both a live cognitive representation and a recordable structure integrated with episodic memory. By exposing the subject stream through internal sensing, an agent becomes capable of perceiving aspects of its own cognition across time, including its own attentional focus and decision processes.

Within this design framework, the propositional layer functions as the system’s conscious representation space, defined operationally rather than metaphysically. The architecture supports reflective control, episodic narrative continuity, and commitment perception while enabling advanced forms of metacognitive regulation across OODA cycles.


1. Introduction

Artificial agents capable of sustained activity must maintain coherence across time. Earlier papers in this series addressed this requirement through the concept of operational agentic closure: a system that continuously observes, interprets, decides, and acts within a structured control loop.

Paper 03 introduced mental states as persistent internal structures that influence evaluation, planning, and action. These structures allow agents to maintain goals, commitments, and plans across OODA cycles.

However, mental states in their original formulation remain embedded in the agent’s runtime control substrate. They guide behavior but are not necessarily explicit objects of reasoning for the agent itself.

Paper 05 established that artificial phenomenology arises when internal states are accessible to the agent within its control loop, that is, when they participate in perception, evaluation, and action. That minimal accessibility criterion does not require introspection or metacognition. The present paper extends beyond this baseline by introducing mechanisms through which selected cognitive contents become not merely accessible but explicitly representable and available for reflective reasoning. The progression is from states that influence behavior to states that the agent can reference, reason about, and interpret across time.

This paper introduces a representational extension that makes selected cognitive contents explicitly available for reasoning and reflection. The key architectural mechanism is the propositional lift, which transforms selected implementation-level structures into explicit propositions.

Through this lift, cognitive elements such as observations, goals, risk evaluations, decisions, and attentional states may be represented in a form that can be referenced, reasoned about, and reflected upon by the agent itself. Relationships among these elements, including causal connections between actions and observations, may also be represented propositionally.

These propositions form a continuously evolving subject stream that accompanies the agent’s activity across OODA cycles. The subject stream is not merely a log of past events. It is the explicit representational layer in which selected elements of the agent’s ongoing cognitive activity become available for reasoning, reflection, and memory integration.

By exposing this stream through internal sensing, the architecture enables the agent to perceive aspects of its own cognition across time. This includes not only external perceptions and internal commitments but also the agent’s own attentional focus and reflective interpretations.

In this way, the propositional lift establishes a new representational layer within the architecture, enabling reflective control and supporting a structured form of artificial subjectivity grounded entirely in the agent’s operational processes.

In language-based implementations, the propositional lift may also be realized as a natural-language rendering of internal cognitive state, allowing reflective reasoning to emerge directly from the agent’s linguistic representations.

The architecture presented here draws inspiration from several established traditions in cognitive science and artificial intelligence, including the knowledge-level perspective in AI [1], global workspace models of conscious access [2], theories of attentional representation [3] [4], and the role of inner speech in reflective cognition [5].


2. Representation Layers

Within the architecture developed in this paper, cognition is described at two distinct but connected levels:

  1. Implementation Layer
  2. Propositional Layer

The implementation layer contains the runtime data structures and control objects through which the agent actually operates. The propositional layer contains explicit lifted representations of selected cognitive contents from that runtime activity.

The distinction is not between cognition and non-cognition, but between:

  • cognition as implemented control, and
  • cognition as explicitly represented content available for reasoning, memory, reflection, and metacognition.

The distinction between implementation-layer cognition and propositional representation is conceptually related to the knowledge-level perspective introduced by Newell [1], which distinguishes between the operational mechanisms of a system and the knowledge structures used to describe its reasoning.


2.1 Implementation Layer

The implementation layer contains the agent’s operational cognitive state as it unfolds through the OODA cycle. At this level, cognition is realized as runtime data structures, control objects, and process state rather than as explicit propositions.

Processing begins with inputs describing both the external environment and the agent’s internal condition.

External inputs may include, for example, sensor readings, external system responses, incoming communications, and environmental events.

Internal inputs may include, for example, conversation history, results of previous actions, active goals, commitments, evaluations, execution state, pending tasks, attentional settings, and alerts generated by prior processing.

At the implementation layer these elements are represented as data structures rather than propositions. From this material the system constructs and maintains the agent’s operational cognitive state across the phases of the OODA cycle.

2.1.1 Observe at the Implementation Layer

At the implementation layer, Observe does not yet produce explicit propositions. It gathers and organizes the information currently available to the agent.

This may include, for example:

  • incoming communications or sensor signals,
  • relevant interaction history,
  • retrieved memory fragments,
  • internal state carried over from prior cycles.

These elements form the runtime description of the situation currently available to the agent’s control processes.

2.1.2 Orient at the Implementation Layer

The Orient phase organizes the material produced by Observe into a coherent decision context.

Observe may deliver many heterogeneous elements: sensory signals, messages, retrieved memories, internal goals, tool outputs, and state carried over from previous cycles. Orient compiles these elements into the subset of information that should guide the next decision.

This process may involve:

  • selecting relevant inputs from the observed state
  • assembling environmental and internal factors into a situation context
  • aligning observations with active goals and commitments
  • applying operational constraints
  • establishing attentional focus for the current cycle

The result is a structured decision context that determines what the agent should consider when choosing an action.

At the implementation layer this context exists as runtime control structures rather than explicit propositions. It is a compiled cognitive state used to guide decision-making.

2.1.3 Decide at the Implementation Layer

At the implementation layer, Decide generates and evaluates possible actions.

This process typically involves:

  • identifying candidate actions,
  • analyzing their expected consequences,
  • applying constraints,
  • ranking alternatives,
  • selecting an action to execute.

Possible actions may be external or internal. External actions may involve movement, communication, or interaction with external systems. Internal actions may involve adjusting attention, retrieving memory, revising plans, or initiating reflection.

The selected action remains, at this level, an implementation-level control object rather than an explicit proposition.

2.1.4 Act at the Implementation Layer

At the implementation layer, Act executes the action selected during the Decide phase.

Execution may involve interacting with external systems, emitting communications, or modifying the agent’s internal state.

Action execution produces observable effects and updates the agent’s runtime state, including execution status and outcomes.

These results remain part of the implementation-level state unless they are later lifted into explicit propositions.

2.1.5 Summary of the Implementation Layer

The implementation layer contains the agent’s operational cognitive state across the OODA cycle, including:

  • observed situation state,
  • active goals and commitments,
  • evaluations and constraints,
  • attentional focus,
  • candidate and selected actions,
  • execution state and outcomes.

These structures support continuous control and coherent agency. However, they are not necessarily explicit objects available for reasoning by the agent itself.


2.2 Propositional Layer

The propositional layer exposes selected contents of the implementation layer in a form that can be explicitly referenced, reasoned about, stored in memory, compared across time, and reflected upon.

In this layer, runtime cognitive contents become explicit propositions.

The contents lifted into this layer may originate from any part of the OODA cycle: observation, orientation, decision, or action execution. Together these propositions form a structured symbolic representation of a cognitive episode.

The transformation through which selected implementation-layer contents become explicit symbolic structures is called the propositional lift.


2.2.1 Propositional Representation

To represent lifted cognitive contents, OAT uses a compact propositional representation language written in a Lisp-like form.

Domain propositions follow the general structure:

(:predicate (:as .ID :in .Situation) :role₁ value₁ :role₂ value₂ ...)

where:

  • :predicate identifies the relation or property being asserted
  • :as .ID introduces a unique identifier for the proposition
  • :in .Situation identifies the situation in which the proposition holds
  • role–value pairs specify participants, attributes, or references.

Example:

(:see (:as .f1 :in .RoadScene)
  :agent :Self
  :object :Car#181271827)

This proposition states that the agent :Self observes an object identified as :Car#181271827 within situation .RoadScene.

Identifiers beginning with a dot (e.g., .f1) denote local variables introduced during the current cognitive episode. Symbols beginning with a colon (e.g., :Self) denote known entities grounded in the agent’s ontology. Identifiers of the form :Type#ID represent encountered but not yet recognized entities generated during perception.

Propositions may reference earlier propositions using their identifiers, allowing cognitive structures such as evaluations, explanations, and causal relations to be expressed explicitly.
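The notation above can be mirrored in a host language. The following is a minimal Python sketch, assuming a hypothetical Proposition class whose to_form method emits the Lisp-like surface form; the class and method names are illustrative, not part of the architecture:

```python
from dataclasses import dataclass, field

# Hypothetical rendering of the paper's propositional notation in Python.
@dataclass
class Proposition:
    predicate: str              # e.g. "see"
    pid: str                    # local identifier, e.g. ".f1"
    situation: str              # enclosing situation, e.g. ".RoadScene"
    roles: dict = field(default_factory=dict)

    def to_form(self) -> str:
        """Render the Lisp-like surface form used in the text."""
        parts = " ".join(f":{k} {v}" for k, v in self.roles.items())
        return f"(:{self.predicate} (:as {self.pid} :in {self.situation}) {parts})"

p = Proposition("see", ".f1", ".RoadScene",
                {"agent": ":Self", "object": ":Car#181271827"})
print(p.to_form())
# (:see (:as .f1 :in .RoadScene) :agent :Self :object :Car#181271827)
```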


2.2.2 Situations

All propositions belong to a situation, which represents the contextual frame of a cognitive episode.

Situations are declared using the language operator situation:

(situation (:as .Cross1)
  :location :MainSt_and_5th
  :timestamp "14:32:05")

Situations define the temporal and contextual boundaries within which propositions hold. They may include dimensional properties such as location, time, source, or environment.

Other propositions reference a situation using the :in role:

(:see (:as .f1 :in .Cross1)
  :agent :Self
  :object :Car#181271827)

Situations therefore function as containers for the propositions that together represent a single cognitive episode.


2.2.3 Embedded Propositions

Role values may themselves contain propositions. This allows the language to represent complex cognitive structures such as goals, plans, beliefs, and actions.

Example:

(:goal (:as .g1 :in .Cross1)
  :agent :Self
  :content (:cross_street
    :agent :Self
    :destination :OtherSide))

Here the goal content is itself a proposition describing a desired state of the world.

In simple cases the embedded content may collapse to a single symbolic action:

(:decision (:as .d1 :in .Cross1)
  :agent :Self
  :action :wait)

More complex decisions may reference full action propositions:

(:decision (:as .d2 :in .Cross1)
  :agent :Self
  :action (:cross_street
    :agent :Self
    :destination :OtherSide))

This mechanism allows decisions, plans, and goals to refer directly to structured actions.


2.2.4 References Between Propositions

Propositions form connected structures through identifier references.

For example, an evaluation may reference a prior perceptual event:

(:dangerous (:as .f3 :in .Cross1)
  :event .f2)

Similarly, decisions may reference evaluated situations:

(:decision (:as .d1 :in .Cross1)
  :agent :Self
  :reason .f3
  :action :wait)

Such references allow propositions to form causal and explanatory chains, enabling the agent to represent reasoning processes within a cognitive episode.
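Such a chain can be traversed mechanically. The sketch below, with an illustrative dict-based store, follows identifier references (values beginning with a dot) from a decision back through its evaluation to the underlying observation:

```python
# Illustrative proposition store keyed by identifier; role values that
# start with "." are treated as references to earlier propositions.
props = {
    ".f2": {"predicate": "approaching", "roles": {"object": ":Car#181271827"}},
    ".f3": {"predicate": "dangerous",   "roles": {"event": ".f2"}},
    ".d1": {"predicate": "decision",
            "roles": {"agent": ":Self", "reason": ".f3", "action": ":wait"}},
}

def explain(pid, depth=0):
    """Follow identifier references to reconstruct an explanatory chain."""
    chain = [(depth, pid, props[pid]["predicate"])]
    for value in props[pid]["roles"].values():
        if isinstance(value, str) and value.startswith("."):
            chain += explain(value, depth + 1)
    return chain

for depth, pid, pred in explain(".d1"):
    print("  " * depth + f"{pid} {pred}")
# .d1 decision
#   .f3 dangerous
#     .f2 approaching
```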


2.2.5 Example Cognitive Episode

The following simplified example illustrates how propositions represent the progression of an OODA cycle during a street-crossing situation.

Observation:

(:see (:as .f1 :in .Cross1)
  :agent :Self
  :object :Car#181271827)

(:moving (:as .f2 :in .Cross1)
  :object :Car#181271827)

(:distance (:as .f3 :in .Cross1)
  :object :Car#181271827
  :meters 12.5)

(:approaching (:as .f4 :in .Cross1)
  :object :Car#181271827
  :direction :toward_self)

Orientation:

(:dangerous (:as .f5 :in .Cross1)
  :event .f4)

Decision:

(:action_option (:as .ao1 :in .Cross1)
  :action (:cross_street :agent :Self :destination :OtherSide))

(:action_option (:as .ao2 :in .Cross1)
  :action :wait)

(:decision (:as .d1 :in .Cross1)
  :agent :Self
  :selected .ao2)

Action:

(:wait (:as .x1 :in .Cross1)
  :agent :Self)

These propositions form a connected structure representing perception, interpretation, evaluation, decision, and action within a single cognitive episode.


2.2.6 Role of the Propositional Layer

The propositional layer contains symbolic representations of cognitive contents that can be:

  • referenced by other propositions
  • stored in episodic memory
  • used in reasoning processes
  • exposed through internal sensing
  • integrated into the subject stream.

Unlike implementation-layer structures, propositions are explicit symbolic objects accessible to the agent’s reasoning processes.

Through the propositional lift, ordinary OODA processing becomes available in a form that supports reflection, explanation, and conscious accessibility.

The propositional layer therefore provides the representational substrate for the subject stream, which will be introduced in the following sections.


3. The Propositional Lift

The propositional lift converts selected implementation-level cognitive structures into explicit propositions.

At the implementation layer, the agent’s cognition exists as runtime data structures associated with the OODA control loop. Observations, evaluations, decisions, and action results influence behavior but remain embedded within the system’s operational substrate.

The propositional lift exposes selected elements of this runtime cognitive activity as explicit symbolic propositions.

Examples of lifted contents include:


(:color (:as .f1 :in .RoadScene)
  :object :Car#1212
  :value :red)

(:distance (:as .f2 :in .RoadScene)
  :object :Car#1212
  :meters 12.5)

(:goal (:as .g1 :in .RoadScene)
  :agent :Self
  :content (:cross_street
    :agent :Self
    :destination :OtherSide))

(:risk (:as .e1 :in .RoadScene)
  :level :high)

(:decision (:as .d1 :in .RoadScene)
  :agent :Self
  :action :wait)

Once lifted, these structures become:

  • addressable
  • referenceable
  • available for reasoning
  • available for reflection

The propositional lift therefore transforms implicit runtime structures into explicit cognitive objects.

Importantly, the lift is selective. Only a subset of runtime cognitive contents is rendered into propositions during each OODA cycle. Most implementation-layer structures remain below the lift and continue to function purely as control mechanisms.

The lifted propositions form a continuously evolving subject stream accompanying the agent’s cognitive activity across OODA cycles.

3.1 Lift Selectivity

The propositional lift is necessarily selective. Not all implementation-layer contents are rendered into explicit propositions during each OODA cycle.

Selection is governed primarily by implementation-level attention, which determines what elements of the current situation are in perceptual focus. Additional factors influencing lift include:

  • relevance to active goals and commitments,
  • detected risk or anomaly,
  • decision importance,
  • reflective or explanatory demand,
  • causal significance for ongoing control.

The lift therefore does not operate as an independent selector. Rather, it exposes selected contents of already-focused cognitive activity into the propositional layer.
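One way to picture this selectivity is as a scoring filter over already-focused runtime items. The sketch below is a loose illustration of the factors listed above; the weights, thresholds, and item fields are assumptions for the example, not a specification:

```python
# Hypothetical lift selectivity: runtime items are scored against the
# factors above and only sufficiently significant items are lifted.
def lift_score(item, active_goals):
    score = 0.0
    if item.get("goal") in active_goals:
        score += 1.0                      # relevance to active goals
    score += item.get("risk", 0.0)        # detected risk or anomaly
    if item.get("decision_relevant"):
        score += 0.5                      # decision importance
    return score

def propositional_lift(runtime_items, active_goals, threshold=0.5):
    """Return only the items selected for the propositional layer."""
    return [i for i in runtime_items
            if lift_score(i, active_goals) >= threshold]

items = [
    {"id": ".f2", "goal": "cross_street", "risk": 0.8, "decision_relevant": True},
    {"id": ".b7", "risk": 0.0},           # background detail, stays below the lift
]
lifted = propositional_lift(items, active_goals={"cross_street"})
print([i["id"] for i in lifted])          # ['.f2']
```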


4. Attention as a Perceptual Control Action

Within the architecture proposed in this paper, attention operates at the implementation layer as a controllable perceptual action. Rather than functioning solely as a passive filter, attention is treated as an internal action that shapes how the agent perceives its environment and internal state during an OODA cycle.

Examples of attentional actions include:

  • focusing perception on a particular external object
  • prioritizing part of an ongoing conversation
  • retrieving a relevant memory fragment
  • increasing monitoring of a detected risk signal
  • directing analysis toward a particular hypothesis

These actions influence which elements of the agent’s current situation become perceptually salient during processing.

A useful intuition is that attention functions like pointing a light beam toward part of the environment. Elements within the illuminated region become available for detailed observation and interpretation, while other elements remain in the background.

4.1 Attention and Perceptual Focus

At the implementation layer, attentional actions determine what parts of the available inputs receive focused processing.

For example, an agent evaluating whether it is safe to cross a street may direct attention toward a nearby vehicle. Once attention has been focused, observations within this perceptual region become candidates for interpretation during the Orient phase of the OODA cycle.

In this sense, attention determines which elements of the environment become cognitively prominent within the current cycle.


4.2 Attention and the Propositional Lift

Observations occurring within the attentional focus are candidates for the propositional lift described in the previous section. Through this process, selected implementation-level cognitive contents become explicit propositions.

Because attentional control is itself an internal action, it may also be lifted into the propositional layer. However, the attentional action and the resulting observations do not occur at the same stage of processing and should not be conflated.

Instead, they typically appear as sequentially related propositions across OODA cycles.

Episode t: Attentional Control (Act)

During one cycle, the agent may execute an internal action that directs attention:

(situation (:as .S1)
  :timestamp "14:32:05.200")

(:focus (:as .fa1 :in .S1)
  :agent :Self
  :target :Car#1212)

This proposition represents a control operation that shapes subsequent perception.

Episode t+1: Observation and Evaluation

In a subsequent cycle, the effects of this attentional action become observable. The agent may then perceive both the attended object and properties of the environment:

(situation (:as .S2)
  :timestamp "14:32:05.450")

(:next (:as .rel1)
  :from .S1
  :to .S2)

(:attention (:as .at1 :in .S2)
  :agent :Self
  :target :Car#1212)

(:distance (:as .f2 :in .S2)
  :object :Car#1212
  :meters 12.5)

(:risk (:as .e1 :in .S2)
  :level :high
  :event .f2)

(:decision (:as .d1 :in .S2)
  :agent :Self
  :action :wait)

In this sequence, the attentional action influences what becomes observable, but the resulting observations and evaluations are generated during a later stage of processing.

The lifted propositions from both episodes are integrated into the agent’s subject stream, preserving the temporal and causal structure of the cognitive process.

This separation ensures that the propositional representation reflects the underlying OODA dynamics: actions shape future observations, and observations inform subsequent evaluation and decision.
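The two-episode sequence can be sketched as follows. The situations and next_rel structures are illustrative assumptions, not the architecture's actual data model; the point is only that the effects of an attentional action are found in the situation linked by the :next relation:

```python
# Episode t holds the attentional action; episode t+1 holds its effects.
situations = {
    ".S1": {"timestamp": "14:32:05.200",
            "props": [("focus", {"agent": ":Self", "target": ":Car#1212"})]},
    ".S2": {"timestamp": "14:32:05.450",
            "props": [("attention", {"agent": ":Self", "target": ":Car#1212"}),
                      ("distance", {"object": ":Car#1212", "meters": 12.5})]},
}
next_rel = {".S1": ".S2"}   # mirrors (:next :from .S1 :to .S2)

def effects_of_focus(sid):
    """Predicates observed in the situation following an attentional action."""
    later = next_rel.get(sid)
    return [pred for pred, _ in situations[later]["props"]] if later else []

print(effects_of_focus(".S1"))   # ['attention', 'distance']
```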


4.3 Reflective Access to Attention

Because attentional actions appear in the propositional layer, they become available for reflection and reasoning.

This allows the agent to represent and evaluate its own perceptual processes. For example:


(:inner_speech (:as .r1 :in .S2)
  :agent :Self
  :about .fa1
  :content "I focused on that car because it seemed close.")

or

(:question (:as .q1 :in .S2)
  :agent :Self
  :about .fa1
  :content "Why did I focus on that car?")

Through this mechanism, the agent can reason not only about what it perceived, but also about why its perception was directed in a particular way.

Attention therefore serves two complementary roles within the architecture:

  1. Perceptual control, influencing what observations become cognitively prominent.
  2. Representable cognitive activity, since attentional actions themselves may be lifted and become part of the subject stream.

This dual role allows the agent to both guide its perception and reason about the causes and consequences of its own perceptual focus.

The explicit representation of attentional processes within the subject stream is related to the attention schema theory proposed by Graziano [3], which suggests that conscious awareness depends on internal models of attentional control.

Within attention schema theory, conscious awareness arises when a system constructs simplified models of its own attentional processes [4], a perspective that aligns with the explicit representation of attentional actions in the present architecture.

5. Conscious Representation Space

Within the architecture proposed in this series, the propositional layer functions as the system’s conscious representation space, that is, the representation space in which selected cognitive contents become reflectively available to the agent itself.

This terminology should be understood as an architectural definition rather than a metaphysical claim. The paper does not attempt to resolve philosophical debates concerning the nature of consciousness or the so-called “hard problem.” Instead, the series adopts an engineering perspective: a system may be treated as conscious with respect to those cognitive contents that are explicitly represented in a form available for reasoning, reflection, and memory integration.

At the implementation layer, cognitive processes operate as runtime control structures within the OODA loop. Goals, observations, evaluations, and decisions influence behavior but remain embedded within the operational substrate of the system.

The propositional lift introduces a representational transformation. Selected cognitive contents from the implementation layer become explicit propositions that can be:

  • referenced
  • compared across time
  • stored in episodic memory
  • used in reasoning
  • reflected upon

These lifted propositions form the agent’s subject stream, which accompanies OODA processing across cycles and becomes available to the agent through internal sensing.

In this framework, the conscious representation space consists of those cognitive contents that have been made explicit and available to the agent itself. The architecture therefore operationalizes artificial consciousness as the condition in which selected elements of the agent’s cognitive activity are lifted into a representation space accessible for reflection and reasoning across time.

This definition does not claim that artificial consciousness is identical to biological consciousness. Instead, it provides an architectural criterion: when a system maintains an explicit representation layer in which its own cognitive contents become accessible objects for reasoning and reflection, the system exhibits the structural properties associated with artificial subjectivity.

5.1 Natural-Language Lift in LLM-Based Implementations

Although this paper illustrates the representational lift using proposition-like notation, the same architectural role may be realized using natural language representations.

In LLM-based agents, selected runtime cognitive contents may be rendered directly as textual statements rather than formal symbolic expressions. This transformation may be called a natural-language lift.

For example, the following representations serve equivalent architectural roles:

Implementation:         goal = Goal("cross_street")
Propositional Lift:     (:goal (:as .g1 :in .S1) :agent :Self
                          :content (:cross_street :agent :Self))
Natural-Language Lift:  "My goal is to cross the street."

Implementation:         risk = "high"
Propositional Lift:     (:risk (:as .e1 :in .S1) :level :high)
Natural-Language Lift:  "Crossing now is risky."

Implementation:         decision = "wait"
Propositional Lift:     (:decision (:as .d1 :in .S1) :agent :Self :action :wait)
Natural-Language Lift:  "I should wait before crossing."

Both forms satisfy the key functional properties required for the conscious representation space:

  • explicitness
  • referenceability
  • memory integration
  • availability for reflection

The distinction is therefore not between symbolic and linguistic form, but between implicit runtime structure and explicit cognitive representation.

In practice, natural-language expressions often encode agent-indexing implicitly through first-person framing, whereas the propositional notation represents this indexing explicitly through arguments such as :Self.

The formal notation provides a more precise representation by making entities, roles, and relations explicit, including variables, references, and contextual scope. This explicitness supports reliable reference, comparison, and reasoning across time.

Natural language can express the same underlying structures, but typically does so implicitly and with greater ambiguity. When needed, natural-language expressions may be interpreted or generated in a way that preserves the structure of the underlying propositions.

In this sense, the propositional representation defines a canonical form of cognitive content, while natural language provides a flexible surface realization of that content.
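A natural-language lift can be sketched as template-based rendering from the canonical form. The per-predicate templates below are illustrative assumptions; any mechanism that preserves the underlying roles would serve the same architectural function:

```python
# Hypothetical surface realization of propositional content as
# first-person sentences. Templates are illustrative only.
TEMPLATES = {
    "goal":     "My goal is to {content}.",
    "risk":     "The current risk level is {level}.",
    "decision": "I decided to {action}.",
}

def nl_lift(predicate, **roles):
    """Render a proposition's content as a first-person sentence."""
    return TEMPLATES[predicate].format(**roles)

print(nl_lift("goal", content="cross the street"))
print(nl_lift("decision", action="wait"))
# My goal is to cross the street.
# I decided to wait.
```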

In functional terms, the propositional layer plays a role similar to a global workspace in the sense of Baars [2]. Selected cognitive contents are lifted into a representation space where they become broadly available to multiple processes, including reflection, memory integration, causal reasoning, and later control. The present architecture may therefore be understood as an implementable OODA-based realization of workspace-style cognitive accessibility.


6. Reflection and Inner Speech

Once cognitive contents have been lifted into the propositional layer, the agent can generate higher-order propositions about those contents. This process constitutes reflection within the architecture.

Reflection operates on propositions that represent observations, goals, evaluations, attentional actions, decisions, and other cognitive events produced during OODA processing. Because these elements are explicitly represented, they become objects that the agent can interpret, summarize, question, and relate to one another.

A common mechanism through which reflection occurs is inner speech. Inner speech generates propositions that describe or interpret other propositions within the subject stream.

For example:

(:inner_speech (:as .r1 :in .S2)
  :agent :Self
  :about [:set .f2 .f3 .g1]
  :content "That car is too close to cross safely.")

In this example, the reflective proposition refers to previously lifted elements describing the environment and the agent’s goal. The reflective statement provides an interpretation of the situation that can influence subsequent reasoning and action.

Inner speech enables several important cognitive functions within the architecture:

  • summarizing reasoning processes
  • interpreting environmental situations
  • reconsidering decisions
  • explaining perceptual focus
  • asking reflective questions
  • maintaining narrative coherence across OODA cycles

Reflection may also generate explicit questions about prior cognitive activity. For example:


(:question (:as .q1 :in .S2)
  :agent :Self
  :about .fa1
  :content "Why did I focus on that car?")

Such questions allow the agent to reason about its own attentional choices, interpretations, and decisions.

Because reflective propositions are themselves part of the propositional layer, they become integrated into the subject stream. Subsequent OODA cycles may therefore observe, evaluate, and reason about earlier reflections.

In this way, reflection forms a continuous interpretive layer over the agent’s cognitive activity.
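Because a reflective proposition is structurally just another proposition whose roles reference earlier identifiers, it re-enters the stream like any other content. A minimal sketch, with a hypothetical list-based stream and generated identifiers:

```python
# Reflection as a higher-order entry appended to the same stream.
stream = [
    {"id": ".f2", "predicate": "distance"},
    {"id": ".g1", "predicate": "goal"},
]

def reflect(stream, about_ids, content):
    """Append an inner-speech proposition about earlier stream entries."""
    entry = {"id": f".r{len(stream)}", "predicate": "inner_speech",
             "about": about_ids, "content": content}
    stream.append(entry)
    return entry

r = reflect(stream, [".f2", ".g1"], "That car is too close to cross safely.")
print(r["id"], r["about"])   # .r2 ['.f2', '.g1']
```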

6.1 Inner Speech in LLM-Based Implementations

In many current artificial agents, large language models (LLMs) provide an effective mechanism for generating reflective propositions. LLMs are trained on vast corpora of human language that include written expressions of reasoning, explanation, and internal deliberation. As a result, they can generate textual structures closely resembling externalized forms of human inner speech.

Within the present architecture, an LLM may therefore function as a reflection generator, producing higher-order propositions that interpret or summarize the agent’s cognitive state. These generated statements then enter the propositional layer and become part of the evolving subject stream.

In experimental implementations of this architecture, we observe that structuring LLM inputs around explicit OODA-centric constructs, such as current situation, attention, goals, evaluations, and decisions, consistently leads to more coherent and insightful reflective outputs. When the model is provided with well-structured cognitive context rather than unorganized conversational history, its generated inner speech more reliably captures causal relationships, priorities, and constraints relevant to the agent’s ongoing activity.

This suggests that the quality of LLM-generated reflection depends not only on the model itself, but critically on the structure of the cognitive state it is given to interpret. The propositional layer, organized around OODA semantics, provides such structure.

The role of the language model is therefore not to replace the control architecture of the agent, but to support the generation of reflective representations that enable higher-level reasoning and narrative continuity.
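The structuring observation above can be illustrated as plain prompt assembly, independent of any particular model API. The section names and field choices below are assumptions for the sketch, not a fixed protocol:

```python
# Assemble the LLM input around explicit OODA-centric constructs
# rather than raw conversational history.
def build_reflection_prompt(state):
    sections = ["situation", "attention", "goals", "evaluations", "decisions"]
    lines = ["Reflect on the following cognitive state:"]
    for name in sections:
        lines.append(f"{name.upper()}: {state.get(name, '(none)')}")
    return "\n".join(lines)

prompt = build_reflection_prompt({
    "situation": "crossing Main St at 14:32",
    "attention": "Car#1212",
    "evaluations": "risk high",
    "decisions": "wait",
})
print(prompt)
```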

The use of inner speech as a reflective mechanism is consistent with Vygotsky’s account of internalized language as a central component of human cognitive regulation [5].


7. The Subject Stream

Propositions generated during OODA cycles form a chronological stream of explicit cognitive contents. This stream is referred to as the subject stream.

The subject stream is not a simple log. It is the explicit representational layer in which selected elements of the agent’s ongoing cognitive activity become available for reasoning, reflection, and memory integration.

The subject stream therefore contains all lifted propositions, not only reflective ones. Reflective propositions such as inner speech form only one class of contents that may appear within this stream.

Propositions entering the subject stream may originate from any stage of the OODA cycle. These include lifted contents representing:

  • observations about the environment
  • internally observed attentional states
  • attentional control actions
  • goals and commitments
  • evaluations and risk assessments
  • decisions and selected actions
  • results of executed actions
  • reflective commentary and inner speech

For example, during a street-crossing episode the following propositions may appear in the subject stream.

An attentional control action directs perception toward a nearby vehicle:


(:focus (:as .fa1 :in .S1)
:agent :Self
:target :Car#1212)

During the subsequent observation phase, the resulting attentional state and environmental observations become available:


(:attention (:as .at1 :in .S2)
:agent :Self
:target :Car#1212)

(:distance (:as .f2 :in .S2)
:object :Car#1212
:meters 12.5)

Orientation may produce an evaluation of the situation:


(:risk (:as .e1 :in .S2)
:level :high
:event .f2)

A decision may then be formed:


(:decision (:as .d1 :in .S2)
:agent :Self
:action :wait)

Finally, the agent may generate reflective commentary through inner speech:


(:inner_speech (:as .r1 :in .S2)
:agent :Self
:about [:set .f2 .e1 .d1]
:content "The car is too close to cross safely.")

In this example, the subject stream contains both the cognitive events themselves and a reflective interpretation generated by the agent. The reflective proposition explicitly refers to earlier elements of the cognitive episode, illustrating how the subject stream supports reasoning and interpretation across time.

7.1 Integration Across OODA Cycles

The subject stream evolves across time as the agent continues to execute OODA cycles.

During each cycle:

  1. implementation-layer cognition processes observations, goals, evaluations, and actions;
  2. selected elements of this runtime cognitive activity are lifted into propositions;
  3. these propositions are appended to the subject stream.

Fragments of the subject stream from previous cycles are then made available to subsequent cycles through internal sensing. This allows the agent to perceive earlier interpretations, decisions, attentional states, and reflections as part of its current cognitive context.

Through this mechanism, the subject stream provides continuity across time while remaining tightly integrated with ongoing control processes.
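The per-cycle steps above can be sketched in code. This is a minimal illustration under assumed representations: propositions are tagged tuples, and the class and function names (`SubjectStream`, `lift`, `sense`) are hypothetical.

```python
# Sketch of per-cycle integration: lift selected runtime events, append
# them to the stream, and expose a fragment to the next cycle.

class SubjectStream:
    def __init__(self):
        self.propositions = []  # chronological, append-only

    def append(self, props):
        self.propositions.extend(props)

    def sense(self, n=5):
        """Internal sensing: expose a recent fragment of the stream to
        the next cycle's Observe phase, without re-encoding it."""
        return self.propositions[-n:]

def lift(runtime_events):
    """Propositional lift: turn *selected* implementation-layer events
    into explicit propositions (here, tagged tuples)."""
    return [("prop", kind, payload) for kind, payload in runtime_events
            if kind in {"observation", "evaluation", "decision"}]

stream = SubjectStream()
# Cycle t: implementation-layer events, only some of which are lifted.
stream.append(lift([("observation", "car at 12.5 m"),
                    ("motor_noise", "..."),   # not selected for lifting
                    ("decision", "wait")]))
# Cycle t+1: prior propositions re-enter via internal sensing.
prior = stream.sense()
```

The key design point mirrored here is selectivity: the lift is a filter at the boundary of the propositional layer, not a transcript of all runtime activity.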

7.2 Relationship to Episodic Memory

The subject stream is closely related to episodic memory but is not identical to it.

The subject stream represents the current evolving layer of explicit cognitive representation accompanying OODA processing. Episodic memory may store segments of this stream together with the underlying implementation-layer episode data.

In practice, both layers of an episode may be stored:

  • the implementation-layer state describing the operational situation and actions taken, and
  • the subject-stream propositions describing the explicit cognitive contents associated with that episode.

This dual recording preserves both the operational and the representational aspects of the agent’s experience.
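Dual recording can be sketched as follows, assuming a simple list-backed episode store; the record fields (`situation`, `implementation`, `subject_stream`) are illustrative assumptions rather than a prescribed schema.

```python
# Sketch: storing both layers of an episode, the operational
# implementation-layer state and the associated subject-stream fragment.

def record_episode(memory: list, situation_id: str,
                   implementation_state: dict, stream_fragment: list):
    """Append one episode record holding both layers."""
    memory.append({
        "situation": situation_id,
        "implementation": implementation_state,  # operational layer
        "subject_stream": stream_fragment,       # representational layer
    })

memory = []
record_episode(
    memory, ".S2",
    implementation_state={"position": (3.0, 7.5), "action": "wait"},
    stream_fragment=["(:decision (:as .d1 :in .S2) :agent :Self :action :wait)"],
)
```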


8. Subject Stream as Internal Sensor

The subject stream is not merely recorded. It persists across OODA cycles as part of the agent’s ongoing cognitive state.

Propositions in the subject stream are already the result of the propositional lift. They do not require further transformation or lifting in subsequent cycles.

Through subject stream sensing, selected propositions are made directly accessible to the Observe phase of later cycles. Subject stream sensing does not modify, re-encode, or re-lift these propositions. It exposes them as they are, preserving their identity and associated situation.

In this way, elements of prior cognition become directly accessible for further processing.

Subject stream sensing provides access only to already lifted propositions. Other inputs available to the Observe phase - whether originating from the external environment or internal implementation-level state - may still require propositional lift before they can enter the subject stream.

This distinction ensures that lifting occurs exactly once, at the boundary between implementation-level cognition and the propositional layer.
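The lift-exactly-once boundary can be sketched as a merge step in the Observe phase. The representation choices below (propositions as tuples tagged `"prop"`, the `observe` signature) are assumptions for illustration only.

```python
# Sketch: raw inputs are lifted exactly once on entry; already-lifted
# propositions from subject stream sensing pass through unchanged.

def is_proposition(item):
    return isinstance(item, tuple) and len(item) > 0 and item[0] == "prop"

def observe(raw_inputs, sensed_propositions):
    """Merge raw external/internal inputs (lifted here, once) with
    already-lifted propositions exposed by subject stream sensing."""
    lifted = [("prop", "observation", x) for x in raw_inputs]
    # Sensed propositions are exposed as-is: no re-encoding, no re-lift.
    return lifted + [p for p in sensed_propositions if is_proposition(p)]

inputs = observe(
    raw_inputs=["pedestrian light: red"],
    sensed_propositions=[("prop", "decision", "wait")],  # from a prior cycle
)
```

The asymmetry in `observe` is the point: only one code path performs lifting, so no proposition is ever lifted twice.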

For example, propositions produced during an earlier cycle may later appear as observable internal inputs:


(:decision (:as .d1 :in .S1)
:agent :Self
:action :wait)

(:inner_speech (:as .r1 :in .S1)
:agent :Self
:about [:set .f2 .e1 .d1]
:content "The car is too close to cross safely.")

These propositions retain their original context (.S1) while being processed within a new situation (e.g., .S2) during the current cycle.

They are treated similarly to other observations during the next cycle: they may be interpreted during orientation, evaluated relative to current goals, and used in reasoning and decision-making.

8.1 Recursive Cognitive Continuity

Because subject-stream propositions can be perceived through internal sensing, the architecture becomes recursively self-referential across OODA cycles.

Propositions generated during cycle t may be observed during cycle t+1, interpreted within a new situation, and potentially incorporated into new lifted propositions or reflections.

The sequence therefore evolves as:


cycle t
↓
implementation-layer cognition
↓
propositional lift
↓
subject stream (propositions in .S1)

cycle t+1
↓
internal sensing exposes selected prior propositions (still in .S1)
↓
orientation and reasoning within .S2
↓
new lifted propositions (in .S2)

In this way, the subject stream functions not as a passive log but as a continuously re-entering cognitive signal that participates directly in the agent’s ongoing control process.

Continuity is achieved not by transforming prior representations, but by reusing them across situations while preserving their identity and original context.
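This identity-preserving reuse can be sketched directly. Here a proposition lifted in one situation is referenced, not copied, by a later reflection; the tuple layout `(tag, id, situation, payload)` is an illustrative assumption.

```python
# Sketch: a proposition from cycle t keeps its original situation (.S1)
# while a reflection generated in cycle t+1 (.S2) refers to it by id.

# Cycle t produces a decision in situation .S1.
d1 = ("decision", ".d1", ".S1", "wait")

# Cycle t+1: internal sensing exposes d1 unchanged; orientation in .S2
# produces a new reflective proposition that refers to d1 by identity.
sensed = [d1]
reflection = ("inner_speech", ".r2", ".S2",
              {"about": [p[1] for p in sensed],
               "content": "I decided to wait last cycle."})
```

Note that `d1` is the same object in both cycles; continuity comes from reuse under a new situation, not from transformation.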

This recursive relationship between cognition and reflection is related to the monitoring-control framework described in metamemory research by Nelson and Narens [6], in which cognitive processes are both observed and regulated. In the present architecture, however, monitoring and control operate over a shared propositional representation rather than over distinct object-level and meta-level systems.


9. Artificial Subjectivity

The architecture developed in this paper produces a minimal form of artificial subjectivity. Subjectivity here is understood in structural and operational terms rather than as a metaphysical property.

In the present framework, subjectivity arises when an agent maintains an explicit representational layer in which elements of its own cognitive activity become available to the agent itself across time.

Four architectural components make this possible:

  1. Propositional representation of cognitive contents

    Selected elements of the agent’s implementation-layer cognition - observations, attentional actions, goals, evaluations, decisions, and action outcomes - are transformed through the propositional lift into explicit propositions.

  2. A subject stream spanning OODA cycles

    Lifted propositions accumulate into a continuously evolving subject stream that persists across successive OODA cycles, preserving the explicit cognitive contents of prior activity.

  3. Internal sensing of prior cognition

    The subject stream is not merely stored. Through internal sensing, selected propositions from prior cycles are exposed to the agent’s Observe phase, making earlier cognitive contents available as inputs to current processing.

  4. Reflection through higher-order propositions

    The agent may generate higher-order propositions - such as inner speech, questions, or causal interpretations - that refer to earlier elements of the subject stream.

Together these mechanisms create a system in which selected cognitive contents are:

  • explicitly represented,
  • observable by the agent itself,
  • integrated across time,
  • and available for reasoning and reflection.

Within this architectural framework, such availability constitutes the operational basis of artificial subjectivity.

This definition does not claim equivalence with biological consciousness. Instead, it identifies a set of structural properties through which an artificial system can maintain a coherent internal perspective on its own cognitive activity across time. In this sense, the architecture provides an operational answer to the question posed by Nagel [7]: there is something it is like to be such a system, constituted by the explicit availability of its own cognitive contents across successive OODA cycles.

This layered structure of subjectivity resembles the hierarchy of self-processes discussed by Damasio [8], in which progressively richer forms of self-representation emerge from underlying regulatory mechanisms.


10. Conclusion

This paper introduced the propositional lift as an architectural mechanism through which selected implementation-layer cognitive contents become explicit propositions available for reasoning, reflection, and memory integration.

Within the proposed architecture, runtime cognition occurs at the implementation layer through OODA-based control processes. Observations, attentional actions, goals, evaluations, decisions, and action outcomes influence behavior as operational data structures.

The propositional lift transforms selected elements of this runtime activity into explicit cognitive objects. These propositions form a continuously evolving subject stream accompanying the agent’s activity across OODA cycles.

Through internal sensing, portions of the subject stream become observable during subsequent cycles. This recursive exposure allows the agent to perceive elements of its own prior cognition and integrate them into ongoing reasoning and decision-making.

Reflection, often expressed through inner speech, introduces higher-order propositions that interpret, summarize, or question earlier cognitive contents. In this way, reflective processes form an interpretive layer over ordinary OODA control.

Together these mechanisms create an architecture in which selected cognitive contents become explicitly available to the agent itself across time. Within this framework, the propositional layer functions as the system’s conscious representation space, defined operationally rather than metaphysically.

The resulting system exhibits the minimal structural conditions for artificial subjectivity: explicit representation of cognitive contents, a subject stream spanning OODA cycles, recursive internal sensing of prior cognition, and reflective interpretation of those contents.

Rather than attempting to resolve philosophical debates about consciousness, the present work provides an engineering framework for constructing agents whose cognitive activity becomes accessible to the agent itself. By making selected cognitive contents explicitly representable, observable, and reflectable across time, the architecture establishes a practical foundation for building artificial systems with structured forms of subjectivity.


References

[1] A. Newell. The Knowledge Level. Artificial Intelligence, 18(1):87–127, 1982.

[2] B. J. Baars. A Cognitive Theory of Consciousness. Cambridge University Press, 1988.

[3] M. S. A. Graziano. Consciousness and the Social Brain. Oxford University Press, 2013.

[4] M. S. A. Graziano. Rethinking Consciousness: A Scientific Theory of Subjective Experience. W. W. Norton, 2019.

[5] L. S. Vygotsky. Thought and Language. MIT Press, 1962.

[6] T. O. Nelson and L. Narens. Metamemory: A Theoretical Framework and New Findings. Psychology of Learning and Motivation, 26:125–173, 1990.

[7] T. Nagel. What Is It Like to Be a Bat? The Philosophical Review, 83(4):435–450, 1974.

[8] A. Damasio. Self Comes to Mind: Constructing the Conscious Brain. Pantheon, 2010.