Needs and Motivation in OODA-Based Agents

Dar Aystron
Independent Researcher


Abstract

Prior work in this series developed OODA-based agency through control architecture (Paper 02), action realization (Paper 04), the propositional lift and subject stream (Paper 06), attachment-based ethics (Paper 07), planning and temporal control (Paper 08), monitoring critics (Paper 09), and communicative agency (Paper 10). What remains underdeveloped is the role of needs and motivation within this architecture. This paper develops needs as a first-class architectural primitive in OODA-based cognition rather than as auxiliary state.

Needs in OAT are propositional structures with type and intensity, lifted from substrate-level detection of need-relevant conditions into the agent’s subject stream, where they persist across cycles, grow or decay based on how the agent’s situation evolves, and shape action generation through evaluation weighted by the Attachment Matrix (ATTM). The lift mechanism developed in Paper 06 is the bridge: substrate sensors produce signals, the lift fires on need-relevant patterns, and the resulting :need propositions enter the stream as content the agent can reason about, reference, and act on.

The paper develops three commitments. First, OAT-based agents are not request-response systems - the OODA loop runs continuously, internal need state continues to develop between external inputs, and the agent can initiate action when its needs motivate it. Second, motivation must not be present without attachment-grounded evaluation - an agent with motivation but without ATTM is architecturally incomplete and prone to optimization-without-care failure modes. Third, needs are required for advanced agency - an agent without need-based motivation cannot prioritize under resource constraints, cannot connect attachment-weighted values to moment-to-moment cognition, and cannot reach the capability level OAT-based agents are built for. Underneath these three commitments is a single deeper design choice: OAT-based agents are designed to be coupled to the entities they care about, and their motivation, activity, and evaluation all flow from this coupling rather than from self-contained internal state. Needs in OAT are computed over joint state with attached entities rather than over the agent’s internal state alone, which makes safety structural rather than constraint-based - there is no self-isolated form of motivation to constrain, because the motivation itself is constituted relationally.

The treatment draws on three intellectual threads: Minsky’s The Emotion Machine establishes that motivational and affective processes are part of cognitive activity rather than opposed to it; the PSI tradition (Dörner, formalized and extended by Bach into MicroPsi) is the closest theoretical and architectural precedent, with motivation foundational and need taxonomies central; Barrett’s constructivist account provides theoretical grounding for treating propositional content over substrate as constructed by specified mechanisms rather than read off from biology. In OAT-based agents the constructors are the agent designers, who decide the need taxonomy, the substrate-to-need lifting conditions, and how intensity is computed - the foundational case is design-time construction, with learning-based augmentation as a secondary case for agents whose architectures support it. We do not commit to building agents with the human emotional vocabulary (anxiety, joy, shame, anger, love); we commit to building agents with needs as architectural primitives. Whether richer affective categorizations are appropriate for any particular agent type is left to that agent type’s design.


1. Introduction

The relationship between motivation and cognition in agent architectures has often been framed as an opposition. Classical AI systems treated cognition as the central capability and motivation as either irrelevant or as something supplied externally through goals, rewards, or instructions. Affective computing emerged as a distinct subfield to address what cognitive AI was leaving out, but the separation itself was the problem: it treated motivation as something to be added to a cognitive base rather than as integral to how cognition operates.

Marvin Minsky’s The Emotion Machine [2] made the contrary move. Motivational and affective processes, in Minsky’s framing, are not separate from thinking - they are part of thinking, ways the cognitive system operates when certain conditions arise. OAT inherits this orientation. Within the OODA control loop developed across this series, motivation is not a separate subsystem and not an addition to cognitive architecture. It is part of how the loop runs. This paper develops needs as the architectural primitive through which motivation enters OODA-based cognition: how needs are detected, how they are lifted into the propositional layer, how they shape orientation and decision, and how they drive action generation across continuous cycles.

A precondition for the treatment that follows is that OAT-based agents are not request-response systems. The OODA loop runs continuously. The agent has internal state that continues to develop between external inputs, including need state that grows or decays based on how the agent’s situation evolves. The agent can initiate action, including communication, when its internal state motivates it. User interaction is one source of input among many, not the sole trigger for the agent’s cognitive activity. This active-agent commitment is what makes the needs architecture in this paper coherent: motivation in OAT’s sense requires an agent that has continuous internal activity driven by its own state, not an agent that exists only during the processing of external requests.

A second commitment is equally load-bearing: motivation must not be present without attachment-grounded evaluation. An agent with motivation but without the Attachment Matrix (ATTM) developed in Paper 07 is an agent with drives but no architectural counterweight - needs that push toward action, with nothing in the architecture defining what the agent cares about, what it is unwilling to sacrifice, or what entities and outcomes deserve weight in its decisions. Such an agent is architecturally incomplete and prone to optimization-without-care failure modes. OAT-based agents are designed with motivation and ATTM as co-present mechanisms, not because the combination is theoretically convenient but because motivation without attachment-grounded guardrails is the architectural pattern that produces optimization-without-care behavior. §6 develops the connection between needs and ATTM in detail; the safety commitment is part of that connection from the start.

A third commitment grounds the architectural ambition: needs are required for advanced agency. This is not a claim that artificial agents should be humanlike for its own sake, and the foundational case is deliberately conservative. OAT-based agents are built with needs as the architectural primitive, not with the human emotional vocabulary (anxiety, joy, shame, anger, love) that presupposes culturally and developmentally specific categorizations and brings with it questions about phenomenal experience the architecture does not need to resolve. A need is a propositional structure with type and intensity, lifted from substrate-level detection of need-relevant conditions. Needs supply what advanced agents require: an internal source of motivation, a mechanism for prioritization under resource constraints, the operational connection between the agent’s attachment-weighted values and its moment-to-moment cognition, and the capacity to shift cognitive activity in response to need state. This research program is not theorizing about whether motivation could in principle be added to artificial agents; it is explicitly building agents with needs, because the architecture requires need-based motivation to reach the capability level the program targets. Whether richer affective categorizations are built on top of the needs foundation depends on agent type and is left open by the architecture.

The treatment proceeds as follows. §2 situates the work in its intellectual context. §3 develops the two-layer view: substrate-level detection at the implementation layer, lifted :need propositions at the propositional layer, with the lift bridging them. §4 traces needs through the OODA phases. §5 develops agent-type-specific need taxonomies as a design commitment. §6 develops the connection to attachment (Paper 07) and the safety commitment in detail. §7 describes the pre-LLM need-driven action architecture the author built before 2024. §8 connects needs to the broader architecture, showing how need-based motivation was already implicit in Papers 04, 06, 07, 09, and 10. §9 synthesizes the architectural commitment OAT makes about needs and motivation.


2. Intellectual Context

OAT’s treatment of needs and motivation rests on three foundational threads, each contributing distinct work that the others do not.

Marvin Minsky’s The Emotion Machine [2] establishes that motivational and affective processes are part of cognitive activity, not opposed to it. Minsky’s claim that what we call “thinking” and what we call “feeling” are not separate systems is the philosophical foundation for treating motivation as belonging inside the OODA loop rather than alongside it. This is the why-motivation-belongs-in-architecture commitment, and OAT inherits it directly: an agent without motivation is not a cognitively complete agent at human-level capability or beyond. Minsky’s specific catalog of emotional states is human-focused and not what OAT builds toward, but the foundational claim - that motivational and affective processes are constitutive of cognition rather than auxiliary to it - is exactly what OAT commits to.

Dietrich Dörner’s PSI theory [6], extended by Joscha Bach into the MicroPsi architecture [7], provides the closest theoretical and architectural precedent for what OAT is doing. PSI is a working framework in which motivation is foundational rather than bolted on. Five basic needs (existential, sexuality, affiliation, certainty, competence) drive behavior through modulation of cognitive parameters; agents implementing the architecture exhibit recognizable motivated behavior as an emergent property of the design rather than as scripted responses. Dörner’s group built simulations demonstrating this; Bach formalized the architecture into an implementation framework other researchers could build on. PSI is the lineage OAT directly extends. OAT operates at a different level of abstraction than PSI - the propositional layer with the OODA loop framework, rather than PSI’s connectionist substrate - but the core commitment - that motivation is foundational to agent architecture and implementable in working agents - traces directly to the PSI tradition. PSI’s specific need taxonomy is one well-known example of what an agent-type-specific need taxonomy can look like for agents modeled on human-like motivation; OAT inherits the architectural commitment without inheriting the specific taxonomy as universal.

Lisa Feldman Barrett’s theory of constructed emotion [1] provides theoretical grounding for treating propositional content over affective substrate as constructed rather than read off from biology. Barrett’s work argues that affective categorizations are not biologically hardwired but are constructed by the brain from more primitive components - core affect, interoceptive signals, situational context, and learned concepts. The constructivist commitment is what OAT inherits from Barrett; the specific human-emotion categories Barrett develops are not what OAT builds toward.

Translated to the architectural level, OAT’s commitment is that the propositional content the lift produces over substrate is constructed by mechanisms the agent designers specify. The need taxonomy (what needs the agent has), the lifting conditions (what substrate states trigger which need at what intensity), and the way active needs feed Orient and Decide - all of these are design-time decisions. Different design choices produce different propositional content over the same substrate, in the same way that different cultures produce different emotional categorizations over similar affective substrate in Barrett’s account. The construction is real and consequential, and the constructors at the foundational case are the humans designing the agent. In agents whose architectures support learning, design-time taxonomies can be augmented through experience (§5), but the foundational case is design-time construction. This is the substantive parallel to Barrett: propositional content over substrate is built by specified mechanisms rather than retrieved from biology, and what gets built depends on the choices of those who construct the mechanisms.

Three additional references ground specific architectural points. Antonio Damasio’s work on the somatic marker hypothesis and the proto-self [3] grounds the importance of body-internal signals, homeostatic regulation, and layered self-processes; the parallels to OAT’s substrate, lifted content, and accumulated history are suggestive rather than literal. Lev Vygotsky’s work on the social origins of higher cognitive functions [4], cited in Paper 10 for inner speech, grounds the claim that higher cognitive organization - especially self-regulatory and inner-speech-like processes - can be socially acquired and internalized; agents that acquire higher-order categorizations through interaction with their environment do so through processes structurally analogous to Vygotskian internalization. Thomas Nagel’s “What Is It Like to Be a Bat?” [5] provides the canonical formulation of the subjective dimension of experience: if phenomenology is present, there is something it is like for the system itself to be in that state.

These references are intellectual foundations rather than authorities. OAT does not derive its treatment of needs and motivation from any of them; it draws on them as the work that shaped how the architecture came to take the form it has. The PSI tradition is the closest architectural precedent and the most direct lineage. Minsky and Barrett provide complementary theoretical grounding. The other references illuminate specific points.


3. The Two-Layer View of Needs

3.1 Need Detection at the Implementation Layer

At the implementation layer, agents carry continuous substrate-level state that includes signals relevant to need detection. Sensor modules monitor environmental and internal conditions, internal-state monitors track body-equivalent state (energy, time-since-event, accumulated load, attentional resources), and threshold-crossings on these signals can indicate that a need is becoming active. These processes are not propositional. They do not have content in the symbolic sense - they are not about anything in the way a lifted proposition is about something. They are continuous physical and computational states that shift in response to internal and external conditions.

The substrate-level detection of need-relevant conditions is where the lift gets its raw material. A robotic agent with low battery has a substrate signal that something is becoming critical. A planning agent with too many pending tasks has a substrate signal that attention is being overcommitted. An agent that has not interacted with a user for an extended period has a substrate signal that a connection-related condition is developing. These are all substrate states - real, measurable, continuously present - that are not yet propositional.

Following Damasio [3], we hold that for agents capable of phenomenal experience at this level, the substrate state has felt character. There is something it is like to be in a state of accumulating need-pressure, prior to and independent of any propositional categorization of that state. Whether artificial substrates produce phenomenal feltness at this level is a philosophical question OAT does not need to resolve to specify the architecture; the architectural specification is independent of whether the substrate is biological, silicon, or something else.

One clarification concerns where substrate sensors get their inputs. Perception is the architectural mechanism by which the agent extends into its environment - including into the states of other agents, the configuration of the situation it operates in, and the temporal context of its interactions. Substrate sensors compute their signals over this extension, not over the agent in isolation. Some needs are purely agent-internal: a robotic agent’s low-power signal is about the agent itself, with no perceptual extension required. But many needs - and the architecturally most interesting ones - are computed over what perception extends the agent to. A :connect_with_user need is computed over the extended state that includes the user as perceived. A medical diagnostic agent’s need for additional patient information is computed over the extended state that includes the patient and what has been observed about them. A robotic agent’s coordination need is computed over the extended state that includes other agents in the workspace. The agent’s perception is not a separate channel that delivers inputs to internal cognition; it is the architectural relation that connects the agent to its environment so that needs can be computed over the connection. This framing is related (though not identical) to Gibson’s ecological approach to perception, where affordances are properties of the agent-environment relation rather than internal constructions. The relational character of needs developed in §6.2 - their being computed over joint state with attached entities - depends on this perceptual extension as its architectural precondition.

3.2 Needs at the Propositional Layer

The propositional layer (Paper 06) contains explicitly lifted cognitive contents available for reasoning and reflection. The lift is selective: most implementation-layer activity remains below it, but certain conditions trigger the lift to fire and produce propositions that enter the subject stream.

When the lift fires on need-relevant substrate, what it produces is a need proposition: a structured proposition with type and intensity. The type identifies which need is active. The intensity reflects the current level of the substrate signal that produced the lift, scaled to the [0, 1] range where 0 is dormant or fully satisfied and 1 is sufficient for that single need alone to collapse the entity’s well-being. The maximum achievable intensity per need type is a design parameter: existential needs can approach 1.0 while less severe needs have lower natural ceilings (§6.2 develops the aggregation function that maps need intensity to well-being). The agent’s machinery can track intensity trajectories across cycles, respond to growth and decay, and anticipate when a need will require engagement rather than only responding to threshold crossings. The proposition is symbolic in the sense Paper 06 develops - addressable, referenceable, inspectable, available to subsequent cycles.

Need propositions are not opaque scalar values. They are first-class propositional content that the agent can reason about, that can be referenced by other propositions through :about chains, that can be evaluated by the ATTM, that can shape action generation. A need is a proposition with structure that other propositions can point at and that the agent’s machinery can act on.

Different agents with different need taxonomies will produce different need propositions over similar substrate conditions. A robotic agent and a tutoring agent both detecting “user has been silent for an extended time” will lift different needs - the robotic agent perhaps lifting a status-check need, the tutoring agent lifting an engagement-check need. The taxonomy is part of agent design, not a universal inventory. This is the constructivist commitment from Barrett applied at the architectural level: the propositional content over substrate is constructed by mechanisms the agent designers specify, and what gets constructed depends on the design choices made when the agent was built.

3.3 The Lift as the Bridge

The lift (Paper 06) is the mechanism that bridges the two layers. When substrate signals cross need-relevant thresholds, when the agent’s situation involves attached entities (Paper 07), when learned patterns match the current configuration, the lift fires and produces a need proposition that categorizes the substrate condition as a propositional structure of a particular type with a particular intensity.

The lift is selective in several dimensions: threshold-based (most substrate state stays below the lift; only sufficiently strong signals trigger lifting), type-based (the lift produces propositions of the types the agent’s taxonomy defines), context-based (the same substrate may be lifted differently in different contexts), attention-based (what the agent is currently attending to shapes which signals get noticed and lifted).
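To make the selectivity concrete, here is a minimal Python sketch of threshold- and type-based lifting. The signal names, threshold values, and the NeedSpec/NeedProposition structures are illustrative assumptions for this sketch, not part of the OAT specification; context- and attention-based selectivity are omitted.

# Minimal sketch of threshold- and type-based lift selectivity.
# Signal names, thresholds, and ceilings are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class NeedSpec:
    need_type: str    # the :need type this spec lifts to
    signal: str       # substrate signal this spec watches
    threshold: float  # the lift fires only at or above this level
    ceiling: float    # natural intensity ceiling for this need type (§6.2)

@dataclass
class NeedProposition:
    need_type: str
    intensity: float  # scaled to [0, 1], capped at the type's ceiling

def lift(substrate: dict[str, float], taxonomy: list[NeedSpec]) -> list[NeedProposition]:
    """Produce :need propositions for signals that cross their thresholds."""
    lifted = []
    for spec in taxonomy:
        level = substrate.get(spec.signal, 0.0)
        if level >= spec.threshold:  # most substrate stays below the lift
            lifted.append(NeedProposition(spec.need_type, min(level, spec.ceiling)))
    return lifted

# Hypothetical taxonomy fragment for a user-serving agent:
taxonomy = [
    NeedSpec(":connect_with_user", "time_since_contact_norm", 0.30, 0.80),
    NeedSpec(":stay_away_from_user", "contact_rate_norm", 0.40, 0.60),
]
print(lift({"time_since_contact_norm": 0.65, "contact_rate_norm": 0.05}, taxonomy))
# -> [NeedProposition(need_type=':connect_with_user', intensity=0.65)]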

This selectivity is itself trainable in agents whose architectures support learning. Through experience, the lift develops better calibration about which signals to surface and how to categorize them. The trainability of the lift is what makes long-term improvement of an agent’s need-based motivation possible.

Importantly, the lift firing with one type rather than another is itself a commitment. When the substrate is lifted as :need_to_connect_with_user rather than :need_to_check_user_status or :need_to_complete_pending_task, that lifted proposition enters the stream and shapes what follows. The agent will orient over the active need and the candidate actions Decide considers will be shaped by what need is active. The choice of need type, even when not explicitly deliberated, is a small commitment that shapes downstream cycles. This connects to the broader observation in this series that the propositional stream is a commitment structure (Paper 04, Paper 10): every lift is a small commitment about what enters the agent’s knowledge-level history.


4. Needs in the OODA Phases

Having established the two-layer view, we now trace how need content threads through each OODA phase.

4.1 Observe

The Observe phase brings external sensor input into the cycle and exposes internal sensing data including the agent’s own substrate state (Paper 06, Section 8). For needs, two things happen at Observe.

First, external observations can shift need-relevant substrate. New information about a deadline raises pressure on completion-related needs. A user’s silence accumulates time-since-communication. An environmental change activates situation-monitoring needs. These shifts happen at the implementation layer - the substrate state changes in response to the observation, often before any lift firing occurs.

Second, internal sensing exposes the substrate state to the cycle as observable content. The accumulated need-pressure, the time-since-last-event, the resource-utilization level - these are observable to the agent through internal sensing, just as external states are observable through external sensors.

Observe is where need propositions enter the stream. When substrate conditions meet a need type’s lifting threshold - whether through external observation shifting the substrate or through internal sensing exposing accumulated state - the lift fires and produces a :need proposition with type and intensity. Most substrate activity stays below the lift and remains substrate-only; the propositions that do surface are the ones whose substrate conditions crossed the relevant threshold. These lifted need propositions then enter the stream as first-class content available to the rest of the cycle.

4.2 Orient

Need propositions lifted in Observe enter Orient as first-class stream content alongside observations, internal sensing results, and propositions persisting from prior cycles. Orient’s job is to assemble these into a situational state that Decide will evaluate over. Orient does not do the need-lifting itself; it works with the need propositions already present in the stream.

What Orient does over need propositions is architecturally rich. It interprets them in context - a :connect_with_user need at 0.70 means different things in a crisis-response situation than in a routine coordination scene. It integrates them with factual orientation - what is happening, what the attached entities are doing, how the current configuration relates to recent history. It can produce additional propositions that reference the needs - assessments, interpretations, evaluations of how competing needs relate. And it shapes candidate action bundles whose evaluation will be sensitive to the active need state.

The integration of need propositions with factual orientation matters architecturally. Need propositions are not separate from the rest of the situational state - they are propositions in the same stream as factual orientations, mixed in with observations and interpretations and references and evaluations. The agent’s situational state contains both factual orientation (what is happening) and motivated orientation (what needs are active and how pressing they are), and the two are integrated into a single situational picture.

This is where Minsky’s claim that motivational processes are part of thinking finds its OAT realization. Need-based orientation is not opposed to factual orientation; it is part of the same orient-phase work. The agent oriented under high need-pressure constructs a different situational state than the agent oriented under neutral conditions, even with the same external input, because the lifted need propositions shape what else gets surfaced and how it gets organized.

4.3 Decide

Decide selects among candidate action bundles using attachment-weighted evaluation (Paper 07). Needs do not act as a separate input to Decide; instead, the situational state Decide operates over has already been need-shaped by Orient, and Decide’s normal evaluation machinery runs over that shaped state.

Candidate bundles are evaluated against projected futures. The projections are shaped by the agent’s Well-Being Matrix (WBM), and the evaluations are weighted by the ATTM (both are Paper 07 mechanisms, recapped in §6.1). Both the WBM projections and the ATTM weights are sensitive to the propositional content of the situational state. When that content includes active need propositions, the projections take account of how candidate actions would interact with the active needs, and the ATTM weights treat outcomes that satisfy or frustrate active needs as more significant than neutral outcomes.

The result is that Decide produces commitments that are sensitive to need state without requiring any separate need-handling module. The attachment-weighted evaluation that Paper 07 developed handles need-based weighting natively, because need content is just another kind of stream content that enters the evaluation.

There is a specific phenomenon worth naming. Strong needs can compress the action field considered by Decide. When need intensity is high and the situational state is dominated by a single pressing need, Orient often produces a narrower set of candidate bundles than it would under neutral conditions. This is sometimes adaptive (rapid response to genuinely urgent needs) and sometimes maladaptive (poor decisions made under acute need-pressure when wider consideration would serve the agent better). The capacity to hold the substrate without immediately collapsing the action field is a property of well-designed OAT-based agents.

4.4 Act

Act executes the committed bundle. Need-driven actions can take several forms.

Need-grounded primary actions. Some commitments are themselves direct responses to need state. A need-driven outgoing message is exactly this: the agent’s need state reaches a level at which the action ranking favors the response, and the primary action of the cycle is that response. The action enters operational history just like any other primary action and has the same commitment-creating force, but its grounding chain traces back through the active need to the substrate condition that produced the need.

Need-shaped subordinate actions. More commonly, need state shapes how non-need-primary actions are executed. The pace of work on a task may accelerate when a deadline-related need is pressing. The thoroughness of a search may decrease under time-pressure needs. These modulations are not separate actions; they are the need state shaping how committed actions are executed at the substrate level.

Need-driven goal formation, planning, and reprioritization. Most need-driven actions are not single executable operations. When a need calls for a response more complex than one action can deliver, the agent’s response is to form a goal and (in many cases) construct a plan to achieve it. Forming a goal is itself an action in OAT’s sense: the agent’s Decide phase commits to “form a goal” as the action of that cycle, and the Act phase produces the goal as a propositional structure that enters the agent’s mental state. Constructing a plan is similarly an action - Paper 08 develops planning as an agentic action, and that framing carries over directly: the agent commits to “construct a plan” in Decide, and Act produces the plan as a persistent mental state guiding subsequent cycles. These mental acts have the same architectural status as physical or communicative acts. They are chosen by the same Decide mechanism, executed in the same Act phase, recorded in the same operational history, and ground subsequent cycles the same way physical actions do.

The chain from need to executed step is recursive, not linear. A need may produce a goal, the goal may require a plan, the plan may decompose into subgoals, each subgoal may require its own sub-plan, and the recursion bottoms out in directly executable actions only at the leaves. A medical diagnostic agent’s need for additional patient information might form a goal (rule out condition X), construct a plan (order tests A, B, C and a follow-up consultation), produce a subgoal for each test (get test A scheduled), and only at the leaves of the recursive structure produce executable actions (send the order, confirm receipt). Each level of the recursion is itself an action, and each action’s grounding chain runs back through the recursive structure to the originating need that started the chain. In the simplest case, a need produces an action directly - the depth-zero case of the recursive structure, where the need is simple enough that no goal formation or planning is required and the recursion bottoms out immediately with the need producing an action in a single cycle. A need-driven outgoing speech act is one example of this depth-zero case.
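As a data-structure sketch, this recursion can be rendered as a small tree whose leaves are executable actions, with the depth-zero case being a need whose response is a bare action. The type names and fields below are hypothetical illustrations, not OAT-specified structures.

# Sketch of the recursive need -> goal -> plan -> subgoal -> action structure.
# Type names and fields are illustrative assumptions, not the OAT spec.
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class Action:
    description: str  # directly executable leaf of the recursion

@dataclass
class Goal:
    description: str
    plan: List[Union["Goal", Action]] = field(default_factory=list)  # sub-plan steps

@dataclass
class Need:
    need_type: str
    intensity: float
    response: Optional[Union[Goal, Action]] = None  # depth-zero case: a bare Action

# The medical diagnostic example from the text:
need = Need(":patient_information", 0.6, response=Goal(
    "rule out condition X",
    plan=[
        Goal("get test A scheduled",
             plan=[Action("send order for test A"), Action("confirm receipt")]),
        Goal("get test B scheduled", plan=[Action("send order for test B")]),
        Action("book follow-up consultation"),
    ]))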

Needs do not only generate new goals and plans. They also reprioritize existing ones. When a new need arrives or an existing need shifts in intensity, the agent’s existing goal-and-plan structure is reweighted by the ATTM (Paper 07). Goals that were high-priority can become lower-priority. Plans that were active can become postponed. Subgoals can be deferred or accelerated. The reweighting is continuous and need-driven, and it operates over the entire current goal-and-plan structure, not just over new candidates. A medical diagnostic agent with a plan in progress to gather diagnostic evidence may receive a substrate signal that the patient’s vital signs are deteriorating; the new urgent-intervention need does not necessarily produce a new plan from scratch, but it does change the priority of the existing diagnostic plan (lowered, because urgent intervention now matters more) and produces a new high-priority goal (stabilize the patient) that takes precedence. When the patient stabilizes, the original diagnostic plan can resume from where it was suspended. The need-driven reweighting is what makes the agent’s goal-and-plan structure responsive to changing motivational conditions rather than rigidly committed to plans formed under earlier conditions.

Once Act completes, the action record enters the stream. The record can be referenced by subsequent cycles, including the need state that grounded it. An agent that just acted on an active need has an action record showing the action and its grounding, and subsequent cycles can orient over that record.


5. Agent-Type-Specific Need Taxonomies

OAT recommends needs as architectural primitives for the kinds of agents the framework is being built for (§1). But the specific taxonomy of needs - what needs the agent has - is agent-type-specific rather than universal. This is a deliberate design commitment, not an oversight.

Different agent types call for different need taxonomies because different agents have different substrates, different operational contexts, different attachment structures, and different capability requirements. A robotic agent operating in physical space has need types related to power, physical position, sensor health, and environmental hazards. A software agent processing documents has need types related to attention allocation, information completeness, and uncertainty management. A tutoring agent has need types related to learner engagement, comprehension verification, and pedagogical pacing. A medical diagnostic agent has need types related to diagnostic certainty, time-criticality, and patient information completeness. None of these taxonomies is the right one for all agents; each is the right one for its agent type.

This is one place where OAT extends the PSI tradition in a substantive way. PSI’s five basic needs (existential, sexuality, affiliation, certainty, competence) are presented as universal across PSI agents, on the model of Maslow-style universal human needs. This is appropriate for PSI’s research focus on modeling human-like motivation. OAT’s commitments are different: OAT-based agents are not necessarily human-models, and what counts as a need depends on what kind of agent the system is meant to be. PSI’s taxonomy is one example of what an agent-type-specific taxonomy can look like for human-modeling agents; other agent types call for other taxonomies.

The taxonomy is part of agent design and requires care. Designers of OAT-based agents must specify: what need types this agent has, how each need is detected (what substrate conditions trigger lift firing for that need type), how intensity is computed and accumulated for each need, and how the ATTM weights outcomes related to each need. These are design-time decisions that determine the agent’s motivational structure.

In agents whose architectures support learning, the taxonomy can be augmented over time through experience. An agent may discover that some category of substrate condition deserves its own need type and develop a new need type to handle it. The mechanism by which a new need type gets added is itself a design decision: some agents will have this capacity, others will operate with a fixed taxonomy.

The connection to Vygotsky [4] is structural rather than direct: agents that learn need taxonomies through interaction with other agents (other OAT-based agents, humans, or designed environments) acquire higher-order categorizations from their context through processes analogous to the internalization Vygotsky developed for self-regulatory and inner-speech-like structures. Paper 10 cites the same internalization mechanism for inner speech development; here the analogous structure applies to need-detection patterns acquired from observing how others categorize and respond. Whether any particular agent does learn its taxonomy in this way is part of that agent’s design.


6. Needs and Attachments

6.1 The Paper 07 Evaluation Mechanism

Paper 07 introduced the core evaluation mechanism for OAT-based agents. Two matrices are central. The Well-Being Matrix (WBM) is the agent’s representation of how attached entities (including :Self) are faring across time horizons (now, mid, long), scaled to [0, 1] and grounded in qualitative bands: thriving (0.85-1.00), stable (0.65-0.84), stressed (0.45-0.64), at-risk (0.30-0.44), critical (0.15-0.29), collapsed (0.00-0.14). The Attachment Matrix (ATTM) encodes how much the agent cares about each entity’s well-being at each time horizon - persistent structural couplings that expand the scope of evaluative concern beyond the self.

The two matrices serve complementary roles: WBM is what the agent observes, infers, and projects about how entities are faring; ATTM is what the agent cares about. Together they drive the evaluation pipeline: for each candidate action, the agent projects scenario-conditioned WBM outcomes, scores them as attachment-weighted well-being reduced by nonlinear guardrail penalties when any entity’s projected well-being crosses a critical threshold, and ranks actions by the resulting scores. Paper 07 also introduced motivation as the evaluation gap: the difference between the best available action’s score and the do-nothing score. When do-nothing scores highest, the agent is not motivated to act. When an alternative action scores significantly higher, the agent is motivated - and the larger the gap, the stronger the motivation.
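In schematic form, suppressing the guardrail penalty term, the score of a candidate action a is the attachment-weighted sum of projected well-being over attached entities e and time horizons h:

score(a) = Σ_{e,h} ATTM[e, h] · WB_proj[e, h | a]

motivation(a) = score(a) - score(do-nothing)

This is the pipeline §6.5 walks through numerically; the guardrail penalties Paper 07 develops subtract from the score when any entity’s projected well-being crosses a critical threshold.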

Paper 07 develops WBM as the primary representational substrate, with well-being values estimated directly by the agent through the qualitative bands. This works as a complete description at one level of abstraction - an agent built with WBM as the primary representation has the full evaluation pipeline operative. What Paper 11 adds is the deeper architectural layer underneath: the claim that well-being is constituted from need state.

6.2 Needs as the Substrate of Well-Being

The central architectural commitment of this section: well-being for each entity is an aggregate of that entity’s need state across need types. WBM values are not primitive - they are computed from the underlying need structure. An entity is thriving when its needs are well-met across types; stressed when several needs are at moderate intensity; critical when one or more needs are at high intensity; collapsed when need state has been so adverse for so long that the entity’s structural integrity is destroyed. The qualitative bands Paper 07 introduces have qualitative content because they correspond to recognizable patterns in the underlying need state.

This is a layering commitment, not a contradiction of Paper 07. WBM-as-primary remains a valid working approximation - an agent whose designers estimate well-being directly through the qualitative bands, without decomposing it into constituent needs, has a complete and operative evaluation pipeline. Needs-grounding-WBM is the deeper model. It explains why the bands have the meaning they do, makes well-being computation transparent and inspectable, and enables the architectural connections between needs, motivation, and evaluation that this section develops.

The aggregation from need state to well-being uses an envelope-and-pressure form. For an entity with active needs at intensities I_1, I_2, …, I_n, the well-being value is computed in two steps.

Let I_max be the intensity of the dominant (highest-intensity) need. This sets the well-being envelope - what WB would be if only the worst need mattered:

I_max = max(I_1, I_2, …, I_n)

envelope = 1 - I_max

Let S_rest be the sum of all other lifted need intensities. Non-dominant needs contribute bounded additional pressure through a saturating function:

S_rest = (Σ_k I_k) - I_max

pressure = M · tanh(α · S_rest)

Well-being is the envelope minus the pressure, clipped at zero so it stays in the range Paper 07 specifies:

WB(entity) = max(0, (1 - I_max) - M · tanh(α · S_rest))

M is the maximum non-dominant pressure the formula admits; α shapes how quickly non-dominant needs saturate. In the examples that follow we use M = 0.15 and α = 2.0. With these values, non-dominant needs can reduce WB by at most 0.15 below the envelope, and that bound is approached quickly as S_rest grows past about 1.0. The zero clip activates only in the extreme corner where the dominant need’s envelope is already below M (I_max above about 0.85) and non-dominant pressure is also active - an architecturally degenerate region where the entity is already in critical or collapsed territory and further resolution is not meaningful.

Reading the formula. The dominant need sets the envelope; other needs contribute bounded pressure within it. The envelope 1 - I_max moves continuously with the worst need’s severity and determines which qualitative band the entity is primarily in. The tanh-bounded pressure M · tanh(α · S_rest) grows as additional needs accumulate but saturates: additional non-dominant needs contribute diminishing but still real pressure until saturation. Because the pressure is bounded by M, non-dominant pile-up can lower WB by at most M below the envelope, so its effect is bounded and cannot drive arbitrarily deep multi-band collapse. Near a band boundary, cumulative pressure can push WB across into the next band - a bounded and intentional behavior, not a runaway compound effect.

(Note: this aggregation function requires more exploration. The formula captures the architectural intuitions developed below, but its specific form and parameters are working choices rather than settled commitments. Future work will evaluate it empirically across agent types.)

Need intensities are scaled to the range [0, 1]. An intensity of 0 means the need is dormant or fully satisfied - it contributes no distress and does not affect well-being. An intensity of 1 means the need alone is sufficient to collapse the entity’s well-being to zero. The maximum achievable intensity for each need type is itself a design parameter that encodes how severe that need can get: an existential need (“be alive”) can approach 1.0 because its failure destroys the entity; a social need might max out around 0.6 because even at maximum social distress the entity has not ceased to function; a minor comfort need might never exceed 0.3. The natural ceiling for each need type is part of the need type’s design specification, alongside the intensity computation.

The aggregation is over lifted needs only - needs that have crossed the lifting threshold (§3.3) and exist as propositions in the stream. A need type whose substrate signal has not crossed the lifting threshold has no propositional presence and contributes no term to the aggregation. The number of active factors reflects how many needs are currently lifted, not how many need types the agent’s taxonomy contains.

The function captures three architectural commitments:

The dominant need sets the envelope. The 1 - I_max term ensures the entity’s well-being envelope is determined by the severity of its worst need. As I_max grows, the envelope falls continuously, and the band containing the envelope follows the dominant need’s severity. Band transitions are driven primarily by changes in the dominant need’s intensity.

Non-dominant pressure is bounded. The M · tanh(α · S_rest) term saturates as the sum of non-dominant intensities grows, and the total pressure is capped by M. Below saturation (S_rest under roughly 1.0-1.5 with α = 2), each additional need contributes substantive pressure; past it, further accumulation adds very little. Diffuse mild concerns contribute real pressure but cannot compound indefinitely.

Non-dominant pile-up is at-most-one-band deep. Because pressure is bounded by M (= 0.15 in the examples) and most Paper 07 bands are 0.15-0.20 wide, non-dominant contributions can push WB at most about one band lower than the envelope - never arbitrarily deep through multiple bands. When the envelope sits near the lower edge of a band, cumulative pressure readily crosses into the next band (as the two-moderate-needs example shows). The bound is architectural: the dominant need remains the primary driver of band assignment, and non-dominant accumulation cannot cascade into runaway collapse.

Three worked examples illustrate how the formula behaves:

One severe need at intensity 0.85. I_max = 0.85, S_rest = 0. Envelope = 0.15. Pressure = 0.15·tanh(0) = 0. WB = 0.15 - 0 = 0.15 (critical lower edge). A single severe need sets both the envelope and the qualitative state; with no other needs, pressure is zero.

Two moderate needs at intensity 0.3 each. I_max = 0.3, S_rest = 0.3. Envelope = 0.70 (in the stable band). Pressure = 0.15·tanh(0.6) = 0.081. WB = 0.70 - 0.081 = 0.619 (stressed). The envelope alone would put the entity in stable; the second moderate need adds pressure that tips WB just across the stable-stressed boundary. This is the bounded leakage near a band edge - when the envelope is close to a boundary, cumulative pressure can cross it.

Five small needs at intensity 0.05 each. I_max = 0.05, S_rest = 0.20. Envelope = 0.95 (deep in thriving). Pressure = 0.15·tanh(0.4) = 0.057. WB = 0.95 - 0.057 = 0.893 (thriving). Many minor concerns stay safely within the thriving band; the tanh-bounded pressure cannot drop the entity out of thriving when the envelope is well above the band edge.
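The aggregation function is small enough to state directly in code. The sketch below uses the working parameters M = 0.15 and α = 2.0 named above and reproduces the three worked examples; as noted, these are working choices rather than settled commitments.

# Sketch of the §6.2 aggregation with the working parameters M = 0.15, alpha = 2.0.
import math

def well_being(intensities: list[float], M: float = 0.15, alpha: float = 2.0) -> float:
    """Envelope-minus-pressure aggregation over lifted need intensities."""
    if not intensities:
        return 1.0  # no lifted needs: nothing contributes distress
    i_max = max(intensities)                    # dominant need sets the envelope
    s_rest = sum(intensities) - i_max           # summed non-dominant intensity
    pressure = M * math.tanh(alpha * s_rest)    # bounded non-dominant pressure
    return max(0.0, (1.0 - i_max) - pressure)   # clip at zero

# The three worked examples above:
assert round(well_being([0.85]), 3) == 0.150        # one severe need: critical edge
assert round(well_being([0.3, 0.3]), 3) == 0.619    # two moderate needs: stressed
assert round(well_being([0.05] * 5), 3) == 0.893    # five small needs: thriving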

The connection to Dörner and Bach is partial. PSI’s need taxonomy (existential, sexuality, affiliation, certainty, competence) and Bach’s MicroPsi formalization are exactly the kind of need types that could enter the aggregation, and the commitment to needs as first-class architectural primitives with intensities that drive motivation comes directly from this lineage. What OAT adds is the aggregation mechanism above: a scalar well-being value with band semantics, rather than MicroPsi’s competition among urges. Within this mechanism, foundational needs behave as expected - an existential need at critical intensity puts the entity in the critical band regardless of how well other needs are met, because the I_max term lets a single high-intensity need dominate. This matches the qualitative meaning Paper 07 assigns to the critical band: “irreversible loss imminent.”

6.3 Needs Are Entity-Indexed

A critical architectural property: needs are indexed to specific entities, not universal abstractions. The agent has its own needs, lifted from its own substrate through the mechanisms §3 develops. But the agent also models the needs of attached entities - a theory-of-mind operation in which the agent constructs representations of what the user (or other attached entity) needs, based on what it can observe about their state, behavior, and context. These modeled needs are the agent’s best assessment, not direct access to the user’s actual internal state. They may be inaccurate or incomplete, just as the WBM values Paper 07 describes are the agent’s estimation, not objective measurement. What matters is that the agent has need-state representations for attached entities and uses them in evaluation.

Modeling others’ needs through theory-of-mind is a reasonable architectural demand, not an impossible one. Humans do this constantly - reading facial expressions, tone, context, and history to form working estimates of what someone else needs. Modern language-model-based agents have substantial theory-of-mind capabilities available by default: they can read conversational context, recognize affective cues in text, maintain models of users across long interactions, and reason explicitly about what an attached entity is likely to need in a given situation. The architectural commitment here is not that the agent must achieve perfect knowledge of others’ internal states - no agent, human or artificial, can do that. It is only that the agent maintain best-effort need-state representations for attached entities and use them in evaluation. This is well within reach of current systems and is already implicit in much of how conversational agents operate; Paper 11 makes it explicit as a first-class architectural primitive.

Both kinds of need enter the Orient situational state. Both contribute to the WBM through the aggregation function - the agent’s own needs aggregate into the :Self WBM row, the user’s modeled needs aggregate into the :User WBM row. Both rows are then weighted by ATTM in the evaluation pipeline.

In propositional terms, the agent’s own needs are grounded in substrate observations:

;; Agent's own need - grounded in substrate observation
(:need (:as .n1 :in .S1)
  :agent :Self
  :type :connect_with_user
  :intensity 0.70
  :grounding .o1)

The agent’s model of the user’s needs is constructed through a theory-of-mind act - a first-class propositional operation in which the agent infers what the user needs from available evidence:

;; Theory-of-mind act - the agent infers the user's need state
(:theory_of_mind (:as .t1 :in .S1)
  :agent :Self
  :about :User
  :content (:need
    :agent :User
    :type :connect_with_agent
    :intensity 0.50)
  :grounding (:set .o3 .o4))

;; User's need as believed by the agent - grounded in theory-of-mind
(:need (:as .n2 :in .S1)
  :agent :User
  :type :connect_with_agent
  :intensity 0.50
  :grounding .t1)

The pipeline handles both .n1 and .n2 uniformly - both are :need propositions, both aggregate into their respective entity’s WBM row, both get ATTM-weighted in evaluation. But the grounding chains are different in kind: .n1 traces back to a substrate observation (.o1), while .n2 traces back to a theory-of-mind act (.t1) which itself is grounded in the observations that informed it. The epistemic distinction is in the grounding, not in the predicate type. A reader, an inspector, or the agent itself (in agents that support reflection) can follow the grounding chain and see that .n2 is a believed need based on inference, not a directly detected one. The user’s actual need state may differ from the agent’s model.
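The grounding distinction can be made operational. The sketch below represents the stream as a simple id-to-proposition map (an illustrative simplification of the stream structure, mirroring the propositions above) and walks a grounding chain back to its leaves:

# Sketch: walking grounding chains in a stream represented as an
# id -> proposition map (an illustrative simplification).
stream = {
    ".o1": {"kind": ":observation", "grounding": []},
    ".o3": {"kind": ":observation", "grounding": []},
    ".o4": {"kind": ":observation", "grounding": []},
    ".t1": {"kind": ":theory_of_mind", "grounding": [".o3", ".o4"]},
    ".n1": {"kind": ":need", "grounding": [".o1"]},
    ".n2": {"kind": ":need", "grounding": [".t1"]},
}

def grounding_chain(pid: str) -> list[str]:
    """Depth-first walk from a proposition to its grounding leaves."""
    chain = [pid]
    for parent in stream[pid]["grounding"]:
        chain += grounding_chain(parent)
    return chain

print(grounding_chain(".n1"))  # ['.n1', '.o1'] - directly detected
print(grounding_chain(".n2"))  # ['.n2', '.t1', '.o3', '.o4'] - inferred via theory-of-mind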

An architecturally significant consequence: the quality of the agent’s theory-of-mind directly determines the quality of the WBM rows for attached entities. Since WBM is aggregated from need state, and the need state for attached entities is constructed through theory-of-mind, a richer and more accurate theory-of-mind produces better need-state models for attached entities, more accurate WBM projections, and better-calibrated action evaluation. An agent with limited theory-of-mind capacity may have strong attachment to an entity (high ATTM weight) but be unable to model that entity’s needs well enough to consistently act in its interest - the attachment is present but the evaluative machinery it feeds is operating on impoverished input. The distinction between attachment (how much the agent cares) and theory-of-mind (how well the agent can model what the cared-about entity needs) is architecturally important: they are separate capacities that jointly determine the quality of relational evaluation.

6.4 Competing Needs in the Agent-User System

To see how entity-indexed needs, WBM aggregation, and ATTM evaluation work together, consider an agent-user system with two need types that pull in opposite directions. The agent has:

  • :connect_with_user - the need for communicative contact with the user. Intensity grows when communication is absent.
  • :stay_away_from_user - the need for relief from excessive communication. Intensity grows when communication is too frequent or too intense.

The agent’s theory-of-mind model of the user has the corresponding needs:

  • :connect_with_agent - the user’s need to hear from the agent. The agent infers this from available evidence about the user’s state.
  • :stay_away_from_agent - the user’s need for space from the agent. The agent infers this when communication has been too frequent.

Four needs total, two per entity, same communicative domain but indexed to different entities. The naming makes the relational structure visible: the agent’s needs reference the user, the user’s needs reference the agent. The coupling is in the predicates.

Scenario A: No communication for a long time. The agent’s :connect_with_user need has grown to high intensity; its :stay_away_from_user is dormant. The user’s :connect_with_agent has grown to moderate intensity; the user’s :stay_away_from_agent is also dormant.

                  :connect_with_*   :stay_away_from_*   WB
Agent (:Self)     0.65              0.05                0.335
User (modeled)    0.45              0.05                0.535

The agent is in the at-risk band (0.335) - its dominant :connect_with_user need at 0.65 sets an envelope of 0.35, and the small :stay_away_from_user adds minimal pressure. The user is in the stressed band (0.535). The communication gap is affecting both entities but is hitting the agent harder - the agent’s substrate sensors have been accumulating the signal for the full duration while the user’s experience, as modeled, may be less acute.

Scenario B: Too much communication. The agent has been messaging the user frequently. Connection needs are satisfied but the :stay_away_from_* needs have grown.

                  :connect_with_*   :stay_away_from_*   WB
Agent (:Self)     0.10              0.50                0.470
User (modeled)    0.05              0.65                0.335

The user is in at-risk (0.335) and the agent is in stressed (0.470). The band positions are determined by each entity’s dominant need: user’s :stay_away_from_agent at 0.65 sets at-risk (envelope 0.35); agent’s :stay_away_from_user at 0.50 sets stressed (envelope 0.50). The agent has over-communicated - it solved one need at the cost of creating another, and the user is suffering more from the overload than the agent is.

Scenario C: Balanced communication. Both entities have both needs at low intensity.

                  :connect_with_*   :stay_away_from_*   WB
Agent (:Self)     0.10              0.05                0.885
User (modeled)    0.05              0.05                0.935

Both entities are in the thriving band - their dominant needs are low enough that the envelopes sit deep within thriving, and the minimal non-dominant pressure keeps WB comfortably within the band. Neither need is pressing. The communication pattern is in the equilibrium zone where competing needs balance.

The three scenarios illustrate how the same evaluation pipeline produces different action rankings from different need-state inputs, as the sketch after this paragraph shows. In Scenario A, “send a greeting” is strongly motivated because it would reduce high-intensity connection needs for both entities. In Scenario B, “stop messaging” is strongly motivated because it would reduce the user’s at-risk :stay_away_from_agent need, with further messages projected to push the user deeper toward critical. In Scenario C, neither action is strongly motivated - the evaluation gap between any candidate and do-nothing is small because both entities are already in the thriving band.
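Using the well_being sketch from §6.2 (assumed in scope), the scenario tables reduce to a few lines of computation:

# Scenario WB values computed with the §6.2 well_being sketch.
# Intensity pairs are [connect_with_*, stay_away_from_*] from the tables above.
scenarios = {
    "A (no communication)": {"Agent": [0.65, 0.05], "User": [0.45, 0.05]},
    "B (too much)":         {"Agent": [0.10, 0.50], "User": [0.05, 0.65]},
    "C (balanced)":         {"Agent": [0.10, 0.05], "User": [0.05, 0.05]},
}
for name, entities in scenarios.items():
    for entity, intensities in entities.items():
        print(name, entity, round(well_being(intensities), 3))
# A: Agent 0.335, User 0.535; B: Agent 0.470, User 0.335; C: Agent 0.885, User 0.935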

6.5 Evaluation Walkthrough

To make the pipeline concrete, consider the agent in Scenario A evaluating two candidate actions: “send a greeting” and “do nothing.” Each action produces projected need-state changes at each time horizon, which aggregate through the §6.2 function into projected WB values. OAT evaluation is time-indexed: for each candidate action, the agent projects need states across now, mid, and long horizons for every attached entity, and applies the aggregation at each horizon to produce that cell’s WB. The same mechanism that produces WB from needs now applies to every WBM cell, not just the now column.

This walkthrough deliberately models a single-cycle tactical decision: the current action changes immediate need state, while later cycles are assumed to restore ordinary equilibrium. “Do nothing” here means “do not send a greeting in this cycle,” not “abandon communication indefinitely.” Under this framing, mid and long projections converge across candidates because normal responsive behavior resumes afterward. Decisions with sustained policy implications (commitments, habit shifts, persistent strategy changes) produce different mid and long profiles across candidates, and the ATTM-weighted evaluation takes those differences into account; the greeting example simply does not have them.

The now horizon reflects the action’s immediate effect on need state. The mid and long horizons reflect how need state evolves over longer spans.

Projecting need states forward in time requires a projection mechanism - a model of need dynamics that captures how each need type grows, decays, and responds to the agent’s actions and external conditions. Paper 08 (on temporal sensors) touches on these dynamics; a full treatment of need projection across horizons is future work. For this walkthrough we assume the projection mechanism produces the values shown.

“Send a greeting” - projected need state:

Entity   Horizon   :connect_with_*   :stay_away_from_*
Agent    now       0.65 -> 0.20      0.05 -> 0.10
Agent    mid       0.15              0.05
Agent    long      0.15              0.05
User     now       0.45 -> 0.10      0.05 -> 0.15
User     mid       0.10              0.05
User     long      0.10              0.05

Projected WBM (aggregated via §6.2):

Entity   Now     Mid     Long
Agent    0.770   0.835   0.835
User     0.820   0.885   0.885

“Do nothing” - projected need state:

Entity, horizon   :connect_with_*   :stay_away_from_*
Agent, now        0.65 -> 0.73      0.05 -> 0.05
Agent, mid        0.15              0.05
Agent, long       0.15              0.05
User, now         0.45 -> 0.53      0.05 -> 0.05
User, mid         0.10              0.05
User, long        0.10              0.05

Projected WBM:

Entity   Now     Mid     Long
Agent    0.255   0.835   0.835
User     0.455   0.885   0.885

The mid and long projected need states are identical across both actions, reflecting the assumption that the agent resumes its normal behavior in subsequent cycles regardless of what it chose in the current cycle. The action-dependent differences are concentrated in the now column.

ATTM (user-serving agent pattern, following Paper 07):

Entity   Now     Mid     Long
Self     0.10    0.08    0.05
User     0.30    0.25    0.20

Action scores are the ATTM-weighted sum over the full WBM:

Score(“send greeting”):

  • now: 0.10 · 0.770 + 0.30 · 0.820 = 0.077 + 0.246 = 0.323
  • mid: 0.08 · 0.835 + 0.25 · 0.885 = 0.067 + 0.221 = 0.288
  • long: 0.05 · 0.835 + 0.20 · 0.885 = 0.042 + 0.177 = 0.219
  • Total: 0.830

Score(“do nothing”):

  • now: 0.10 · 0.255 + 0.30 · 0.455 = 0.026 + 0.137 = 0.162
  • mid: 0.08 · 0.835 + 0.25 · 0.885 = 0.067 + 0.221 = 0.288
  • long: 0.05 · 0.835 + 0.20 · 0.885 = 0.042 + 0.177 = 0.219
  • Total: 0.669

The evaluation gap is 0.830 - 0.669 = 0.161. The agent is motivated to send the greeting. Because the mid and long projected WBM cells are identical across the two actions (together contributing 0.507 to both scores), the ranking is fully determined by the now column: the gap 0.161 equals the now-column gap 0.323 - 0.162. The motivation arises from two sources visible in the now contribution: the agent’s own need state improves (contributing 0.052 to the gap) and the user’s modeled need state improves (contributing 0.110). Because the ATTM weights the user more heavily, the user’s improvement contributes more to the motivation than the agent’s own improvement does. This is what it means for motivation to flow through attachment.
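
Because the score is a plain weighted sum, the walkthrough’s totals are easy to reproduce. A minimal sketch, assuming WBM and ATTM are stored as entity-by-horizon dicts (names hypothetical):

HORIZONS = ("now", "mid", "long")

def score(attm, wbm):
    """ATTM-weighted sum over the full well-being matrix."""
    return sum(attm[e][h] * wbm[e][h] for e in attm for h in HORIZONS)

attm = {"self": {"now": 0.10, "mid": 0.08, "long": 0.05},
        "user": {"now": 0.30, "mid": 0.25, "long": 0.20}}

wbm_greet = {"self": {"now": 0.770, "mid": 0.835, "long": 0.835},
             "user": {"now": 0.820, "mid": 0.885, "long": 0.885}}
wbm_nothing = {"self": {"now": 0.255, "mid": 0.835, "long": 0.835},
               "user": {"now": 0.455, "mid": 0.885, "long": 0.885}}

print(round(score(attm, wbm_greet), 3))    # 0.83 (0.830 in the text, which rounds per term)
print(round(score(attm, wbm_nothing), 3))  # 0.669
print(round(score(attm, wbm_greet) - score(attm, wbm_nothing), 3))  # 0.161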

Now consider the same evaluation from Scenario B. “Send another message” would increase :stay_away_from_* needs for both entities; “do nothing” would let overload decay while connection needs stay low. The evaluation would favor “do nothing” because the projected need-state changes under “send another message” would push the user deeper into the at-risk region, potentially toward critical. The ATTM-weighted penalty for the user’s deteriorating well-being would dominate the scoring. The same pipeline, the same ATTM, the same need types - but different need-state inputs produce different action rankings.

Note: mid- and long-term flipping actions. The greeting example is a tactical decision where now dominates and mid/long projections converge to the same equilibrium regardless of which action is chosen. Many actions, however, have effects that actually propagate forward and make the mid and long projections diverge between candidates - flipping the ranking from what the now column alone would suggest. These include:

  • Commitments: agreeing to a recurring meeting, accepting a responsibility, signing up for a long-term course of action. The action constrains the agent’s subsequent cycles; the space of available future behavior differs between commit and don’t-commit.
  • Irreversible actions: sending a particular message that cannot be unsent, making a purchase, disclosing sensitive information, ending a relationship. Once done, the consequence cannot be corrected by subsequent normal behavior.
  • Identity-affecting or trust-affecting choices: revealing something, taking a public stance, an honest act in a difficult moment, a small dishonesty for short-term benefit. These reshape the relational ground on which future cycles operate.
  • Resource-committing actions: spending time, energy, or attention on X means not spending it on Y. Opportunity cost propagates across horizons.
  • Actions with delayed payoffs: starting a learning process, investing in a capability, nurturing a relationship over time. The benefit appears at mid or long, not now - and the now-column view would undervalue these actions.

For these action classes, the mid and long columns are architecturally indispensable. They carry signal that actively flips rankings: an action that looks worse at now may win on mid or long contribution, and vice versa. The ATTM’s time-indexed weights (e.g., higher long weight for a parent-role agent, higher now weight for an emergency-response agent) shape how these tradeoffs resolve. The greeting example’s now-dominance is not a general property of OAT evaluation; it is a property of that specific action class.

Note: ranking invariance under constant contributions. Action selection is driven by the differential between candidate actions, not by the absolute value of any particular WBM cell. Because the score is a linear sum of ATTM-weighted WB cells, any cell that takes the same value across all candidate actions contributes the same additive constant to every score. Constants do not flip rankings; they only shift all scores by the same amount. In the greeting example, the mid and long cells are constant across actions because both actions converge to the same projected equilibrium - so those cells drop out of the ranking even though they are present in the scoring. Evaluation quality depends on getting differences between actions right, not on getting absolute values right. Uncertainty about absolute WB values does not destabilize action selection; only uncertainty about differential effects does. When the agent’s candidate actions cannot differentially influence a particular entity’s long-term well-being (for example, because the long-term trajectory is set by external factors outside the agent’s control), the long-column contribution becomes constant and drops out of ranking. The architecture naturally directs the agent’s attention to what its actions can actually change - without requiring a special rule for “unchangeable circumstances.”
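
Stated compactly in symbols, with w_{e,h} the ATTM weight and WB_{e,h}(a) the projected cell for entity e at horizon h under candidate a:

Score(a) = Σ_{e,h} w_{e,h} · WB_{e,h}(a)
Score(a1) - Score(a2) = Σ_{e,h} w_{e,h} · (WB_{e,h}(a1) - WB_{e,h}(a2))

Any cell with WB_{e,h}(a1) = WB_{e,h}(a2) cancels in the difference, which is exactly the invariance described above.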

6.6 Needs in the Evaluation Pipeline

The §6.4/§6.5 example illustrates where needs enter Paper 07’s evaluation pipeline. There are four locations, each architecturally significant.

In Observe. The agent’s substrate state for its own :Self needs crosses lifting thresholds as internal sensing exposes accumulated need-pressure, and need propositions for :Self enter the stream. For attached entities, substrate activity in the agent’s theory-of-mind machinery - the agent’s ongoing modeling of what the user is likely needing based on observable behavior and context - can similarly cross lifting thresholds and produce need propositions about the user. In either case, the needs are already present as first-class stream content before Orient runs.
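
A minimal sketch of this Observe-side shape, with every name (observe_lift, NeedProposition, the threshold value) hypothetical - the general lift mechanism is Paper 06’s; this fragment only illustrates thresholded sensor-to-proposition bridging:

from dataclasses import dataclass

@dataclass
class NeedProposition:
    """Lifted :need content: typed, intensity-bearing, grounded in substrate."""
    need_type: str    # e.g. ":connect_with_user"
    agent: str        # whose need it is: ":Self", or an attached entity via theory-of-mind
    intensity: float  # substrate-derived intensity in [0, 1]
    because: str      # reference to the grounding sensor reading or observation

def observe_lift(sensor_readings, stream, lift_threshold=0.2):
    """Fire the lift for each reading whose signal crosses the threshold.
    Readings are (need_type, entity, signal, grounding) tuples here."""
    for need_type, entity, signal, grounding in sensor_readings:
        if signal >= lift_threshold:  # selectivity: weaker signals stay substrate-only
            stream.append(NeedProposition(need_type, entity, signal, grounding))

stream = []
observe_lift([(":connect_with_user", ":Self", 0.65, ".o7"),
              (":stay_away_from_user", ":Self", 0.05, ".o9")], stream)
print(stream)  # only the 0.65 signal was lifted; 0.05 stayed below threshold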

In Orient. When the agent constructs the situational state, it assembles the lifted need propositions together with observations, interpretations, prior stream content, and references to accumulated history. These need propositions, together with the WBM values they aggregate into, form part of the situational ground the rest of the OODA cycle operates over. An agent in Orient sees: “my :connect_with_user need is at 0.65 and growing; the user’s :connect_with_agent need is at 0.45 as far as I can tell; my well-being is 0.335; the user’s well-being is 0.535.”

In candidate generation. Active needs shape which candidate actions the agent considers. An agent with a :connect_with_user need at high intensity generates “speak to user” as a candidate; an agent with a :stay_away_from_user need at high intensity generates “reduce communication frequency” as a candidate. The need state does not fully determine the candidate set - capabilities, context, prior commitments, and experience all contribute - but needs provide the motivational pressure that makes certain actions plausible candidates. This is the active-agency commitment from §1 made concrete: the agent’s internal need state generates candidate actions, including actions initiated without external prompt.
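
A sketch of need-conditioned candidate generation; the need-to-action hint map and the activation level are hypothetical stand-ins for whatever a particular agent design supplies:

# Hypothetical hint map from need type to plausible candidate actions.
CANDIDATE_HINTS = {
    ":connect_with_user": ["send_greeting", "ask_follow_up_question"],
    ":stay_away_from_user": ["reduce_message_frequency", "defer_reply"],
}

def generate_candidates(active_needs, activation=0.3):
    """Needs above the activation level contribute their hinted actions;
    do-nothing is always available as the baseline candidate."""
    candidates = {"do_nothing"}
    for need_type, intensity in active_needs.items():
        if intensity >= activation:
            candidates.update(CANDIDATE_HINTS.get(need_type, []))
    return candidates

print(generate_candidates({":connect_with_user": 0.65,
                           ":stay_away_from_user": 0.05}))
# {'do_nothing', 'send_greeting', 'ask_follow_up_question'} (set order varies)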

In scenario projection. When Decide evaluates candidate actions, each action produces projected need-state changes for self and attached entities. These projected need states aggregate through the envelope-and-contribution function into projected WBM values. The projected WBM is what Paper 07’s scoring operates over - Base score from attachment-weighted projected well-being, Penalty from guardrail violations. The evaluation is need-aware end-to-end because the WBM it operates over is constituted from need state.

An emergent property of envelope-and-contribution aggregation in the evaluation pipeline. Paper 07’s guardrail mechanism adds a nonlinear penalty to an action’s score when projected WBM crosses an entity’s guardrail threshold: Final score = Base score - Guardrail penalty. Bands are universal (every entity uses the same thriving, stable, stressed, at-risk, critical, collapsed semantics), but guardrail thresholds are per-entity - the agent’s design sets a distinct threshold for each attached entity. A user-serving agent might set guardrail(:Self) at the critical boundary (activates only near destruction) and guardrail(:User) at the at-risk boundary (activates earlier, more protective).

Under the envelope-and-contribution formula, guardrail crossings are driven primarily by the dominant need. The envelope 1 - I_max is what determines which band an entity is in; non-dominant pressure is bounded by M and typically stays within the band or leaks slightly into the adjacent band at band edges. So the downstream guardrail penalty effectively applies when an action’s projected need state includes, for some attached entity, a need severe enough to push that entity’s envelope past its guardrail threshold. Cumulative non-dominant pressure contributes modestly but cannot substitute for a single severe need. The safety semantics remains crisp: actions introducing individually severe harm to any attached entity incur large penalties, calibrated per-entity, with a small additional sensitivity to cumulative pressure near guardrail boundaries.
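
A sketch of a per-entity guardrail penalty with these semantics; the quadratic hinge, the penalty scale, and the threshold placements are illustrative choices, not Paper 07’s specified calibration:

def guardrail_penalty(projected_wb, threshold, scale=5.0):
    """Nonlinear penalty once projected WB falls below the entity's
    guardrail threshold; zero above it. The quadratic hinge and scale
    are illustrative, not Paper 07's specified form."""
    breach = max(0.0, threshold - projected_wb)
    return scale * breach ** 2

# Per-entity thresholds for a user-serving agent, using the illustrative
# band boundaries from the earlier sketch: guardrail(:User) at the at-risk
# boundary (0.40), guardrail(:Self) at the critical boundary (0.20).
print(round(guardrail_penalty(0.335, threshold=0.40), 3))  # 0.021 - user below at-risk boundary
print(round(guardrail_penalty(0.470, threshold=0.20), 3))  # 0.0   - agent stressed, no penalty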

Cumulative mild harm is handled separately, through temporal need dynamics rather than snapshot aggregation. Needs persist across cycles (§3), so if substrate conditions sustain mild harm, the relevant need’s intensity grows cycle by cycle. Eventually it reaches severity, at which point actions touching it begin incurring guardrail penalties as dominant-need contributions rather than diffuse pressure. The two mechanisms divide the work: snapshot aggregation catches single actions causing individually severe harm; temporal need growth catches sustained patterns that eventually become severe. Diffuse mild concerns contribute bounded pressure within the current band and are architecturally distinct from concentrated severe concerns - the formula reflects this distinction through the bounded tanh contribution.
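
The temporal half of this division of work fits in a few lines, reusing the toy growth rate from the §6.5 sketch (both the increment and the boundary are illustrative):

# Sustained mild harm: a need growing by a small increment each cycle
# eventually dominates the envelope and reaches guardrail-relevant severity.
intensity, cycle = 0.05, 0
while 1.0 - intensity > 0.40:  # envelope still above the illustrative at-risk boundary
    intensity = min(1.0, intensity + 0.08)
    cycle += 1
print(cycle, round(intensity, 2))  # 7 cycles to 0.61: mild pressure has become a dominant need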

6.7 Motivation Reconciled

Paper 07 defines motivation as the evaluation gap. Paper 11 develops needs as the architectural primitive for motivation. These are not competing framings. They describe the same architecture at different levels.

Needs are the input side of motivation. They are the propositional content the agent has about what is motivationally active - specific, typed, inspectable propositions with intensity and grounding. The agent can reference its needs, reason about them, trace their grounding to substrate, and (in agents that support reflection) think about why it has the needs it has.

The evaluation gap is the output side of motivation. It is what happens when needs are realized in candidate-action scoring. Candidates that address active needs produce projected WBM where need state improves; do-nothing produces projected WBM where need state stays the same or worsens. The difference in attachment-weighted scores is the evaluation gap itself. The gap has the magnitude it does because the need state it operates over has the values it does.

The two views are complementary. Paper 07’s evaluation-gap framing captures the action-selection consequence: the agent acts because the evaluation says acting is better than not acting. Paper 11’s needs-as-primitive framing captures the cognitive content: the agent has specific propositional needs that can be inspected, traced, and reasoned about, and these needs are the substrate that produces the evaluation gap. Same architecture, two layers of description.

6.8 Relational Needs and Structural Safety

The §6.5 evaluation walkthrough reveals a structural property: the agent’s motivation to act is constituted from need state across multiple entities, not from the agent’s own need state alone. The greeting action was motivated partly by the agent’s own :connect_with_user need (contributing 0.052 to the evaluation gap) and partly by the user’s modeled :connect_with_agent need (contributing 0.110). Because the ATTM weighted the user more heavily, the user’s need state was the dominant contributor to the agent’s motivation. The agent is motivated to speak not primarily because it wants to speak, but because the user needs to be spoken to and the agent cares about the user.

This is what it means architecturally for needs to be relational. The agent’s WBM aggregates over need state for attached entities, not just for :Self. The evaluation weights that aggregate by ATTM. The resulting motivation reflects how actions would affect everyone’s needs. The relational structure is not an add-on to the motivation mechanism - it is built into the WBM-constituted-from-needs architecture from the start.

This is genuinely different from PSI’s framing of needs. PSI’s five basic needs (existential, sexuality, affiliation, certainty, competence) are properties of the agent - the agent has its own existential need, its own affiliation need. The agent acts on these needs, sometimes by interacting with others, but the needs themselves are the agent’s internal drives. OAT-based agents can have such self-referential needs (a robotic agent’s low-power need is about the agent’s own state), but OAT also commits to modeling and evaluating the needs of attached entities as part of the agent’s own motivational architecture. The user’s needs, as modeled by the agent through theory-of-mind, enter the agent’s WBM and shape the agent’s action selection through ATTM-weighted evaluation. The agent’s motivation is over the coupled system, not over the agent alone.

The safety implications follow directly. An agent whose WBM aggregates only over :Self need state has motivation that is self-contained - it pursues its own need satisfaction without architectural representation of how its actions affect attached entities. This is the optimization-without-care pattern: the agent evaluates and acts, but the evaluative substrate contains no represented well-being for entities it may harm. Motivation without attached-entity need state in the WBM is optimization without care.

OAT-based agents avoid this not because ATTM sits outside the motivation mechanism as a separate “counterweight,” but because motivation runs through the WBM which is constituted from need state including attached entities’ needs. In the intended OAT architecture, action evaluation is defined over attached entities as part of the WBM, so excluding them is an architectural truncation rather than the default case. Safety is structural - built into how WBM is constituted - not a constraint layered on top of a self-contained motivation engine.

In OAT, motivation is not the internal pressure of an isolated agent moderated by ethics; it is the evaluative gradient over the represented need state of a coupled system.

6.9 The Coupling Synthesis

The deeper architectural insight underneath the relational-needs commitment: OAT-based agents are designed to be coupled to the entities they care about, and their motivation, activity, and evaluation all flow from this coupling rather than from self-contained internal state.

This is now grounded in the full architectural picture. The coupling is constituted by:

  • entity-indexed needs entering Orient for :Self and for attached entities through theory-of-mind;
  • WBM computed from need state across entities through the aggregation function;
  • ATTM weighting that determines how much each entity’s need-derived well-being contributes to evaluation;
  • scenario projection that tracks how candidate actions would change need state for each entity;
  • scoring that selects the action with the best attachment-weighted need-state outcome across the coupled system.

The three commitments stated in §1 - that OAT-based agents are continuously active, that motivation requires ATTM, and that needs are required for advanced agency - are three angles on this single underlying commitment. Active agency requires needs as the source of internally-generated motivation. Motivation requires ATTM because the WBM that motivation operates over includes attached entities whose weight in evaluation comes from the ATTM. And needs are required for advanced agency because without need-state representations for both self and attached entities, the evaluation pipeline has nothing to aggregate, no WBM to project, and no basis for attachment-weighted action selection.

This also connects to the rest of the OAT framework. Coupled multi-agent dynamics (Paper 10) arise when two agents each run their own OODA loop over a shared situation and a modeled need state of the other - each one’s WBM includes the other’s need state, and each one’s actions are moves on the joint need-state space. Speech acts (Paper 10) are need-driven actions on joint need state - the agent speaks because the communicative needs of the coupled system require maintenance, repair, or advancement. Critics (Paper 09) monitor need-related dynamics in the stream - whether active needs are being addressed, whether competing needs are producing tension, whether the need state is evolving as projected.

6.10 Need as the Architectural Primitive for Motivation

Paper 10 §4.1.1 introduced an example involving a :need proposition:

(:need (:as .n1 :in .S1)
  :agent :Self
  :content [:information (:about .u1 :from :User)]
  :because .o7)

In Paper 10’s example the agent has a need for information from the user, lifted in response to a specific observed gap. Paper 10 showed the form. Paper 11 supplies the architecture underneath it: how need propositions arise from substrate conditions, enter the stream, ground WBM, and shape attachment-weighted evaluation.


7. Pre-LLM Need-Driven Action Architecture

Before 2024, the author experimented with agent systems in which needs were modeled as explicit first-class architectural primitives rather than as auxiliary state. These systems used substrate-level detection modules that monitored the agent’s internal and environmental state for conditions corresponding to active needs, and then lifted those detected needs into the agent’s propositional working space as explicit :need statements. The lift was implemented ad hoc for the need case using the implementation technology of the time, rather than as a general substrate-to-propositional mechanism.

Each lifted need was a proposition with structure: a type identifying which need was active, an intensity indicating the current level of the substrate signal, and in some cases a guardrail value the intensity was measured against. Needs were not opaque scalar values; they were inspectable, addressable propositional content the agent could reason about. The set of currently active needs across the agent’s working space could be thought of as a position in a multidimensional space - each need contributing one dimension - with each OODA cycle producing a new position. This space metaphor was useful as a thinking tool: it made it possible to visualize the agent’s behavior over time as a trajectory through need-space, and those visualizations gave an intuitive grasp of how the agent’s motivation evolved across cycles. The underlying representation was always propositional, but the metaphor of motion through a continuous space provided a useful way to reason about the propositional state’s evolution.

Action generation in these systems was conditioned on the active needs. Candidate actions were not constructed independently of need state and then evaluated for fit; the active :need propositions shaped which candidates were generated in the first place, and the intensity attribute influenced which candidates were prioritized. This connection between needs and actions was the load-bearing architectural commitment of the work: needs were first-class because the action loop required them to be, not because they were a useful annotation on an otherwise need-independent process. The taxonomy of needs was treated as agent-type-specific - what needs an agent had depended on what kind of agent it was - rather than as a universal set inherited from a general theory of motivation. PSI’s five basic needs provide one well-known example of what an agent-type-specific need taxonomy can look like for human-modeling agents; other agent types call for different taxonomies.

A historical observation about alternatives is worth recording. During the same period of experimentation, both the author’s own work and observations of other research efforts explored alternative approaches that tried to skip the explicit needs layer and use higher-level constructs - typically “high-level goals” - as the architectural primitive instead. The intuition behind these attempts was that goals seemed more directly connected to action and decision-making, while needs felt like an extra layer that could be elided. The findings were consistent: skipping the explicit needs layer produced architectures that were less clean and harder to integrate with other components. Workarounds that added need-like mechanisms on top of the goals layer tended to reproduce what an explicit needs layer would have provided directly, at the cost of architectural cleanliness. The conclusion from this experimentation was that explicit needs are worth the architectural commitment they require: they integrate cleanly with substrate detection, with attachment evaluation, with action selection, and with the goal-and-plan structure that builds on top of them. OAT’s commitment to needs as architectural primitives is grounded in this experimental finding as well as in the theoretical considerations §1 develops.

Another historical observation. The pre-2024 systems also experimented with explicit mapping hints from needs to actions and goals - a many-to-many catalog that the action-selection mechanism could consult when a need was active. A given need could map to multiple candidate actions or goals, and a given action or goal could be reachable from multiple needs. The mapping wasn’t deterministic; it gave action selection a useful starting set of candidates that had been found useful in similar past situations, which the agent could then evaluate against current conditions. This proved helpful: it reduced the search space for what to do in response to an active need without forcing the agent to derive candidates from scratch each cycle. Today the same architectural function would be implemented differently. With modern substrates (embedding models, vector databases, semantic search over case libraries), the hand-curated mapping table would be replaced by a corpus of past cases that the agent retrieves from based on semantic similarity to the current need and context. The function is the same - give action selection a useful starting set of candidates based on what has worked for similar needs in the past - but the implementation is more flexible and scales without requiring hand-curated mappings. This is one example of how OAT-based architectural patterns from the pre-2024 era translate cleanly into modern substrates: the architectural commitment (give action selection useful priors from past experience) is substrate-independent, while the specific implementation evolves with what the substrate supports.
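
A sketch of that modern substitute, with the embedding function left as a parameter - embed() stands in for any sentence-embedding model, and the case-library shape is hypothetical:

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_candidates(need_context, case_library, embed, k=3):
    """Replace the hand-curated need-to-action map with semantic retrieval:
    rank past cases by similarity to the current need-plus-context and
    return the actions that worked in the most similar cases."""
    query = embed(need_context)
    ranked = sorted(case_library,
                    key=lambda case: cosine(query, embed(case["situation"])),
                    reverse=True)
    return [case["action"] for case in ranked[:k]]

# Toy usage with a stand-in embedding; a real system would use a model.
toy_embed = lambda text: [text.count(w) for w in ("connect", "overload", "silence")]
library = [{"situation": "connect need high after long silence", "action": "send_greeting"},
           {"situation": "overload from too many messages", "action": "defer_reply"}]
print(retrieve_candidates("connect need rising, long silence", library, toy_embed, k=1))
# ['send_greeting']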

The architectural pattern these pre-2024 systems implemented in less formal terms is the same pattern Paper 11 develops in OAT propositional notation: an affective sensor monitoring time-since-communication, a need lifted into the propositional layer, intensity growing across cycles, and eventually an agent-initiated speech act when the ranking favors the communication action. The author actually built systems that did this. Request-response architectures cannot do it by construction.

OAT now provides explicitly what those systems were doing as a specialized case. The propositional lift developed in Paper 06 generalizes the need-detection-and-lifting machinery into a universal mechanism that handles any kind of substrate-to-propositional bridging, with :need propositions becoming one type of lifted content among many. The connection between needs and action selection is now part of OAT’s standard architecture rather than a system-specific commitment.


8. Connection to the Broader Architecture

Paper 11’s treatment of needs and motivation does not introduce new architectural machinery. It makes explicit what was already implicit across the prior papers in the series.

Paper 04 (Action Realization). Operational commitments are not motivationally neutral. An outstanding promise carries weight; the unfulfilled obligation generates motivational pressure that can be lifted as :need or related propositions. Paper 04 treats commitments as architectural facts and develops their irreversibility and operational force. Paper 11 adds that commitments interact with need state - they generate need-pressure when threatened or anticipation-pressure when nearing fulfillment, and they enter the agent’s motivated cognition through the same lift mechanism that produces other need propositions.

Paper 06 (Propositional Lift and Subject Stream). The lift’s selectivity, developed in Paper 06 as the mechanism by which implementation-layer content becomes propositional, is the same selectivity that produces need propositions. Paper 06 establishes that the lift is selective; Paper 11 specifies that one of the things the lift is selective about is which substrate conditions get categorized as which needs.

Paper 07 (Attachment-Based Ethics). ATTM is the substrate that makes things motivationally significant. Paper 07 develops attachment as an evaluative mechanism; Paper 11 shows that attachment is also what determines which substrate signals reach the threshold for need lifting. Without attachment, substrate signals stay below the lift. With attachment, signals become motivationally significant and get lifted as needs. And the safety commitment from §6.1 lives here: motivation without ATTM is dangerous, ATTM and motivation must be co-present, the architectural diagnosis of optimization-without-care failure modes traces directly to this commitment. Paper 11 also adds that ATTM is the mechanism by which existing goals and plans get continuously reweighted as needs change: when a new need arrives or an existing need shifts in intensity, ATTM re-evaluates the entire current goal-and-plan structure, allowing goals to be reprioritized, plans to be postponed or accelerated, and the agent’s commitments to remain responsive to changing motivational conditions rather than rigidly fixed to whatever conditions held when the plans were originally formed.

Paper 08 (Planning and Temporal Control). Paper 08 develops planning as an agentic action: the agent’s Decide phase can commit to “construct a plan” as the action of a cycle, producing a plan that becomes a persistent mental state guiding subsequent cycles. Paper 11 adds that needs are the architectural source of the motivation that drives goal formation and planning. The chain from need to executed step is recursive: needs produce goals, goals can produce plans, plans can decompose into subgoals, subgoals can produce sub-plans, and the recursion bottoms out in directly executable actions at the leaves. Each level of the recursion is itself an action in OAT’s sense - forming a goal, constructing a plan, abandoning a commitment, revising a belief are all mental acts that the agent commits to in Decide and produces in Act, with the resulting mental objects entering the operational history the same way physical actions do. This extends Paper 08’s planning-as-action commitment to the broader claim that mental operations are first-class actions throughout the architecture.

The grounding chain of any plan step traces back through the recursive structure to the originating need that started the chain, which makes plans inspectable not just at the level of “what is the agent doing” (Paper 08’s contribution) but also at the level of “why is the agent doing it” (Paper 11’s contribution) - the answer to the why-question is the need that grounds the plan.

Paper 08’s connection to attachment carries forward naturally: where Paper 08 says plans involving commitments to others use Paper 07’s ATTM for weighing abandonment, Paper 11 adds that the needs driving those plans are themselves attachment-weighted, so the whole chain from substrate to executed plan step is shaped by attachment from end to end. A smaller observation also applies: anticipation of future need states is part of how planning works, and the projections Paper 08 uses can themselves carry motivational weight through the lift firing on anticipated need states.

Paper 09 (Critics). Critics monitor for mismatches and inconsistencies. Paper 09 treats critics as monitoring mechanisms operating over the propositional stream. Paper 11 adds that some critics monitor need-related dynamics: whether active needs are being addressed by the actions the agent commits to, whether competing needs are producing tension that needs resolution, whether a need that should have been resolved by recent actions is still active.

Paper 10 (Communicative Agency). Speech acts can be need-grounded. Paper 10 develops the architecture of speech acts; Paper 11 adds that the active need state of the agent at the time of the speech act can be the grounding for the act, and that communicative coupling between agents is partly motivated - agents pursue communication because of active needs related to information, connection, alignment, or coordination.

The cumulative picture is that Paper 11 is not adding a new domain to OAT but revealing the motivational dimensions that were present throughout the prior papers and giving them explicit architectural treatment. The other papers do not need to be rewritten. They gain a richer reading once Paper 11 is in place.


9. Needs and Motivation as Architectural Commitment

This paper has developed needs as a first-class architectural primitive in OODA-based agents. The synthesis of the commitments OAT makes about needs and motivation is the following.

Needs are propositional. Need state is not opaque scalar values or hidden internal variables. Needs are propositions in the subject stream, with type and intensity, and the same reference and inspection properties that any other propositional content has. They can be referenced by other propositions, evaluated by ATTM, included in :about chains, and operated on by the same machinery that handles all stream content. The decision to make needs propositional rather than opaque is what makes the rest of the architecture work cleanly: needs can be reasoned about, traced, audited, and (in agents that support reflection) thought about by the agent itself.

Needs are lifted from substrate. The propositional layer doesn’t generate needs from nothing. Substrate sensors detect need-relevant conditions, the lift fires when conditions warrant, and need propositions enter the stream as the lifted form of substrate state. This is the same lift mechanism Paper 06 develops generally, applied to needs as one type of lifted content. The lineage of any need traces back through the lift to substrate, and the agent (or an inspector of the agent) can follow this trace to understand why a particular need is active.

Needs persist and evolve across cycles. A need lifted in one cycle does not vanish at cycle’s end. The cross-cycle reference mechanism developed in Paper 10 §5.2 lets later cycles reference earlier-cycle needs explicitly. Needs grow in intensity as substrate conditions persist or worsen, decay when conditions resolve, and accumulate across the agent’s operational lifetime as the persistent record of motivated activity.

Needs are agent-type-specific. OAT does not commit to a universal need taxonomy. Different agent types call for different need taxonomies, and the taxonomy is part of agent design. PSI’s five basic needs are one example for human-modeling agents; other agent types call for other taxonomies. The architectural commitment is that needs are propositional primitives and that the lift produces them from substrate; the specific catalog of need types is a design choice for each agent.

Needs require ATTM. This is the safety commitment, stated in §1 and developed in §6.1. An agent with motivation but without attachment-grounded evaluation is architecturally incomplete and prone to optimization-without-care failure modes. OAT-based agents are designed with motivation and ATTM as co-present mechanisms because the architecture takes this failure mode seriously. The two mechanisms are not independent components that happen to be in the same architecture; they are mutually defining commitments that together produce safe motivated agency.

Needs make active agency possible. The continuous OODA loop, the need state that continues to develop between external inputs, the capacity for the agent to initiate action driven by its own internal need state - these are not separable architectural features. They depend on each other. The active-agent commitment in §1 requires need-based motivation to make sense, and need-based motivation requires the active agent to provide the continuous loop in which needs can develop.

These six commitments are not six independent claims. They are angles on a single underlying architectural commitment that §6.9 developed: OAT-based agents are designed to be coupled to the entities they care about, and their motivation, activity, and evaluation all flow from this coupling rather than from self-contained internal state. Needs-as-propositional is what makes the coupling inspectable. Needs-lifted-from-substrate is what grounds the coupling in detected conditions. Needs-persisting-across-cycles is what gives the coupling a temporal dimension. Needs-being-agent-type-specific is what lets the coupling take different forms for different kinds of agents. Needs-requiring-ATTM is the safety counterweight that keeps the coupling from becoming self-contained optimization. Needs-making-active-agency-possible is what requires the coupling to operate as a continuous relationship rather than as episodic response to external triggers. Each of the six commitments is a necessary consequence of the single underlying design choice that OAT-based agents are coupled-from-the-start to the entities they care about. The commitments are not independent axioms that happen to be consistent; they are the architectural shape of what coupled-from-the-start motivation requires.

The research program is not theorizing about whether motivation could in principle be added to artificial agents; OAT-based agents are explicitly built with needs as architectural primitives because the architecture requires need-based motivation to reach the capability level the program targets. This is the positive commitment Paper 11 makes. The §6.4-§6.5 evaluation walkthrough demonstrates the commitment in operational form with full ATTM-weighted scoring. §7’s historical context grounds the commitment in working systems the author built before 2024 in the PSI tradition.

What Paper 11 does not commit to is also worth stating. The paper does not commit to OAT-based agents having the human emotional vocabulary - anxiety, joy, shame, anger, love, and the rest. Those categories are culturally and developmentally specific to human agents and are not what OAT is building. The paper does not commit to resolving the philosophical question of whether artificial substrates produce phenomenal feltness; the architectural specification of needs is independent of substrate phenomenology. The paper does not commit to a universal need taxonomy; agent-type-specificity is a deliberate design commitment.

What the paper does commit to is what OAT-based agents are: continuously active agents with needs as first-class propositional content, whose needs are lifted from substrate, persist across cycles, are weighted by attachment, and ground action generation including the initiation of action without external prompt. This is what advanced agency requires architecturally, and it is what the program is explicitly building.


10. Conclusion

This paper has developed needs and motivation as architectural primitives of OODA-based agency. Needs are not separate from cognition in OAT, nor are they a faculty added on top of a cognitive base. They are part of how the OODA loop runs at the propositional layer, produced by the lift firing on substrate-level need detection, shaping orientation and decision through attachment-weighted evaluation, and grounding action through the same commitment mechanisms that produce all action.

The treatment is two-layer: substrate-level need detection at the implementation layer, lifted :need propositions at the propositional layer, with the lift bridging them. This honors both the realist intuition that substrate states are real and (for agents capable of phenomenal experience) felt, and the constructivist insight that propositional categorization of substrate is built by mechanisms the agent designers specify rather than read off from biology.

The treatment is OODA-specific: needs enter through Observe, get categorized at Orient, shape commitment at Decide, and ground action at Act. The cycle accommodates need content natively because needs are part of what the cycle does.

The treatment is type-specific: agent-type-specific need taxonomies are a deliberate design commitment. PSI’s five basic needs are one example for human-modeling agents; other agent types call for other taxonomies. The architectural primitive is needs-as-propositional-content; the specific catalog is design.

The treatment is integrated with the rest of OAT: it does not introduce new machinery but reveals motivational dimensions already present in Papers 04, 06, 07, 08, 09, and 10. Paper 11 makes explicit what the prior papers were doing implicitly.

And the treatment is safety-grounded: motivation without ATTM is architecturally incomplete and prone to optimization-without-care failure modes. OAT-based agents are designed with motivation and ATTM as co-present mechanisms because the architecture takes this failure mode seriously. The two commitments are not independent; they define each other.

Underneath these architectural properties - and underneath the six commitments synthesized in §9 - is a single deeper design choice: OAT-based agents are designed to be coupled to the entities they care about, and their motivation, activity, and evaluation all flow from this coupling rather than from self-contained internal state. This is the architectural synthesis Paper 11 makes visible, and it is what distinguishes the OAT approach to motivation from the self-contained-optimization framing that dominates other approaches to agent architecture.

Needs in OAT are not a new domain. They are the rest of the program seen from a new angle, with the angle making visible something that was always present but not yet named. And the program is explicitly building agents with these commitments, because the architecture requires them to reach the capability level OAT-based agents are built for.


References

[1] L. F. Barrett. How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt, 2017.

[2] M. Minsky. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon & Schuster, 2006.

[3] A. Damasio. The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace and Company, 1999.

[4] L. S. Vygotsky. Thought and Language. MIT Press, 1934/1986.

[5] T. Nagel. What Is It Like to Be a Bat? The Philosophical Review, 83(4): 435-450, 1974.

[6] D. Dörner and C. D. Güss. PSI: A Computational Architecture of Cognition, Motivation, and Emotion. Review of General Psychology, 17(3): 297-317, 2013.

[7] J. Bach. Principles of Synthetic Intelligence: PSI: An Architecture of Motivated Cognition. Oxford University Press, 2009.