Technical notes · Awareness (A)
A as Integrated Information
This page develops Awareness (A) as integrated, recursive information: information that a system maintains, organizes, and applies to itself and its environment. The treatment here is structural rather than phenomenological; it does not attempt to reduce qualia, but to give a minimal formal account of what it is for a system to be aware in IO's sense.
Abstract
Building on Δ, R, and I, Awareness (A) is defined in Informational Ontology as what occurs when information is integrated into a self-maintaining system that forms models of differences and uses those models to guide its behavior. We formalize awareness as recursive informational closure: a system S is aware to the extent that it encodes distinctions about its own states and environment, stores them in an internal model M, and applies M to update itself under new inputs. This yields a graded notion of awareness, suitable for comparing minimal and complex systems without invoking mysticism.
1. Systems, States, and Models
We model a system as a tuple:
S = (X, E, Σ, f)
- X is a Δ-structured set of possible internal states of the system.
- E is a Δ-structured set of possible environmental states relevant to the system.
- Σ is a Δ-structured set of internal model states (representations).
- f is a transition function f: X × E → X × Σ mapping current internal and environmental states to new internal states plus updated model states.
Intuitively, Σ captures "what the system thinks is going on" in terms of differences in X and E; f specifies how the system revises that in light of new information.
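The tuple S = (X, E, Σ, f) can be sketched directly in code. The following is a minimal illustration only: the thermostat-like dynamics, the state labels, and the class name are all assumptions introduced here, not part of IO's formalism.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

State = str  # internal, environmental, and model states as labels

@dataclass
class System:
    """A system S = (X, E, Sigma, f) with transition f: X x E -> X x Sigma."""
    X: frozenset        # possible internal states
    E: frozenset        # relevant environmental states
    Sigma: frozenset    # internal model (representation) states
    f: Callable[[State, State], Tuple[State, State]]

    def step(self, x: State, e: State) -> Tuple[State, State]:
        """Apply f once, checking the result stays inside X and Sigma."""
        x_next, sigma_next = self.f(x, e)
        assert x_next in self.X and sigma_next in self.Sigma
        return x_next, sigma_next

# Toy transition: the system reacts to the environment, and its model
# state simply records the last observed environmental difference.
def f(x: State, e: State) -> Tuple[State, State]:
    x_next = "cooling" if e == "hot" else "heating"
    return x_next, "seen-" + e

S = System(
    X=frozenset({"cooling", "heating"}),
    E=frozenset({"hot", "cold"}),
    Sigma=frozenset({"seen-hot", "seen-cold"}),
    f=f,
)
print(S.step("heating", "hot"))  # -> ('cooling', 'seen-hot')
```

Here Σ "captures what the system thinks is going on" in the thinnest possible sense: a single label tracking the last environmental difference.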
2. Awareness as Internalized Information
The minimal IO claim is that a system is aware if it does not merely respond to differences, but maintains an internal structure Σ that tracks those differences and can be consulted or updated.
Definition 1 (Minimal Awareness). A system S = (X, E, Σ, f) is minimally aware of E if there exists an injective mapping g: E → Σ such that, for relevant histories,
Σ_t ≈ g(E_t)
where Σ_t is the model state at time t and E_t is the corresponding environmental state. The ≈ symbol indicates that Σ need not encode all details of E, but must preserve some set of task-relevant differences.
Put simply: the system carries within itself a differentiated structure that systematically corresponds to differentiated structures outside it.
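Definition 1 can be checked mechanically on a toy system: fix an injective g: E → Σ, run the system over an environment history, and verify that the model state matches g(E_t) at every step. The transition function and the state labels below are illustrative assumptions, not part of the formalism.

```python
# Injective g: E -> Sigma pairing each environmental state with a
# distinct model state (illustrative labels).
g = {"hot": "seen-hot", "cold": "seen-cold"}

def f(x, e):
    """Toy transition f: the model state records the last input."""
    x_next = "cooling" if e == "hot" else "heating"
    return x_next, "seen-" + e

def minimally_aware(f, x0, history, g):
    """Check Sigma_t = g(E_t) at each step of the given history."""
    x = x0
    for e in history:
        x, sigma = f(x, e)
        if sigma != g[e]:
            return False
    return True

print(minimally_aware(f, "heating", ["hot", "cold", "hot"], g))  # -> True
```

A system whose Σ failed to track g(E_t) on some history would, by Definition 1, not be minimally aware of E relative to that g.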
3. Recursive Self-Reference
Awareness deepens when a system encodes not only E but also X: differences in its own internal states.
Definition 2 (Self-Model). A system S has a self-model if there exists a mapping h: X → Σ (or h: X × E → Σ, for a joint model) such that, for relevant histories,
Σ_t = h(X_t) or Σ_t = h(X_t, E_t).
This means that the system's model states encode differences not just in the environment, but in its own configuration. Recursive awareness arises when updates to Σ depend on Σ itself:
Σ_{t+1} = F(Σ_t, X_t, E_t)
for some update rule F. This recursive dependence is a structural signature of what IO calls awareness.
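The recursive update Σ_{t+1} = F(Σ_t, X_t, E_t) can be illustrated with a model state that is a running numerical estimate of the environment, so each new model state depends on the previous one. The blending rule, the α parameter, and the feedback from Σ to X are all assumptions made for the sketch.

```python
def F(sigma, x, e, alpha=0.5):
    """Recursive update rule: blend old model state with new observation.
    x is unused in this toy F; it is kept to match F(Sigma_t, X_t, E_t)."""
    return (1 - alpha) * sigma + alpha * e

sigma = 0.0  # initial model state Sigma_0
x = 0.0      # initial internal state X_0
for e in [1.0, 1.0, 0.0, 1.0]:
    sigma = F(sigma, x, e)  # Sigma depends on its own previous value
    x = sigma               # internal state follows the model (feedback)

print(round(sigma, 4))  # -> 0.6875
```

The structural point is the dependence of Σ on Σ: the model is consulted in producing its own successor, which is the recursive signature the text identifies with awareness.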
4. Graded Awareness
Because Σ, g, and h can be more or less complex, Awareness is naturally graded in IO. We can define a simple awareness functional:
A(S) = I(Σ; E, X)
where I(Σ; E, X) is the mutual information between the system's model states and the joint external/internal state. Higher A(S) indicates that Σ carries more structured information about the relevant differences.
This does not equate awareness with mutual information, but uses it as a convenient measure of how tightly a system's internal model is coupled to its own and its environment's Δ-structure.
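As a worked example of the functional A(S) = I(Σ; E, X), the following computes mutual information from a joint distribution, simplified here to I(Σ; E) with X suppressed. The two-state distribution is invented for illustration: a model that perfectly resolves a two-state, equiprobable environment yields exactly 1 bit.

```python
import math

# Illustrative joint distribution p(sigma, e): the model state
# perfectly tracks a two-state environment.
joint = {
    ("seen-hot", "hot"): 0.5,
    ("seen-cold", "cold"): 0.5,
}

def mutual_information(joint):
    """I(Sigma; E) in bits from a joint distribution over (sigma, e)."""
    p_s, p_e = {}, {}
    for (s, e), p in joint.items():
        p_s[s] = p_s.get(s, 0.0) + p
        p_e[e] = p_e.get(e, 0.0) + p
    return sum(p * math.log2(p / (p_s[s] * p_e[e]))
               for (s, e), p in joint.items() if p > 0)

print(mutual_information(joint))  # -> 1.0 (bit): Sigma fully resolves E
```

A noisier coupling between Σ and E would drive this value toward 0, giving the graded scale of awareness the section describes.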
5. Awareness as Precondition for Value
IO's next step is that awareness of possible states makes preference between them possible. If a system can discriminate states (via Σ) and anticipate consequences of being in one rather than another, it can come to prefer certain differences over others.
In the next technical module, we represent such preferences as value functions over state spaces and show how they arise naturally once Awareness is present.