Research

Peer-citable research establishing why AI decisions require settlement.

These publications trace a single, unavoidable conclusion: without per-decision closure, accountability in agentic AI is structurally impossible.

This work does not propose compliance checklists or ethical guidelines. It demonstrates—step by step—why responsibility must be closed, not merely observed.

Cognitive Leakage: A Unified Framework for AI Accountability in the Age of Autonomous Agents

Chang, YC · OIA Lab · 2026

Defines cognitive leakage operationally and introduces Δ1 as a binary condition for responsibility finality.

Δ1 specifies when an AI decision can be considered complete, rather than perpetually open. It formalizes per-decision closure by requiring that all responsibility conditions are met at a specific moment in time.
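As one illustrative reading (not the paper's formal definition), Δ1 can be sketched as a boolean predicate over a fixed set of responsibility conditions evaluated at a single timestamp; the field names and checks below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Decision:
    """Minimal stand-in for a recorded agent decision (fields are illustrative)."""
    decision_id: str
    actor: str
    payload: dict

# A responsibility condition is any check that either holds or does not.
ResponsibilityCondition = Callable[[Decision], bool]

def delta1(decision: Decision, conditions: list[ResponsibilityCondition]) -> tuple[bool, datetime | None]:
    """Binary closure check: Δ1 holds only when every condition is met at this moment.

    Returns (True, settlement_time) if all conditions hold; otherwise
    (False, None) -- the decision stays open rather than partially closed.
    """
    if conditions and all(cond(decision) for cond in conditions):
        return True, datetime.now(timezone.utc)
    return False, None

# Usage: the decision settles only once every check holds at the same instant.
checks = [lambda d: "owner" in d.payload, lambda d: d.payload.get("approved") is True]
closed, settled_at = delta1(Decision("d-42", "agent-a", {"owner": "team-x", "approved": True}), checks)
```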

DOI: 10.5281/zenodo.18525055

Responsibility Completion for Agentic AI

Chang, YC · OIA Lab · 2026

Defines Conceptual Sovereignty—the requirement that each agent’s accountability boundary remain independently verifiable—and frames Responsibility Completion as a prerequisite for decision settlement in multi-agent systems.

This work establishes why accountability cannot be inferred post-hoc and must instead be engineered as a completion state.

DOI: 10.5281/zenodo.18524766

From Data Leakage to Intent Leakage: A Hierarchical Risk Taxonomy for Agentic AI Systems

Chang, YC · OIA Lab · 2026

Proposes a five-level leakage taxonomy (Data, Information, Cognitive, Intent, Conjunctive) and introduces Decision Reconstruction as an adversarial primitive.

Demonstrates why leakage-based risk models and continuous scoring cannot produce responsibility closure, motivating the need for settlement-based governance.
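A minimal sketch of the contrast this paper draws, assuming a hypothetical scoring function (only the five level names come from the paper): a continuous risk score over leakage levels can always be computed, but no value of it constitutes a closure event.

```python
from enum import IntEnum

class LeakageLevel(IntEnum):
    """Five-level leakage taxonomy, ordered from raw exposure to combined effects."""
    DATA = 1
    INFORMATION = 2
    COGNITIVE = 3
    INTENT = 4
    CONJUNCTIVE = 5

def risk_score(observed: list[LeakageLevel]) -> float:
    """Hypothetical continuous score: higher leakage levels weigh more.

    Note what is missing: no threshold of this score marks a decision as
    settled; it only describes exposure, which is the gap the paper highlights.
    """
    if not observed:
        return 0.0
    return sum(level / LeakageLevel.CONJUNCTIVE for level in observed) / len(observed)
```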

DOI: 10.5281/zenodo.18524974

Intent Leakage by Design: Governance Gaps in MCP, Tool Use, and A2A

Chang, YC · OIA Lab · 2026

Identifies protocol-level behaviors in MCP, tool-augmented AI, and agent-to-agent systems that prevent responsibility closure by design.

Shows how decision sequences and behavioral patterns expose intent without any mechanism to define when a decision is finished—creating persistent liability gaps at the protocol layer.

DOI: 10.5281/zenodo.18526555

Research Arc

Why settlement is unavoidable:

Risk taxonomy → Incomplete governance primitives → No decision finality → Liability gaps at the protocol layer

Each publication addresses a different layer of the same structural failure: AI decisions accumulate responsibility, but nothing closes it.

What this research establishes

• Accountability cannot rely on observation alone

• Logging and scoring cannot define completion

• Governance without closure produces unbounded liability

• Responsibility finality must be engineered

These findings motivate a single engineering conclusion:

AI decisions require settlement.

How this research is used

This research does not certify systems or assign blame. It establishes the conditions under which responsibility can be considered complete.

Those conditions are implemented in the Decision Settlement Layer.