Foundational Papers.

  • This paper establishes deterministic interpretation as a system-level solution to non-determinism in generative AI.

    We show that variability arises from semantic under-specification, where multiple admissible task interpretations lead to divergent outputs.

    By enforcing a single interpretation prior to inference—via substrate-first architectures—independently trained models converge to identical outputs without changes to model execution.

    This reframes non-determinism as an architectural property of interpretation rather than computation.

    [view paper]

  • This paper establishes interpretation drift as a fundamental source of non-determinism in large language models.

    We show that semantic invariance does not hold: identical inputs admit multiple valid task interpretations, leading models to generate divergent yet internally coherent outputs.

    Through minimal, reproducible experiments, we demonstrate that this variability persists across frontier systems and cannot be eliminated through decoding controls or output constraints.

    [view paper]

  • This paper presents empirical evidence that large language models construct unstable task representations under identical inputs and instructions. Using non-linguistic ARC-style reasoning tasks, we show that frontier models diverge in perceived object structure and dimensionality before reasoning begins, unifying hallucinations, inconsistency, and unreliability as symptoms of representational instability.

    [view paper]

  • This work establishes a structured taxonomy for interpretation drift in large language models, separating representational instability from hallucination, inconsistency, and output variance. The taxonomy provides a common language for diagnosing reliability failures across models, prompts, and deployment contexts.

    [view paper]

  • We introduce a triangulation framework—causal integrity, external constraint, and negative space—to distinguish grounded reasoning from ungrounded coherence.

    This establishes hallucination as a structural failure of verification and positions falsification as the minimum condition for trustworthy AI systems.

    [view paper]

Papers I–III establish interpretation drift as a fundamental failure mode that is systematically overlooked, and present the first empirically validated solution.

Essays & Manifesto.

The essays and manifesto articulate the lens through which this work operates: which assumptions are accepted, which boundaries are non-negotiable, and which forms of judgment must remain human regardless of future technical advances. Rather than proposing mechanisms or optimizations, these pieces make explicit the constraints that govern interpretation, responsibility, and authority in human–AI systems.

Essays I–III extend the technical findings of the white papers into conceptual territory. They examine the implications of interpretation drift and human–AI collaboration at the level of cognition, institutional practice, and accountability. These essays are exploratory by design, intended to surface consequences and tensions that cannot be resolved through empirical measurement alone.

The Truth First Manifesto formalizes these constraints as principles. The accompanying essays explore what follows once intelligence is no longer scarce, but judgment remains irreducibly human.

  • This essay establishes a strict separation between intelligence and intention as a prerequisite for deploying AI systems without collapsing accountability. It defines the proper roles of models, humans, and external reasoning structures, showing why conflating capability with agency leads to agentic misframing, sentience panic, and organizational abdication of responsibility.

    The essay argues that the current disruption in the AI workforce is not primarily about job replacement, but about role confusion. As model capabilities outpace institutional clarity, organizations are forced to confront a missing layer of governance: the explicit design and maintenance of reasoning, constraints, and intent. The emergence of AI substrate architects is framed not as a future specialization, but as a structural necessity—an adaptive response to systems that can execute intelligence at scale without possessing judgment.

    [view paper]

  • This paper argues that AI’s true risk and opportunity lie not in its raw capabilities, but in how organizations structure authority, intent, and accountability around its use. It distinguishes between “bad AI”—systems that amplify organizational chaos, ambiguity, and misalignment by embedding unstable interpretations into decision paths—and “good AI,” where mechanical intelligence is deliberately constrained and governed by human-defined reasoning frameworks.

    Rather than treating AI as a decision-maker or cognitive equal, the paper frames effective human–AI systems as asymmetrical by design: machines provide execution and mechanical intelligence, while humans retain sole responsibility for intent, judgment, and authority. Under this model, organizations gain new problem-solving capacity not by surpassing human cognition, but by externalizing constraint-holding, interpretation stability, and reasoning discipline—capabilities that humans rely on but cannot reliably maintain at scale.

    This distinction clarifies why many AI deployments fail despite impressive model performance, and why extending organizational problem-solving requires architectural governance, not stronger models or tighter prompts.

    [view paper]

  • This essay examines how the deployment of AI makes something newly visible inside organizations: ambiguity that was previously survivable, contradictions that were previously manageable, and authority that was never clearly defined to begin with. As machine outputs scale, organizations lose the ability to rely on narrative, discretion, or inertia to smooth over misalignment. What once remained socially tolerable becomes structurally impossible to ignore.

    The essay introduces organizational singularity not as a triumph of intelligence, but as a failure of governance. It is the point at which silent authority transfer becomes dominant—when decisions are increasingly shaped by systems whose confidence and fluency mask unresolved human intent. At this threshold, breakdown is no longer primarily technical. It is organizational: unclear ownership of meaning, unstable reasoning paths, and authority that has drifted without being explicitly assigned.

    Rather than treating AI as an autonomous agent or cognitive peer, the essay argues that the real challenge is preserving truth and coherence in human systems under scale. Machines do not hold intent. They surface inconsistency, accelerate consequence, and remove the comfort of ambiguity. Humans remain responsible—but only if authority and meaning are anchored outside the machine and enforced structurally, not culturally.

    Under this view, progress does not come from smarter models or more capable agents. It comes from organizations being forced to confront whether they actually mean what they say, and whether their structures can hold that meaning steady over time. Organizational coherence is not a virtue signal or a mindset. It is a condition imposed at the boundary—where truth is explicit, authority is traceable, and responsibility cannot quietly drift away.

    [view paper]

  • The Truth First Manifesto defines the non-negotiable principles that govern this body of work. It begins from a simple premise: artificial intelligence does not possess intent, judgment, or responsibility, and therefore must not be treated as an authority—regardless of how capable, fluent, or convincing it appears.

    The manifesto fixes the lens before any solution is evaluated. It specifies where reasoning may be delegated and where it must not be, clarifies which forms of stability are meaningful and which are cosmetic, and makes explicit the boundary beyond which trust becomes abdication. These principles are not technical optimizations or ethical preferences. They are structural constraints, imposed to prevent the quiet transfer of authority from humans to machines.

    The manifesto does not prescribe implementations. It establishes limits. Any system, method, or breakthrough may function while violating these principles, but only by redefining responsibility out of existence. The cost is not performance. The cost is accountability.

    [view paper]