Belief Revision System


Your AI Doesn’t Know You. Every New One Starts From Zero.

You've had hundreds of conversations with AI. You've told it where you work, how you like your code formatted, that you're vegetarian, that you moved to SF last year, that your daughter Lily starts kindergarten in the fall. You've built documents, made decisions, and explained your preferences over and over until the AI actually knows you.

Then you switch to a new agent, a new model, or a new tool and it knows nothing. So you have to start over.

This is the state of AI memory today: locked inside individual products and siloed per session. Even within a single product, the memory degrades. It forgets things you told it. It holds onto things that are no longer true. It can't tell the difference between something you said directly and something it guessed from context.

The problem isn't that AI can't remember. It’s that AI doesn't know what it means to believe something about you.


Memory is not a feature. It’s infrastructure.

Every AI agent you use needs to know you to be useful. Your coding assistant, your writing tool, your calendar planner, your research agent. Not just your name, but your preferences, constraints, context, history, decisions and why you made them.

Today, each agent builds this understanding from scratch. Every product maintains its own memory silo. When you switch tools, the understanding stays behind. When you use multiple agents in a workflow, each one operates with a partial, inconsistent view of who you are.

This is the wrong architecture. Memory shouldn’t live inside agents. Memory should live with the user.

XTrace is building private, portable memory: a unified belief system about you that any agent can connect to. When an agent plugs into XTrace, it doesn’t start from zero. It inherits a rich, structured understanding of who you are, what you’ve decided, and what you care about. When you correct something in one agent, every agent knows.

Your memory. Your data. Portable across every AI you use.

Not a database. A belief system.

The conventional approach to AI memory is storage: extract facts from conversations, embed them in a vector database, retrieve them when relevant. This is what ChatGPT, Claude, and every major AI product does. It works for demos. It breaks in production.

It breaks because storage systems have no theory of belief. They can’t tell the difference between:

  • A fact you stated directly vs. one the AI inferred from context

  • A fact that was true last month vs. one that’s true now

  • A correction (“that’s wrong, I don’t have a dog”) vs. a new statement (“I got a dog”)

  • A temporary state (“on vacation until Friday”) vs. a permanent trait (“I’m vegetarian”)

Without these distinctions, every memory system degrades over time. It accumulates stale facts. It lets weak inferences overwrite strong statements. It can’t handle retractions. It can’t handle time.

XTrace is built on a different foundation: belief revision theory. Forty years of formal epistemology applied to the problem of how an agent should maintain and update beliefs about the world.

In our system, the world is you. The agent is the AI. And the memory isn’t a pile of facts. It’s an evolving model of what is believed to be true about you, right now, given everything observed.
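Concretely, a belief in this model is more than a stored string. The sketch below shows one plausible shape for a belief record; all field names, enum values, and numbers are illustrative, not XTrace's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class Source(Enum):
    STATED = "stated"      # the user said it directly
    INFERRED = "inferred"  # the AI guessed it from context

class Status(Enum):
    ACTIVE = "active"
    RETRACTED = "retracted"  # contracted by the user; no replacement created
    EXPIRED = "expired"      # temporal scope ended

@dataclass
class Belief:
    statement: str
    source: Source
    belief: float                  # evidence mass for the statement (b)
    uncertainty: float             # remaining uncertainty (u)
    corroborations: int = 1        # independent observations supporting it
    valid_until: Optional[datetime] = None  # temporal scope, if any
    status: Status = Status.ACTIVE

# An entrenched, user-stated belief:
b = Belief("User is vegetarian", Source.STATED,
           belief=0.95, uncertainty=0.05, corroborations=50)
```

Provenance (`source`), strength (`belief`/`uncertainty`), and scope (`valid_until`) are exactly the distinctions a flat fact list cannot express.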


Memory system vs. belief system

| | Memory system (current) | Belief system (XTrace) |
|---|---|---|
| Data model | Flat list of facts | Belief graph with opinions, links, and temporal scope |
| What it stores | “User is vegetarian” | “User is vegetarian” (b=0.95, u=0.05, corroborated 50x, stated by user) |
| Conflict resolution | Newer wins | Evidence-weighted: the stronger belief wins, and weak inferences can’t override strong statements |
| Correction | User says “that’s wrong.” The system extracts a new fact; the old fact may or may not get found and removed | User says “that’s wrong.” The system finds the old belief and retracts it; no replacement needed |
| Retraction | Not possible. “I don’t have a cat anymore” has no fact to store, so nothing happens | Dedicated contraction pipeline: the old belief is marked RETRACTED, and no new belief is created |
| Temporal | “On vacation until Friday” is stored as a timeless fact and is still active next month | “On vacation until Friday” expires automatically; no one has to say “I’m back” |
| Confidence | All facts are equal; a hallucinated inference has the same weight as a direct user statement | Every belief carries an opinion (evidence for, evidence against, uncertainty), so weak inferences can’t change strong beliefs |
| Corroboration | Duplicate extractions are discarded silently | Duplicate extractions strengthen the existing belief; 50 mentions = deeply entrenched |
| Derived facts | “Worked at Stripe, now at Meta”: the job title updates, but “work email is alex@stripe.com” stays stale forever | When a parent belief changes, derived beliefs are flagged for review; stale beliefs don’t persist silently |
| Multi-agent | Each agent has its own silo; no sharing | Unified belief system: cross-agent corroboration strengthens beliefs, and one agent’s hallucination can’t override consensus |
| Rollback | Supersession is permanent; if the system made a mistake, the old fact is gone | Append-only: every belief change is reversible, and rollback restores the previous belief with its full history |
This isn't a feature gap; it's a category difference. A memory system stores what was said, but a belief system maintains a model of what is true.


What belief revision gives you

Your beliefs get stronger over time

When you mention loving coffee in fifty conversations, the system’s belief that you love coffee becomes deeply entrenched. Not because someone labeled it “important,” but because evidence accumulated. Each mention strengthens the belief. A single ambiguous extraction from one conversation can’t override fifty data points.

We use a framework called Subjective Logic to represent this. Every belief has an explicit measure of how much evidence supports it and how much uncertainty remains. New beliefs start uncertain. Corroboration makes them firm. This is computed from structural signals, not from asking the AI “how confident are you?”
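As a sketch, the standard Subjective Logic mapping from evidence counts to a binomial opinion looks like this; the prior weight W = 2 is the conventional default for binomial opinions, though whether XTrace uses exactly this mapping is an assumption on our part:

```python
def opinion(r: float, s: float, W: float = 2.0):
    """Map evidence counts to a Subjective Logic binomial opinion.

    r: observations supporting the belief
    s: observations against it
    W: non-informative prior weight (2 is the standard default)
    Returns (belief, disbelief, uncertainty), which always sum to 1.
    """
    total = r + s + W
    return (r / total, s / total, W / total)

print(opinion(0, 0))   # no evidence yet: fully uncertain -> (0.0, 0.0, 1.0)
print(opinion(50, 0))  # 50 corroborations: belief ~0.96, uncertainty ~0.04
```

Note that confidence here is computed purely from counts of observations, matching the claim above that it comes from structural signals rather than model self-reports.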

Corrections always work

When you say “that’s wrong, I don’t have a dog,” the system listens. Your authority to correct your own beliefs is absolute. It doesn’t matter how entrenched the old belief was. You are the ground truth.

But when the AI infers something that contradicts a well-established belief (like guessing you eat meat because you discussed a steakhouse), the system rejects the inference. The AI can’t unilaterally overrule what you told it.

This asymmetry is by design. Entrenchment protects your beliefs from bad AI inferences. It never protects them from you.
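The asymmetry can be sketched as a single acceptance rule; the function name and strength values are hypothetical, not XTrace's API:

```python
def should_accept(existing_strength: float,
                  incoming_strength: float,
                  incoming_from_user: bool) -> bool:
    """Decide whether an incoming, contradicting claim may revise a belief.

    A direct user statement always wins, no matter how entrenched the
    old belief is. An AI inference must out-weigh the belief it attacks.
    """
    if incoming_from_user:
        return True  # the user is ground truth
    return incoming_strength > existing_strength

# A weak steakhouse inference can't override an entrenched stated belief...
assert not should_accept(0.95, 0.30, incoming_from_user=False)
# ...but the user's own correction always goes through.
assert should_accept(0.95, 0.10, incoming_from_user=True)
```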

Things that expire actually expire

“I’m on vacation until Friday.” On Monday, the system stops believing it, automatically. No one has to tell the AI you came back. The belief has a temporal scope, and when the scope ends, the belief expires.

This applies to project deadlines, temporary states, and time-bounded decisions. Anything with a natural endpoint. The system models time, not just content.
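A minimal sketch of temporal scoping: a belief with a `valid_until` bound simply stops being believed once the bound passes (the dates are illustrative):

```python
from datetime import datetime
from typing import Optional

def is_active(valid_until: Optional[datetime], now: datetime) -> bool:
    """A temporally scoped belief stops being believed once its scope ends."""
    return valid_until is None or now <= valid_until

vacation_ends = datetime(2026, 3, 6, 23, 59)               # "until Friday"
assert is_active(vacation_ends, datetime(2026, 3, 4))      # mid-week: believed
assert not is_active(vacation_ends, datetime(2026, 3, 9))  # Monday: expired
assert is_active(None, datetime(2026, 3, 9))               # no scope: never expires
```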

The AI knows what it doesn’t know

Most memory systems are all-or-nothing: either a fact is stored at full confidence, or it doesn’t exist. XTrace has a middle ground. A weakly supported inference is stored with low confidence: it’s present in the system and available for future corroboration, but it can’t override well-established beliefs and isn’t surfaced in default context.

If the same inference shows up again across multiple conversations, its confidence grows. Eventually it becomes a full belief. But it has to earn its way there through evidence, not through a single lucky extraction.
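Under the Subjective Logic evidence mapping, repeated extractions push a weak inference toward full-belief status; the promotion threshold below is a hypothetical value chosen for illustration, not XTrace's:

```python
def belief_mass(r: float, W: float = 2.0) -> float:
    """Belief mass after r corroborating observations with no counter-evidence:
    the Subjective Logic mapping b = r / (r + W)."""
    return r / (r + W)

PROMOTE_AT = 0.8  # hypothetical threshold for becoming a full belief

assert belief_mass(1) < PROMOTE_AT    # one lucky extraction: stays a weak inference
assert belief_mass(3) < PROMOTE_AT    # a few sightings: stronger, still not promoted
assert belief_mass(10) >= PROMOTE_AT  # repeated corroboration earns full-belief status
```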

Portable across every agent

Here’s why this matters beyond a single product.

When memory lives inside individual agents, every agent builds its own partial model of you. Agent A knows your coding preferences. Agent B knows your writing style. Agent C knows your calendar. None of them have the full picture, and they can’t share.

When memory lives with the user in a unified, portable belief system, every agent connects to the same understanding. The architecture is simple:

Each agent reads from and writes to the same belief system. When the coding agent learns you switched from Python to Rust, the writing agent knows too. When you correct your job title in the calendar agent, every agent updates.

Cross-agent corroboration makes beliefs stronger. When multiple independent agents observe the same thing about you (your coding agent sees Python code, your research agent sees Python library questions, your writing agent sees Python documentation), the belief “user works with Python” becomes deeply entrenched across agents. A single agent’s hallucination can’t override what multiple agents agree on.
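Subjective Logic's cumulative fusion operator is the natural fit here: combining two independent agents' opinions yields a fused opinion with more belief mass and less uncertainty than either input. A sketch, assuming binomial opinions as (belief, disbelief, uncertainty) triples and illustrative numbers (whether XTrace uses this exact operator is an assumption):

```python
def fuse(a, b):
    """Cumulative fusion of two independent binomial opinions (b, d, u).

    Evidence from independent sources accumulates, so the fused opinion
    is more confident than either input alone.
    """
    bA, dA, uA = a
    bB, dB, uB = b
    k = uA + uB - uA * uB  # defined when at least one opinion has uncertainty
    return ((bA * uB + bB * uA) / k,
            (dA * uB + dB * uA) / k,
            (uA * uB) / k)

coding = (0.6, 0.0, 0.4)    # coding agent: sees Python code
research = (0.5, 0.0, 0.5)  # research agent: sees Python library questions
bF, dF, uF = fuse(coding, research)
assert bF > max(0.6, 0.5) and uF < min(0.4, 0.5)  # stronger together
```

This is also why one agent's hallucination can't win: a single low-evidence opinion contributes little against the accumulated mass of agents that agree.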

This is the memory infrastructure that AI agents need. Not per-product silos. A shared, principled model of the user that gets better the more you use it.

The future: shared beliefs

V1 is one user, portable across agents. But the framework extends naturally to where AI is going.

Multi-agent workflows

When agents collaborate on your behalf (a research agent feeding into a writing agent feeding into a publishing agent), they need shared context. Today, this is solved with prompt chains and context passing. It’s brittle. Each handoff loses information.

With a shared belief system, every agent in the workflow reads from the same model. The research agent’s findings update the belief system. The writing agent picks them up automatically. No handoff, no lost context. The belief system is the shared memory of the workflow.

Team and organizational memory

When the “world” isn’t one user but a team, the belief system extends with authority scoping:

  • Personal beliefs like “I prefer dark mode” can only be asserted or revised by you.

  • Organizational beliefs like “We use Kafka for event streaming” can be asserted by anyone with authority, but revision respects the hierarchy. The CTO’s statement carries more evidential weight than a new hire’s.

  • Shared observations like “The deploy pipeline takes 20 minutes” can come from any team member, and the belief strengthens through corroboration across people.

The same entrenchment framework applies. A team belief corroborated by five engineers is harder to dislodge than one stated by a single person. Authority scoping determines who can assert what. The math is the same: Subjective Logic fusion operators work for multi-person just as they do for multi-agent.
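The same evidence mapping extends to teams by scaling each statement's contribution by the speaker's authority; the role names and weights below are invented for illustration, and only the ordering (CTO outweighs new hire) comes from the text above:

```python
AUTHORITY = {"cto": 5.0, "senior": 2.0, "new_hire": 1.0}  # illustrative weights

def team_belief(speakers: list, W: float = 2.0) -> float:
    """Belief mass for an org-scoped belief: each statement contributes
    evidence scaled by the speaker's authority (b = r / (r + W))."""
    r = sum(AUTHORITY[s] for s in speakers)
    return r / (r + W)

# "We use Kafka": the CTO's statement carries more evidential weight...
assert team_belief(["cto"]) > team_belief(["new_hire"])
# ...and five engineers corroborating beat a single engineer's claim.
assert team_belief(["senior"] * 5) > team_belief(["senior"])
```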

Memory as a network

In the long run, memory isn’t just per-user or per-team. It’s a network. Your personal belief system connects to your team’s organizational memory. Your team’s memory connects to the tools and systems you use. Agents flow through the network, carrying beliefs between contexts. Always governed by the same revision principles. Always respecting authority and scope.

The user controls what’s shared and what’s private. The belief system respects boundaries. But within those boundaries, beliefs flow, accumulate evidence, and get stronger across every interaction.

Your memory. Your context. Your control.

© 2026 XTrace. All rights reserved.