Cairn Documentation
AI Governance: Propose → Review → Apply
When you ask Cairn's AI to decompose a subsystem or generate requirements, something unusual happens: nothing changes.
The AI doesn't touch your model. Instead, it produces a ChangeSet — a list of proposed operations — and hands it to you for review. Only when you explicitly apply the ChangeSet does the model update.
This is the core governance mechanism, and it's non-negotiable.
Why AI Shouldn't Directly Mutate Your Model
AI makes mistakes. It hallucinates requirements that don't make sense. It decomposes systems in ways that miss your actual constraints. It names things poorly. It occasionally invents interfaces that violate physics.
If the AI wrote directly to your model, you'd discover these mistakes later — maybe much later — tangled into a web of downstream dependencies. Fixing them would mean archaeology: figuring out what the AI did, what depends on it, and how to unwind it without breaking everything else.
The ChangeSet pattern prevents this. You see exactly what the AI wants to do before it happens. You can accept the good parts and reject the bad parts. You're always in control.
The ChangeSet Contract
A ChangeSet is a list of operations:
- create — add a new node, requirement, interface, state, etc.
- update — modify an existing entity's properties
- delete — remove an entity from the model
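The three operation kinds above can be sketched as plain data. This is an illustrative shape only — the class and field names (`Operation`, `entity_id`, `old`, `new`) are assumptions for the sketch, not Cairn's actual schema:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class Operation:
    """One atomic, inspectable proposal."""
    kind: str                          # "create" | "update" | "delete"
    entity_id: str                     # which entity is affected
    entity_type: str                   # node, requirement, interface, state, ...
    old: Optional[dict] = None         # prior properties (updates/deletes)
    new: Optional[dict] = None         # proposed properties (creates/updates)

@dataclass
class ChangeSet:
    """What a specialist returns instead of mutating the model."""
    source: str                        # which specialist proposed it
    operations: list = field(default_factory=list)

cs = ChangeSet(source="architect", operations=[
    Operation("create", "node-7", "node", new={"name": "Thermal Control"}),
    Operation("update", "req-3", "requirement",
              old={"text": "Shall be fast"},
              new={"text": "Shall respond within 200 ms"}),
])
```

Keeping `old` alongside `new` on updates is what makes the diff view (and any later revert) possible without consulting the model.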
Each operation is atomic and inspectable. When you review a ChangeSet, you see explicit operations you can evaluate one at a time — not "the AI did some stuff."
Operation-by-Operation Review
The review screen shows each operation with:
- What entity is affected
- What's being created, changed, or removed
- A diff view for updates (old value vs. new value)
- Accept, reject, or edit controls per operation
You can accept the node creations but reject a requirement that doesn't make sense. You can edit a name before accepting. You can reject the entire ChangeSet if the AI misunderstood your request.
Partial acceptance is the norm, not the exception. AI gets things 80% right; your job is to catch the other 20%.
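The partial-acceptance workflow amounts to filtering operations by the reviewer's decision before applying them. A minimal sketch, assuming operations are dicts and the model is a dict keyed by entity ID (the function name and decision format are illustrative, not Cairn's API):

```python
def apply_changeset(model: dict, operations: list, decisions: dict) -> dict:
    """Apply only the operations the reviewer accepted.

    decisions maps operation index -> "accept" or "reject";
    anything undecided is treated as rejected.
    """
    for i, op in enumerate(operations):
        if decisions.get(i) != "accept":
            continue  # rejected ops never touch the model
        if op["kind"] == "create":
            model[op["entity_id"]] = op["new"]
        elif op["kind"] == "update":
            # merge proposed properties over the existing entity
            model[op["entity_id"]] = {**model[op["entity_id"]], **op["new"]}
        elif op["kind"] == "delete":
            model.pop(op["entity_id"], None)
    return model

model = {"req-3": {"text": "Shall be fast"}}
ops = [
    {"kind": "create", "entity_id": "node-7", "new": {"name": "Thermal Control"}},
    {"kind": "update", "entity_id": "req-3", "new": {"text": "Shall respond within 200 ms"}},
]
# Accept the node creation, reject the requirement rewrite:
model = apply_changeset(model, ops, {0: "accept", 1: "reject"})
```

The rejected update leaves `req-3` untouched, while the accepted creation lands — exactly the "accept the good parts, reject the bad parts" behavior described above.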
History and Rollback
Every applied ChangeSet is recorded in the History tool with:
- Timestamp
- Source (which specialist, or "user" for manual edits)
- Summary of operations
- Full operation list (expandable)
If you apply something and regret it, you can find it in history and see exactly what changed. Full undo is on the roadmap — for now, history gives you the audit trail to manually revert.
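A history entry and a manual revert might look like the sketch below. This is not Cairn's implementation — `record` and `invert` are hypothetical helpers — but it shows why storing prior state with each operation makes manual revert mechanical: each operation kind has a natural inverse.

```python
import datetime

def record(history: list, source: str, operations: list) -> None:
    """Append an audit entry for an applied ChangeSet."""
    history.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source": source,                         # specialist name, or "user"
        "summary": f"{len(operations)} operation(s)",
        "operations": operations,                 # full list, expandable in the UI
    })

def invert(op: dict) -> dict:
    """Build the operation that undoes `op`, given stored prior state."""
    if op["kind"] == "create":
        return {"kind": "delete", "entity_id": op["entity_id"], "old": op.get("new")}
    if op["kind"] == "delete":
        return {"kind": "create", "entity_id": op["entity_id"], "new": op.get("old")}
    return {"kind": "update", "entity_id": op["entity_id"],
            "old": op.get("new"), "new": op.get("old")}
```

To revert an applied ChangeSet by hand, you would invert its operations and apply them in reverse order — which is only possible because every entry carries the full operation list, not just a summary.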
Trust Calibration
Start skeptical. Read every operation in your first few ChangeSets. Reject liberally. Edit names to match your conventions.
As you see what the AI gets right and wrong, you'll develop intuition. The Architect specialist is usually solid on decomposition structure. The Requirements specialist sometimes over-generates. The Behavior specialist needs more guidance on state granularity.
Calibrate your trust per specialist, per context. The ChangeSet pattern gives you the data to do that calibration safely.
The V&V Literature Agrees
This isn't a novel idea. Verification and validation literature has long argued that model changes should be traceable, reviewable, and attributable. Sargent's simulation V&V framework, Loper's systematic approach — both emphasize that you can't validate what you can't inspect.
Cairn applies that principle to AI-assisted engineering. Every AI action is a proposal. Every applied change is traceable. The model's provenance is always clear.