Cairn Documentation

The Four Questions Your Model Must Answer

Every system model, no matter how detailed, faces the same four questions. Most tools can't answer them. Cairn's analytical lenses exist specifically to make these questions answerable.

Question 1: What's Underneath?

A system doesn't exist in a vacuum. It depends on layers of prerequisite knowledge, technologies, and capabilities that must exist before the system can work.

Your autonomous rover depends on LIDAR technology. LIDAR depends on laser physics and signal processing. Signal processing depends on digital electronics and Fourier analysis. Each layer rests on the layers beneath it.

Traditional MBSE tools don't model this. They show your system's structure but not its technological foundation. You can't see which capabilities are mature and which are research problems. You can't identify the weakest layer holding up your design.

The Causality Lens answers this question. It shows your system as a pyramid of prerequisite technologies — from the knowledge foundation at the bottom through enabling technologies, domain technologies, and finally your capstone system at the top. Each layer is colored by maturity (TRL). Gaps become visible.
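The layered dependency idea can be sketched in a few lines of code. This is an illustrative model only — the names (`TechNode`, `weakest_prerequisite`) and the recursive walk are assumptions for the sketch, not Cairn's actual API:

```python
# Hypothetical sketch of a prerequisite pyramid with a TRL per node.
# TechNode and weakest_prerequisite are illustrative names, not Cairn's API.
from dataclasses import dataclass, field

@dataclass
class TechNode:
    name: str
    trl: int                                     # Technology Readiness Level, 1-9
    prerequisites: list["TechNode"] = field(default_factory=list)

def weakest_prerequisite(node: TechNode) -> TechNode:
    """Return the lowest-TRL node anywhere in this node's foundation, itself included."""
    weakest = node
    for prereq in node.prerequisites:
        candidate = weakest_prerequisite(prereq)
        if candidate.trl < weakest.trl:
            weakest = candidate
    return weakest

# The rover example from above: each layer rests on the layers beneath it.
fourier = TechNode("Fourier analysis", trl=9)
electronics = TechNode("Digital electronics", trl=9)
dsp = TechNode("Signal processing", trl=8, prerequisites=[electronics, fourier])
lasers = TechNode("Laser physics", trl=9)
lidar = TechNode("LIDAR", trl=7, prerequisites=[lasers, dsp])
rover = TechNode("Autonomous rover", trl=4, prerequisites=[lidar])

# Here the rover itself (TRL 4) is the least mature layer in its own pyramid.
print(weakest_prerequisite(rover).name)
```

A recursive minimum over the pyramid is exactly the "weakest layer holding up your design" question made computable.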

Question 2: What's Missing?

Every model is incomplete. The question is whether you know where the gaps are.

A node might have detailed requirements but no behavior model. Another might have a rich state machine but no verification records. A third might be thoroughly specified but completely disconnected from its siblings — no interfaces, no trace links.

Traditional tools don't aggregate this. You'd have to manually check each node for each type of content, building a mental map of where the model is thin.

The Completeness Lens answers this question. It computes three fidelity scores per node:

  • Entity coverage (how many nodes are defined vs. implied)
  • Process coverage (how many behaviors and states are modeled)
  • Relationship coverage (how many interfaces and traces exist)

A radar chart shows each node's shape. A heatmap shows the whole tree. Gaps light up. One-click prompts offer to fix them.
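To make the three scores concrete, here is a minimal sketch of how per-node fidelity might be computed. The field names, expected counts, and scoring formulas are assumptions for illustration, not Cairn's actual schema:

```python
# Hypothetical sketch of the three per-node fidelity scores.
# Field names and scoring rules are illustrative, not Cairn's schema.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    children_defined: int = 0      # child nodes explicitly modeled
    children_implied: int = 0      # child nodes referenced but never defined
    behaviors: int = 0             # behavior/state models present
    behaviors_expected: int = 1
    links: int = 0                 # interfaces + trace links present
    links_expected: int = 1

def fidelity(node: Node) -> dict[str, float]:
    """Return entity/process/relationship coverage, each in [0, 1]."""
    total_children = node.children_defined + node.children_implied
    return {
        "entity": node.children_defined / total_children if total_children else 1.0,
        "process": min(node.behaviors / node.behaviors_expected, 1.0),
        "relationship": min(node.links / node.links_expected, 1.0),
    }

# A node with detailed structure and traces but no behavior model:
nav = Node("Navigation", children_defined=3, children_implied=1,
           behaviors=0, behaviors_expected=2, links=4, links_expected=4)
print(fidelity(nav))  # the process score of 0.0 is the gap that lights up
```

Three numbers per node is what makes both the radar chart (one node's shape) and the heatmap (every node at once) renderable.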

Question 3: What Does It Mean?

A system is more than a parts list. It transforms inputs into outputs through a chain of processing steps. But most models don't surface that chain — they show structure without narrative.

What actually happens when a delivery request arrives? Which subsystems participate? In what order? What's the main thread of transformation vs. the supporting functions?

Traditional tools show boxes and lines. They don't distinguish the critical path from the periphery. They don't generate explanations. They don't help you communicate what the system actually does.

The Narrative Lens answers this question. It identifies the "mainstay" — the primary transformation chain from input to output — and renders it as a readable systemigram. AI generates verb phrases for each connection. The result is a diagram you can show stakeholders that explains the system's purpose, not just its structure.
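One plausible way to identify a mainstay is to treat the model's flow graph as a DAG and take the longest input-to-output path as the primary transformation chain. The sketch below assumes that heuristic; Cairn's actual selection logic may differ:

```python
# Hypothetical sketch: the "mainstay" as the longest input-to-output path
# through the flow graph. The real selection heuristic may differ.
def mainstay(edges: dict[str, list[str]], source: str, sink: str) -> list[str]:
    """Longest simple path from source to sink in a DAG (exhaustive walk)."""
    best: list[str] = []

    def walk(node: str, path: list[str]) -> None:
        nonlocal best
        if node == sink:
            if len(path) > len(best):
                best = path[:]
            return
        for nxt in edges.get(node, []):
            walk(nxt, path + [nxt])

    walk(source, [source])
    return best

# Delivery-request example: one main thread, one supporting function.
flow = {
    "Request intake": ["Route planner", "Billing"],
    "Route planner": ["Navigation"],
    "Navigation": ["Drive control"],
    "Drive control": ["Delivery confirmation"],
    "Billing": ["Delivery confirmation"],
}
print(" -> ".join(mainstay(flow, "Request intake", "Delivery confirmation")))
```

The longer chain through the route planner becomes the mainstay; billing falls to the periphery — precisely the critical-path vs. supporting-function distinction the lens draws.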

Question 4: How Did We Get Here?

Engineering is decision-making. Every decomposition choice, every technology selection, every interface definition is a decision with alternatives that weren't chosen.

Why is the power system battery-electric instead of fuel cell? Why does navigation use LIDAR instead of pure vision? What drove those choices? What would have to change for the alternatives to become viable again?

Traditional tools show the current design. The alternatives that were considered and rejected — the "dead paths" — exist only in meeting notes and engineers' memories. Six months later, nobody remembers why Option B was rejected. Someone proposes it again. The team re-litigates the same decision.

The Dendritic Lens answers this question. Pruned alternatives are first-class nodes with explicit prune reasons, first principles, and cross-branch dependencies. The tree shows active and rejected paths. Decision history is inspectable forever.
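What "first-class nodes" means for a pruned alternative can be sketched as a record type. The field names and example values below are illustrative assumptions, not Cairn's actual data model:

```python
# Hypothetical sketch: pruned alternatives kept as first-class records
# alongside the active choice. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DecisionNode:
    name: str
    status: str = "active"                 # "active" or "pruned"
    prune_reason: str = ""                 # why this branch was rejected
    first_principles: list[str] = field(default_factory=list)
    depends_on: list[str] = field(default_factory=list)  # cross-branch links

power_options = [
    DecisionNode("Battery-electric"),
    DecisionNode("Fuel cell", status="pruned",
                 prune_reason="Hydrogen storage exceeds the mass budget",
                 first_principles=["power subsystem mass <= budget"],
                 depends_on=["Chassis structural margin"]),
]

# Six months later, the rejection is inspectable instead of re-litigated:
for node in power_options:
    if node.status == "pruned":
        print(f"{node.name}: {node.prune_reason}")
```

Because the prune reason and its first principles are stored, the model can also answer the reverse question: if the mass budget changes, the fuel-cell branch is the one worth revisiting.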

The Four Lenses Reinforce Each Other

These aren't four separate tools — they're four views of the same model, and they interact:

  • Completeness flags a node with no behavior modeled
  • You check Causality — it has no prerequisite technologies
  • Narrative shows it's disconnected from the transformation chain
  • Dendritic reveals it was a hasty decomposition without evaluation

Each lens catches something the others miss. Together, they make the model self-auditing.

Read Four Questions Your Model Can't Answer for the full framework with examples.