Product

Four Questions Your Model Can't Answer

Every systems engineer asks them instinctively. No tool surfaces them. Here's what changes when the model is smart enough to answer back.

Greg · April 2026 · 8 min read

Your system model knows what things are called. It knows how they connect. If you've been disciplined, it knows what each component requires and how those requirements trace to verification. This is valuable — it took months to build — and it answers exactly one category of question: what is the system made of?

But that's not the question that keeps you up at night. The questions that matter — the ones you ask in design reviews, argue about in hallways, and lose sleep over before critical milestones — are different. They are structural questions about the system's readiness, completeness, coherence, and reasoning history. And no tool on the market answers any of them.

We've identified four. Each one corresponds to a different analytical lens — a different way of computing insight from the same underlying model data. Each one reveals something the others miss. Together, they transform a system model from a static archive into a thinking tool.

A model that can only tell you what the system is made of is an inventory. A model that can tell you what's underneath it, what's missing from it, what it means, and how it got here — that's engineering intelligence.
Causality Lens

"What must exist before this can be realized?"

Every component in your system sits at the apex of a pyramid of prerequisite technologies. The battery management system doesn't just need child nodes for monitoring, balancing, and thermal protection — it depends on cell chemistry research (knowledge layer), battery management ICs (parts layer), thermal simulation tools (instruments layer), and electrochemical manufacturing processes (supporting technology layer). If any layer of the pyramid is immature or missing, the component carries risk that the decomposition tree cannot show you.

The Causality Lens inverts the tree. Instead of showing what a node contains, it shows what a node stands on. Each layer is color-coded by technology maturity — green for mature, amber for growth phase, red for early development. Gaps in the pyramid surface as actionable warnings.

Example: Autonomous Delivery Rover

The Nav Computer node shows a mature pyramid — GPS, IMU, and LiDAR are all production-ready technologies. But the Autonomy software layer depends on ML-based path planning at TRL 5–6. The Causality Lens reveals that the hardware is ready but the software intelligence isn't — a risk invisible in the architecture tree, where both appear as equal children of the Navigation subsystem.
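The gap detection described above can be sketched in a few lines. This is a minimal illustration, not Cairn's implementation: the `Prerequisite` record, the layer names, and the maturity bands are assumptions standing in for whatever the real tool stores.

```python
from dataclasses import dataclass

# Maturity bands mirroring the lens's color coding (green / amber / red).
MATURE, GROWTH, EARLY = "mature", "growth", "early"

@dataclass
class Prerequisite:
    name: str
    layer: str      # e.g. "knowledge", "parts", "instruments", "supporting"
    maturity: str   # MATURE, GROWTH, or EARLY

def maturity_gaps(prereqs: list[Prerequisite]) -> list[str]:
    """Return a warning for every prerequisite that is not production-ready."""
    return [
        f"{p.layer}: {p.name} is {p.maturity}"
        for p in prereqs
        if p.maturity != MATURE
    ]

# The rover example: the hardware layers are mature, but the
# ML path planner (illustrative TRL 5-6 item) is still early-stage.
nav_stack = [
    Prerequisite("GPS receiver", "parts", MATURE),
    Prerequisite("IMU", "parts", MATURE),
    Prerequisite("LiDAR", "parts", MATURE),
    Prerequisite("ML-based path planning", "knowledge", EARLY),
]
print(maturity_gaps(nav_stack))  # -> ['knowledge: ML-based path planning is early']
```

The decomposition tree would show all four entries as equal children; only the maturity annotation makes the software gap visible.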
Completeness Lens

"What's missing from this model?"

A system model's fidelity isn't a single number — it's multidimensional. Completeness can be measured across three independent axes: entity coverage (does the node have sufficient children?), process coverage (does it have defined behaviors?), and relationship coverage (does it have interfaces and traced requirements?). A node can score 95% on entities and 0% on behaviors — it's been decomposed but nobody defined how the parts actually work.

The Completeness Lens renders a three-axis radar chart per node and a heatmap across the entire tree. Red zones aren't just warnings — they're actionable. Click a gap and the tool pre-fills a prompt. One click from gap detection to AI-assisted resolution.

Example: Autonomous Delivery Rover

The Mobility subsystem scores well on entities and relationships. But process coverage is 50% — only drive control and terrain adaptation have defined behaviors. Obstacle avoidance is listed as a behavior on the parent node but has no state machine. The radar chart makes this imbalance visible at a glance.
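The three-axis scoring can be sketched as a ratio of defined elements to expected elements per axis. The expected counts here are invented for illustration — the real heuristics would come from the model itself, not a hard-coded constant.

```python
def coverage(node: dict) -> dict[str, float]:
    """Score a node on the three completeness axes: entity, process, relationship.
    Expected counts are illustrative assumptions, not real heuristics."""
    expected = {"children": 4, "behaviors": 4, "interfaces": 4}
    return {
        "entity": min(1.0, len(node.get("children", [])) / expected["children"]),
        "process": min(1.0, len(node.get("behaviors", [])) / expected["behaviors"]),
        "relationship": min(1.0, len(node.get("interfaces", [])) / expected["interfaces"]),
    }

# The rover's Mobility subsystem: well decomposed and well connected,
# but only two of its behaviors are actually defined.
mobility = {
    "children": ["wheel hub", "suspension", "drive motor", "steering assembly"],
    "behaviors": ["drive control", "terrain adaptation"],  # obstacle avoidance missing
    "interfaces": ["power bus", "CAN bus", "chassis mount", "nav commands"],
}
print(coverage(mobility))  # process axis lags at 0.5
```

A radar chart is just these three numbers plotted on three spokes; the imbalance between a full entity axis and a half-empty process axis is what jumps out.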
Narrative Lens

"What does this system actually do?"

Every system has a story — a primary transformation chain that converts its fundamental input into its fundamental output. For the rover, the story is: solar array generates power, which power distribution delivers to the nav computer, which commands hub motors to traverse terrain. That's the mainstay — the reason the system exists, expressed as a narrative arc.

The Narrative Lens identifies this primary chain and renders it as a systemigram — a directed graph where nodes become noun phrases and connections become verb phrases, readable as a sentence. The result is a one-page diagram that a stakeholder who has never seen a SysML diagram can read, understand, and question.

Example: Autonomous Delivery Rover

A program manager reviews the Narrative Lens and immediately asks: "Why does the thermal management system connect to the battery but not to the hub motors?" It's a good question — the motors generate significant waste heat under load. The systemigram made the gap visible because it shows relationships as a readable story, not as a wiring diagram only an engineer can parse.
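The nouns-and-verbs rendering can be sketched directly: collapse a contiguous chain of (source, verb, target) edges into one sentence. The edge tuples and verb phrasings below are assumptions for illustration.

```python
def read_mainstay(edges: list[tuple[str, str, str]]) -> str:
    """Collapse a contiguous chain of (source, verb, target) edges
    into a single readable sentence."""
    sentence = f"The {edges[0][0]} {edges[0][1]} the {edges[0][2]}"
    prev_target = edges[0][2]
    for src, verb, tgt in edges[1:]:
        # The mainstay must be one unbroken chain: each edge starts
        # where the previous one ended.
        assert src == prev_target, "mainstay must be a contiguous chain"
        sentence += f", which {verb} the {tgt}"
        prev_target = tgt
    return sentence + "."

# The rover's primary transformation chain, as edge tuples.
mainstay = [
    ("Solar Array", "generates power for", "Power Distribution"),
    ("Power Distribution", "delivers power to", "Nav Computer"),
    ("Nav Computer", "commands", "Hub Motors"),
    ("Hub Motors", "traverse", "Terrain"),
]
print(read_mainstay(mainstay))
```

The output is one sentence a non-engineer can read aloud, which is exactly what lets the program manager in the example spot a missing relationship.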
Dendritic Lens

"How did we arrive at this design?"

Every system model is the result of hundreds of decisions. Most of those decisions eliminated alternatives — hull geometries that were evaluated and rejected, materials that failed under the wrong loading regime, communication approaches that couldn't meet bandwidth constraints. The finished model shows the surviving paths. It says nothing about the dead ones. And the dead paths contain the engineering judgment that justifies the living ones.

The Dendritic Lens makes pruned alternatives visible as first-class elements of the model. Each pruned path carries its rationale — why it was eliminated, which physical principle or constraint disqualified it, and what cascading consequence the decision triggered in other branches. The model remembers its own intellectual history.

Example: Autonomous Delivery Rover

The Power subsystem shows three pruned alternatives beneath the battery selection: hydrogen fuel cells, supercapacitors, and lead-acid — each rejected with specific physics-grounded reasoning. A reviewer asks: "Did we consider solid-state lithium?" The Dendritic Lens shows it wasn't evaluated — a gap in the analysis, surfaced by making the pruning history explicit.
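Making pruned paths first-class can be sketched as a decision record that stores rejected alternatives alongside the winner. The field names and rationale strings below are illustrative assumptions, not the real schema.

```python
from dataclasses import dataclass, field

@dataclass
class PrunedPath:
    alternative: str
    rationale: str    # why it was eliminated
    constraint: str   # the principle or budget that disqualified it

@dataclass
class DecisionNode:
    chosen: str
    pruned: list[PrunedPath] = field(default_factory=list)

    def was_evaluated(self, name: str) -> bool:
        """An alternative counts as evaluated if it won or was explicitly pruned."""
        return name == self.chosen or any(p.alternative == name for p in self.pruned)

# The rover's battery selection, with its pruning history attached.
battery = DecisionNode(
    chosen="Li-ion pack",
    pruned=[
        PrunedPath("hydrogen fuel cell", "no refueling infrastructure on routes", "logistics"),
        PrunedPath("supercapacitors", "energy density too low for mission duration", "energy density"),
        PrunedPath("lead-acid", "too heavy for the mass budget", "mass"),
    ],
)
print(battery.was_evaluated("solid-state lithium"))  # -> False: a gap in the analysis
```

The reviewer's question becomes a query: an alternative absent from both the chosen path and the pruning history was never evaluated at all.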

Four Views of One Model

These are not four separate features. They are four computations over the same entity store — the same nodes, interfaces, requirements, and behaviors that the model already contains. Each lens reads the same data and reveals a different dimension of it. And they reinforce each other in ways that no single view can achieve alone.

How the Lenses Converge on a Single Node

Completeness flags the Steering Assembly: zero behaviors defined, zero interfaces traced. Overall score: 13%.
Causality reveals why: no prerequisite technologies identified. The node exists in the tree but has no foundation beneath it.
Narrative confirms the gap: the Steering Assembly is disconnected from the primary transformation chain. It receives no commands and produces no output.
Dendritic explains the history: steering was added during a late decomposition pass with no alternatives evaluated. The pruning history is empty.

Four lenses, one conclusion: this node needs attention. Not because any single metric is red, but because every analytical dimension agrees — the Steering Assembly is a placeholder, not a design.
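The convergence itself is computable. A minimal sketch, with an invented per-node report shape: flag a node only when all four lenses independently signal a problem, so no single red metric triggers noise on its own.

```python
def needs_attention(report: dict) -> bool:
    """Flag a node only when every lens agrees it is a placeholder.
    The report keys and the 0.25 threshold are illustrative assumptions."""
    return all([
        report["completeness"] < 0.25,      # Completeness: near-empty scores
        not report["prerequisites"],        # Causality: no foundation identified
        not report["in_mainstay"],          # Narrative: outside the primary chain
        not report["pruned_alternatives"],  # Dendritic: no alternatives evaluated
    ])

# The Steering Assembly from the walkthrough above.
steering = {
    "completeness": 0.13,
    "prerequisites": [],
    "in_mainstay": False,
    "pruned_alternatives": [],
}
print(needs_attention(steering))  # -> True
```

A node that fails only one check stays off the list; the signal comes from agreement across independent analytical dimensions, not from any single metric.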

This is what becomes possible when a model is more than an archive. When the same data that describes a system can also be queried for readiness, completeness, coherence, and reasoning history, the model stops being a documentation burden and becomes the most valuable artifact in the engineering process — a living representation of the team's understanding that gets smarter as the design matures.

Four questions. Four lenses. One model. That's the architecture we're building.

Cairn is the AI engineering workbench for systems that matter.

Sign up and start building for free.