Mental Model Alignment
Canonical
Confidence
Cognitive Load
Medium
Evidence
production validated
Impact
feature
Ethical Guardrail
Never assume the user understands the system's internal logic. Never hide discrepancies between user expectation and system behavior. Always use plain language.
Design Intent
When the system's model matches the user's mental model, everything feels intuitive. When they mismatch, confusion and errors follow. Mental Model Alignment surfaces, checks, and gently corrects the user's understanding in real time.
Psychology Principle
When the system's model matches the user's mental model, everything feels intuitive. When they mismatch, confusion follows.
Description
Surface, check, and gently correct the user's understanding of the system in real time to eliminate confusion.
When to use
Any complex feature, new capability, or AI-driven interaction where the system behaves in non-obvious ways.
Example
Notion Database Relations: a live preview card shows exactly what the relation currently points to, plus a one-tap "Fix this relation" editor.
Autonomy Compatibility
Behavioral Objective
Users accurately understand how the system works and predict its behavior correctly.
- Fewer errors and support questions
- Higher confidence when using advanced features
- Faster onboarding and feature adoption
Target Actor
role
Everyday user
environment
Complex or AI-driven features
emotional baseline
Uncertainty when things behave unexpectedly
ai familiarity
medium
risk tolerance
medium
Execution Model
model_visibility
Goal: Make the system's current understanding visible and simple.
Failure signal: User has no idea what the system thinks is happening.
mismatch_detection
Goal: Notice when user actions suggest a different mental model.
Failure signal: User is surprised by system behavior.
easy_correction
Goal: Let the user instantly teach or correct the model.
Failure signal: User must hunt for settings or restart the flow.
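The three mechanisms above can be sketched as a single loop: hold a plain-language model of what the system believes it is doing, compare it against what the user seems to expect, and accept one-step corrections. The class and field names below are illustrative assumptions, not part of any real API.

```python
from dataclasses import dataclass, field

@dataclass
class SystemModel:
    """What the system believes it is doing, stated in everyday language."""
    summary: str                          # surfaced to the user verbatim
    scope: set = field(default_factory=set)  # what the system is acting on

@dataclass
class MentalModelAligner:
    model: SystemModel

    def surface(self) -> str:
        # model_visibility: show the current understanding, plainly
        return self.model.summary

    def detect_mismatch(self, expected_scope: set) -> set:
        # mismatch_detection: what the user expects but the system isn't doing
        return expected_scope - self.model.scope

    def correct(self, additions: set, new_summary: str) -> None:
        # easy_correction: a one-step update, synced instantly
        self.model.scope |= additions
        self.model.summary = new_summary
```

With the first-page example: `detect_mismatch({"page 1", "page 2"})` on a model scoped to `{"page 1"}` returns `{"page 2"}`, which is exactly the gap the agent should surface.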
Failure Modes
Model visualization is too technical -> Use everyday language and visuals
Correction path feels like extra work -> Keep it one-tap and contextual
Over-alignment creates constant interruptions -> Only surface when mismatch probability is high
Model updates too slowly -> Sync instantly with user corrections
Cultural or expertise differences -> Offer novice vs expert model views
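The over-alignment failure mode suggests a gating rule: interrupt only when a mismatch is likely, and cap interruptions per session. The threshold and cap below are placeholder values, not recommendations from the pattern itself.

```python
def should_surface(mismatch_probability: float,
                   interruptions_this_session: int,
                   threshold: float = 0.7,      # assumed cutoff, tune per product
                   max_interruptions: int = 3) -> bool:
    """Surface an alignment prompt only when a mismatch is probable
    and the user has not already been interrupted too often."""
    if interruptions_this_session >= max_interruptions:
        return False
    return mismatch_probability >= threshold
```

A low-probability signal (say 0.4) stays silent; a high-probability one (0.9) surfaces, until the session cap is hit.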
Agent Decision Protocol
Triggers
- User shows surprise at system behavior
- New or complex feature is introduced
- User asks clarifying questions about how something works
Escalation Strategy
L1: Diagnose the failing element via behavioral_signals
L2: Nudge -- adjust copy, timing, or visual salience
L3: Restructure -- simplify flow, add progressive disclosure, restructure form
L4: Constrain -- lock Autonomy Dial to confirm_execution, add Strategic Friction
L5: Yield -- flag for human designer or domain expert review
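The L1-L5 ladder is strictly ordered, which makes it natural to encode as a list the agent walks one rung at a time, yielding to a human at the top. This is a minimal sketch; the function name and data shape are assumptions.

```python
# The escalation ladder from the protocol, in order.
ESCALATION_LADDER = [
    ("L1", "Diagnose the failing element via behavioral signals"),
    ("L2", "Nudge: adjust copy, timing, or visual salience"),
    ("L3", "Restructure: simplify flow, add progressive disclosure"),
    ("L4", "Constrain: lock Autonomy Dial to confirm_execution, add Strategic Friction"),
    ("L5", "Yield: flag for human designer or domain expert review"),
]

def escalate(current_level: str):
    """Return the next escalation level, or None once L5 (Yield) is reached."""
    levels = [level for level, _ in ESCALATION_LADDER]
    index = levels.index(current_level)
    return levels[index + 1] if index + 1 < len(levels) else None
```

The agent only moves up a rung when the previous intervention fails; `escalate("L5")` returning `None` encodes "no further automated options, hand off to a human".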
Example
User expects the AI to read an entire document but it only scanned the first page -> agent shows "I'm currently seeing only the first page -- want me to load the rest?"
Behavioral KPIs
Primary
- User accuracy predicting system behavior
- Error rate due to model mismatch
- Time to correct a mismatch
Risk
- Confusion reports or "Why did it do that?" questions
- Feature abandonment due to misunderstanding
Trust
- User-reported "I always know what's going on"
- Autonomy Dial usage when agent explains its model
Decay Monitoring
Revalidate when
- New AI capabilities or features change how the system works
- User expertise level shifts
- Product complexity increases
Decay signals
- Rising confusion or surprise reactions
- Drop in user prediction accuracy
- Feedback that "the app does things I don't expect"
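The decay signals above reduce to a simple check against session baselines: flag the pattern for revalidation when prediction accuracy drops or confusion reports rise. The metric names and the 10% tolerance are illustrative assumptions.

```python
def needs_revalidation(prediction_accuracy: float,
                       baseline_accuracy: float,
                       confusion_reports: int,
                       baseline_confusion: int,
                       drop_tolerance: float = 0.10) -> bool:
    """Flag this pattern for revalidation when user prediction accuracy
    falls meaningfully below baseline, or confusion reports exceed it."""
    accuracy_dropped = prediction_accuracy < baseline_accuracy - drop_tolerance
    confusion_rising = confusion_reports > baseline_confusion
    return accuracy_dropped or confusion_rising
```

For example, accuracy falling from a 0.9 baseline to 0.7 trips the check, while a dip to 0.85 within tolerance does not.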
Pattern Relationships
Conflicts with