Three Strategies for Behavior Change
Confidence
Canonical
Cognitive Load
Low
Evidence
production validated
Impact
product
Ethical Guardrail
Agent must always declare which of the three strategies it is using before suggesting a flow.
Design Intent
Product workflows span a massive range of cognitive weight -- from posting a status update to authorizing a production deployment. The Three Strategies pattern prevents the most common design mistake: treating every action as if it requires the same level of user engagement. By forcing designers and agents to classify each target action as Cheat, Habit, or Conscious Action before any UI work begins, the pattern eliminates wasted friction on routine tasks and ensures high-stakes decisions get the deliberation they demand. For AI agents, strategy selection is the first decision in any workflow recommendation -- an agent that applies Cheat logic to a security review, or Conscious Action overhead to a daily standup, will erode trust immediately.
Psychology Principle
There are only three reliable ways to drive action: remove the need for a decision (Cheat), build an automatic routine (Habit), or support conscious, deliberate choice (Conscious Action).
Description
From Wendel's Designing for Behavior Change: every target action falls into one of three strategy buckets. Cheat automates or defaults the action so the user barely thinks about it. Habit turns the action into an automatic routine, using Fogg's B=MAT model. Conscious Action supports high-stakes deliberation with full information and strategic friction. Choosing the right strategy before designing any UI makes the rest of the work dramatically easier.
When to use
At the start of any new feature design. Pick the strategy before picking the UI pattern.
Example
Auto-format Slack message with thread summary (Cheat), daily standup prompt triggered after morning calendar event (Habit), approving a $50k+ payment authorization in Stripe (Conscious Action).
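The classification behind these examples can be sketched as a small heuristic. This is illustrative only: the `Strategy` values mirror the pattern, but the `high_stakes` / `automatable` / `recurring` inputs and the fallback-to-deliberation rule are assumptions, not canonical criteria.

```python
from enum import Enum

class Strategy(Enum):
    CHEAT = "cheat"                        # automate or default the action
    HABIT = "habit"                        # cue-routine-reward loop
    CONSCIOUS_ACTION = "conscious_action"  # full deliberation with strategic friction

def classify(high_stakes: bool, automatable: bool, recurring: bool) -> Strategy:
    """Assign a target action to a strategy bucket (assumed heuristic)."""
    if high_stakes:
        # High-stakes actions always get deliberation scaffolding.
        return Strategy.CONSCIOUS_ACTION
    if automatable:
        # Routine, low-stakes, automatable: remove the decision entirely.
        return Strategy.CHEAT
    if recurring:
        # Recurring workflow the user still performs: build a habit.
        return Strategy.HABIT
    # Ambiguous cases default to deliberation (assumption; see the L5 yield step).
    return Strategy.CONSCIOUS_ACTION
```

With the examples above: a $50k payment approval is `high_stakes=True` (Conscious Action), a Slack thread summary is automatable (Cheat), and a daily standup prompt is recurring but not automatable (Habit).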
Autonomy Compatibility
Behavioral Objective
Design teams and AI agents classify every target action into the correct strategy bucket before proposing any interface or workflow.
- Routine product tasks are automated or defaulted, eliminating unnecessary cognitive overhead
- Recurring workflows develop into reliable habits through consistent cue-routine-reward loops
- High-stakes financial and security decisions receive appropriate deliberation scaffolding
Target Actor
role
Product Designer / AI Agent / Feature Team Lead
environment
Pre-design phase of any product feature, applied before wireframes or prototypes begin
emotional baseline
Analytical, seeking a defensible rationale for design direction
ai familiarity
medium-to-high (this is a meta-pattern used by builders, not end users directly)
risk tolerance
low -- misclassification leads to either wasted friction or dangerous under-deliberation
Execution Model
classify
Identify the target action and classify it into one of the three strategy buckets based on stakes, frequency, and cognitive weight.
No explicit strategy classification exists for the target action -- design proceeds on intuition.
validate
Cross-check the chosen strategy against the target actor, workflow context, and security requirements. A Cheat strategy on a security-critical action is always invalid.
Strategy is applied to an action where it is inappropriate (e.g., Cheat on a $50k payment approval).
apply
Select and configure the supporting patterns that match the chosen strategy. Cheat pulls from Action Structuring and Progressive Disclosure. Habit pulls from CREATE cue and timing stages. Conscious Action pulls from Strategic Friction and Intent Preview.
Supporting patterns are mismatched to the strategy (e.g., Strategic Friction applied to a Cheat workflow).
monitor
Track whether the chosen strategy is producing the expected behavioral outcome. If a Cheat workflow has high abandonment, the action may need reclassification.
Strategy-specific KPIs are not being tracked, or reclassification signals are ignored.
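The classify, validate, and apply steps can be sketched as a minimal pipeline. The `PATTERN_BUNDLES` mapping restates the supporting patterns named above; the `TargetAction` dataclass and its `security_flag` field are assumed shapes for illustration, not an established schema.

```python
from dataclasses import dataclass

# Supporting patterns per strategy, as listed in the apply step.
PATTERN_BUNDLES = {
    "cheat": ["Action Structuring", "Progressive Disclosure"],
    "habit": ["CREATE cue stage", "CREATE timing stage"],
    "conscious_action": ["Strategic Friction", "Intent Preview"],
}

@dataclass
class TargetAction:
    name: str
    strategy: str
    security_flag: bool = False

def validate(action: TargetAction) -> bool:
    # A Cheat strategy on a security-critical action is always invalid.
    return not (action.strategy == "cheat" and action.security_flag)

def apply_patterns(action: TargetAction) -> list[str]:
    """Return the pattern bundle for a validated action, or refuse."""
    if not validate(action):
        raise ValueError(f"{action.name}: cheat strategy blocked by security flag")
    return PATTERN_BUNDLES[action.strategy]
```

The monitor step then watches strategy-specific KPIs and routes reclassification signals back into `classify`.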
Failure Modes
Cheat applied to high-stakes action, bypassing necessary deliberation
Security flag check blocks Cheat classification on protected workflows. Agent must validate strategy against risk level.
Conscious Action applied to routine task, creating unnecessary friction and fatigue
Monitor task completion time and abandonment rate. If routine action takes >3x expected time, flag for reclassification.
Habit strategy fails because cue timing is inconsistent
Bind habit cues to reliable external triggers (morning standup, sprint start) rather than arbitrary times.
Agent does not declare its strategy, making its recommendations opaque
Require strategy declaration in agent reasoning chain. Log strategy choice in Audit Trail.
Agent Decision Protocol
Triggers
- New workflow or feature design initiated
- Agent preparing a recommendation for any user action
- Behavioral KPIs indicate strategy misalignment
Escalation Strategy
L1: Classify -- agent selects strategy bucket and declares it in reasoning chain
L2: Nudge -- if misclassification signals detected, suggest reclassification to designer or user
L3: Restructure -- swap pattern bundle to match correct strategy (e.g., remove friction from Cheat workflow)
L4: Constrain -- block Cheat strategy on security-flagged actions
L5: Yield -- flag for senior behavioral designer review if strategy is ambiguous
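A minimal dispatcher over the five levels might look like the following, checked from most to least severe. The signal keys (`strategy_ambiguous`, `pattern_mismatch`, `misclassification_signal`) are hypothetical names for the conditions described above.

```python
def escalate(signals: dict) -> str:
    """Map detected signals to an escalation level (sketch, assumed signal keys)."""
    if signals.get("strategy_ambiguous"):
        return "L5: yield for senior behavioral designer review"
    if signals.get("security_flag") and signals.get("strategy") == "cheat":
        return "L4: constrain -- block cheat strategy on security-flagged action"
    if signals.get("pattern_mismatch"):
        return "L3: restructure -- swap pattern bundle to match strategy"
    if signals.get("misclassification_signal"):
        return "L2: nudge -- suggest reclassification"
    return "L1: classify -- select strategy and declare it in reasoning chain"
```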
Example
Agent receives request to design daily standup status update -> classifies as Habit (high frequency, low stakes, routine) -> selects supporting patterns: CREATE cue stage + Action Structuring (simplify to 3 fields) -> binds cue to morning calendar event -> monitors completion streak -> after 14 days of consistent use, reduces cue salience.
Behavioral KPIs
Primary
- % of new features with explicit strategy classification before design begins
- Strategy-action alignment score (correct bucket for the action type)
- Task completion rate by strategy type
Risk
- Cheat misapplication rate (Cheat on high-stakes actions)
- Conscious Action over-application rate (deliberation on routine tasks)
Trust
- Agent strategy declaration rate (% of recommendations with explicit strategy label)
- User agreement with agent strategy choice
Behavioral Signals
misclassification
strategy=cheat AND security_flag=true
strategy=conscious_action AND task_frequency_per_day > 5 AND abandonment_rate > 30%
habit_failure
strategy=habit AND cue_response_rate < 40% after 14 days
strategy=habit AND completion_streak_broken > 3 times in 30 days
strategy_missing
feature_design_started AND strategy_classification=null
agent_recommendation_issued AND strategy_label=null
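The signal rules above can be expressed as executable predicates over a metrics record. The metric field names (`task_frequency_per_day`, `streaks_broken_30d`, and so on) are assumed translations of the pseudo-rules, and percentages are written as fractions.

```python
# One predicate per signal; each takes a metrics dict for a single target action.
SIGNAL_RULES = {
    "misclassification": lambda m: (
        (m["strategy"] == "cheat" and m["security_flag"])
        or (m["strategy"] == "conscious_action"
            and m["task_frequency_per_day"] > 5
            and m["abandonment_rate"] > 0.30)
    ),
    "habit_failure": lambda m: (
        m["strategy"] == "habit"
        and ((m["days_since_launch"] >= 14 and m["cue_response_rate"] < 0.40)
             or m["streaks_broken_30d"] > 3)
    ),
    "strategy_missing": lambda m: m["strategy"] is None,
}

def fired_signals(metrics: dict) -> list[str]:
    """Return the names of all signals whose rule matches the metrics."""
    return [name for name, rule in SIGNAL_RULES.items() if rule(metrics)]
```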
Decay Monitoring
Revalidate when
- Workflow stakes change (e.g., policy update elevates a routine action to security-critical)
- User population shifts (new user segments may need different strategy for the same action)
- Agent capabilities expand (actions previously requiring Conscious Action may qualify for Cheat)
Decay signals
- Rising misclassification rate across features
- Increasing abandonment on Cheat workflows (action may be more complex than assumed)
- Declining habit completion rates (cue-routine-reward loop may be broken)
Pattern Relationships
Supports
Amplifies