Peak-End Rule
Confidence
Canonical
Cognitive Load
Low
Evidence
Production-validated
Impact
Feature
Ethical Guardrail
Never create fake peaks or manufactured endings. End sessions with real value, not manipulation.
Design Intent
People judge an entire experience almost entirely by its peak and its end -- not by its average quality or total duration. The Peak-End Rule therefore deliberately designs both the emotional high point and the closing moment.
Psychology Principle
People judge an entire experience based almost entirely on its peak (best or worst moment) and its end.
Description
Design the emotional high point and closing moment of every user journey to create lasting positive memory.
When to use
Any complete user session or journey -- onboarding, checkout, feature completion, session close.
Example
Airbnb Checkout: Beautiful 'Your trip is confirmed!' animation with personalized map (peak) + warm thank-you message and shareable receipt (end).
Autonomy Compatibility
Behavioral Objective
Users remember the overall experience as positive because of a strong peak and satisfying end.
- Higher NPS and satisfaction ratings
- Increased likelihood to return
- Stronger emotional connection to the product
Target Actor
role
Everyday user
environment
Multi-step journeys or sessions
emotional baseline
Memory is shaped by emotional extremes
ai familiarity
medium
risk tolerance
medium
Execution Model
peak_creation
Design one memorable high point during the experience.
The journey feels flat or average throughout.
ending_design
Close the experience on a high note.
Session ends on a neutral or negative note.
memory_reinforcement
Remind the user of the peak and end in follow-up.
User forgets the positive experience quickly.
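The three stages above can be sketched as hooks in a session lifecycle. This is a minimal illustration, not a prescribed implementation; the `Session` fields and function names are assumptions invented for the sketch.

```python
from dataclasses import dataclass


@dataclass
class Session:
    # Minimal session state; field names are illustrative assumptions.
    user: str
    peak_triggered: bool = False
    ended_positively: bool = False
    follow_up_queued: bool = False


def show_achievement_moment(session: Session) -> None:
    # peak_creation: one memorable high point tied to real progress.
    print(f"Milestone reached, {session.user}!")
    session.peak_triggered = True


def show_closing_summary(session: Session) -> None:
    # ending_design: close the experience on a high note.
    print(f"Well done, {session.user}. Your work is saved and ready to share.")
    session.ended_positively = True


def queue_follow_up(session: Session) -> None:
    # memory_reinforcement: remind the user of the peak and end later.
    session.follow_up_queued = True


def close_session(session: Session) -> None:
    # Ensure one peak exists, then run the deliberate closer and follow-up.
    if not session.peak_triggered:
        show_achievement_moment(session)
    show_closing_summary(session)
    queue_follow_up(session)
```

Note the guard in `close_session`: the peak is created at most once, which keeps the "one strong peak per session" constraint from the Failure Modes section.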
Failure Modes
Peak feels gimmicky or forced
Tie the peak to real user progress or value
Ending is rushed or incomplete
Always include a deliberate closer
Multiple weak peaks dilute the effect
Focus on one strong peak per session
Cultural differences in what feels like a peak
Test regionally and personalize
Over-reliance on peak-end hides core problems
Fix underlying issues first
Agent Decision Protocol
Triggers
- Session completion is high but satisfaction is low
- Users complete tasks but don't return
- User feedback is 'it was okay' instead of 'that was great'
Escalation Strategy
L1: Diagnose flat or weak ending via behavioral_signals
L2: Nudge -- add a personalized achievement summary at session close
L3: Restructure -- design a dedicated peak moment and closing experience
L4: Constrain -- ensure only one peak per session to maximize impact
L5: Yield -- flag for human behavioral designer review
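The ladder above can be read as a function from diagnosed signals to the highest intervention level whose trigger holds. A sketch, with hypothetical signal keys loosely borrowed from the behavioral_signals section:

```python
def choose_escalation(signals: dict) -> str:
    # Check the most severe condition first and return its level;
    # the signal keys are illustrative, not a fixed schema.
    if signals.get("needs_human_review"):
        return "L5"  # yield: flag for human behavioral designer review
    if signals.get("multiple_peaks"):
        return "L4"  # constrain: only one peak per session
    if signals.get("flat_experience"):
        return "L3"  # restructure: dedicated peak moment and closing
    if signals.get("weak_ending"):
        return "L2"  # nudge: achievement summary at session close
    return "L1"      # diagnose via behavioral_signals
```

Ordering matters: a flat experience outranks a weak ending, so a session showing both gets the restructure (L3) rather than the nudge (L2).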
Example
User finishes a long report -> agent shows beautiful summary visualization (peak) + 'Well done! Your report is saved and ready to share' (end).
Behavioral KPIs
Primary
- Post-session satisfaction / NPS score
- Return rate after peak-end optimized flows
- Recall accuracy in follow-up surveys
Risk
- Users reporting 'it was fine but I don't remember it'
- Drop in emotional connection metrics
Trust
- User-reported 'that was a great experience'
- Autonomy Dial usage when the agent engineers peaks and endings
Behavioral Signals
flat_experience
session_completed=true AND satisfaction_score < 3
session_duration > 300s AND peak_moment_triggered=false
weak_ending
session_ended_positively=false AND last_interaction_sentiment=neutral
post_session_satisfaction < 3 AND session_completion=true
strong_memory
peak_moment_triggered=true AND post_session_satisfaction >= 4
return_visit_within_48h=true AND peak_end_optimized=true
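The signal definitions above translate directly into predicates over a session-metrics dictionary. Combining each signal's two conditions with OR is an assumption on my part; the card does not state how the condition lines compose.

```python
def flat_experience(s: dict) -> bool:
    # Either listed condition fires the signal (OR is an assumption).
    return ((s.get("session_completed", False)
             and s.get("satisfaction_score", 5) < 3)
            or (s.get("session_duration", 0) > 300
                and not s.get("peak_moment_triggered", False)))


def weak_ending(s: dict) -> bool:
    return ((not s.get("session_ended_positively", True)
             and s.get("last_interaction_sentiment") == "neutral")
            or (s.get("post_session_satisfaction", 5) < 3
                and s.get("session_completion", False)))


def strong_memory(s: dict) -> bool:
    return ((s.get("peak_moment_triggered", False)
             and s.get("post_session_satisfaction", 0) >= 4)
            or (s.get("return_visit_within_48h", False)
                and s.get("peak_end_optimized", False)))
```

The `dict.get` defaults are chosen so that a missing metric never fires a signal on its own; real instrumentation would want explicit handling of absent data.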
Decay Monitoring
Revalidate when
- User expectations for delight evolve
- Product adds longer or more complex journeys
- Cultural norms around emotional closure shift
Decay signals
- Declining NPS despite objective improvements
- Users saying 'it was fine but forgettable'
- Drop in emotional recall metrics
Pattern Relationships
Amplifies
Requires
Conflicts with