Explainable Actions

Canonical
Agentic UX, Ethical Design

Confidence

68%

Cognitive Load

Low

Evidence

production-validated

Impact

feature

Ethical Guardrail

Never act silently or with vague explanations. Never hide reasoning to appear more confident. Always offer "Show me more detail" for deeper transparency.

Design Intent

Users cannot trust what they cannot understand. Explainable Actions makes every agent or system action transparent by showing exactly what was done, why it was done, and what data or logic was used.

Psychology Principle

Users cannot trust what they cannot understand.

Description

Make every agent action transparent by showing what was done, why, and what data was used -- eliminating the black box feeling.

When to use

Every agent or automated action -- especially at confirm_execution or autonomous levels, or when confidence is below 90%.

Example

Claude/Grok Agent Response (after every action): "I updated the section because [specific reason + data sources]. Here's exactly what changed. Undo or edit?"
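The example response above implies a fixed payload: action, reason, sources, and the exact change. A minimal sketch of that payload as a data structure (all names here are hypothetical, not part of the pattern spec):

```python
from dataclasses import dataclass, field

@dataclass
class ActionExplanation:
    """What the agent did, why, and from what data (hypothetical structure)."""
    action: str                                        # e.g. "updated the section"
    reason: str                                        # specific reason, never vague
    sources: list[str] = field(default_factory=list)   # data sources used
    diff: str = ""                                     # exactly what changed

    def render(self) -> str:
        # Render in the canonical "what / why / sources / undo?" shape.
        srcs = ", ".join(self.sources) or "no external sources"
        return (f"I {self.action} because {self.reason} "
                f"(sources: {srcs}). Here's exactly what changed:\n"
                f"{self.diff}\nUndo or edit?")

msg = ActionExplanation(
    action="updated the section",
    reason="the language matched your last three similar documents",
    sources=["doc_a", "doc_b", "doc_c"],
    diff="- old heading\n+ new heading",
)
print(msg.render())
```

Making sources and diff required-by-convention fields is what keeps the explanation specific rather than generic.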

Autonomy Compatibility

Suggest, Confirm, Auto

Behavioral Objective

Users understand and trust every agent action because the reasoning is always visible and verifiable.

  • Higher acceptance of agent suggestions
  • More frequent corrections when needed
  • Stronger long-term confidence in the AI

Target Actor

role

Everyday user

environment

Mixed human-AI decision workflows

emotional baseline

Needs to understand before trusting

ai familiarity

medium

risk tolerance

medium

Execution Model

1. pre_action_preview -- Show the plan before executing at confirm or autonomous levels. Failure signal: user is surprised by what the agent does.

2. post_action_explanation -- Immediately after the action, surface the why and how. Failure signal: user has to ask or hunt for the reason.

3. on_demand_depth -- Allow users to expand for full transparency. Failure signal: user wants more detail but cannot easily get it.
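The three phases can be sketched as a wrapper around any agent action. This is an illustrative sketch, not a prescribed API; the function and parameter names are assumptions:

```python
def explainable_action(plan, execute, explain_short, explain_full,
                       autonomy="confirm", confirm=lambda plan: True):
    """Run an agent action through the three transparency phases.

    1. pre_action_preview: show the plan before executing.
    2. post_action_explanation: surface the why/how immediately after.
    3. on_demand_depth: return a callable for full detail on request.
    """
    # Phase 1: preview at confirm or autonomous levels, so the user
    # is never surprised by what the agent does.
    if autonomy in ("confirm", "auto"):
        print(f"Plan: {plan}")
        if autonomy == "confirm" and not confirm(plan):
            return None, None
    # Phase 2: execute, then explain immediately -- the user should
    # never have to hunt for the reason.
    result = execute()
    print(f"Done: {explain_short(result)}")
    # Phase 3: expandable depth, available but not forced.
    return result, lambda: explain_full(result)

result, more_detail = explainable_action(
    plan="fill the summary section from recent documents",
    execute=lambda: 42,
    explain_short=lambda r: f"wrote draft {r}",
    explain_full=lambda r: f"full detail for draft {r}",
)
```

Returning the full explanation as a callable mirrors the "Show me more detail" guardrail: depth is one step away, never hidden and never forced.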

Failure Modes

  • Explanation is too vague or generic -- mitigation: always include specific data sources and logic. (micro)
  • Explanation overwhelms the user -- mitigation: default to a short version with an optional expand. (micro)
  • Explanation arrives too late -- mitigation: show a preview before autonomous actions. (feature)
  • Agent hides low-confidence reasoning -- mitigation: explicitly call out uncertainty. (micro)
  • Explanations feel defensive or overly formal -- mitigation: use a warm, human tone. (micro)
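The first two failure modes (vagueness and overload) lend themselves to an automated lint pass before an explanation is shown. A sketch, with illustrative phrase list and length threshold:

```python
# Illustrative examples of generic phrasing; a real list would be tuned.
VAGUE_PHRASES = {"for various reasons", "based on analysis", "it seemed best"}
MAX_SHORT_LEN = 280  # assumed cutoff before collapsing behind an expander

def lint_explanation(text: str, sources: list[str]) -> list[str]:
    """Flag explanations that are too vague, unsourced, or too long."""
    issues = []
    if any(phrase in text.lower() for phrase in VAGUE_PHRASES):
        issues.append("vague: replace generic phrasing with specific logic")
    if not sources:
        issues.append("unsourced: include the data sources used")
    if len(text) > MAX_SHORT_LEN:
        issues.append("too long: default to a short version with expand")
    return issues
```

Running such a check at generation time turns two of the listed failure modes into blockable defects rather than post-hoc user complaints.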

Agent Decision Protocol

Triggers

  • Any agent action is about to occur
  • User shows hesitation or surprise
  • Confidence is below 90% or stakes are high

Escalation Strategy

L1: Diagnose the failing element via behavioral_signals

L2: Nudge -- adjust copy, timing, or visual salience

L3: Restructure -- simplify flow, add progressive disclosure, restructure form

L4: Constrain -- lock Autonomy Dial to confirm_execution, add Strategic Friction

L5: Yield -- flag for human designer or domain expert review
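The ladder above escalates one level at a time. It could be encoded as a simple ordered lookup (a hypothetical sketch; the level texts paraphrase the list above):

```python
ESCALATION = [
    ("L1", "diagnose the failing element via behavioral signals"),
    ("L2", "nudge: adjust copy, timing, or visual salience"),
    ("L3", "restructure: simplify flow, add progressive disclosure"),
    ("L4", "constrain: lock Autonomy Dial to confirm_execution"),
    ("L5", "yield: flag for human designer or domain expert review"),
]

def next_step(failed_attempts: int) -> tuple[str, str]:
    """Return the escalation step after N failed lower-level attempts.

    Clamps at L5 (yield to a human) rather than looping forever.
    """
    return ESCALATION[min(failed_attempts, len(ESCALATION) - 1)]
```

Clamping at L5 encodes the key property of the strategy: the agent never retries indefinitely; it eventually yields to a human.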

Example

Agent auto-fills a section -> immediately shows: "I pulled this from your last three similar documents because the language matched 94% -- here are the sources."

Behavioral KPIs

Primary

  • % of actions where user views or expands the explanation
  • User trust score after explained actions
  • Correction rate when explanation is shown

Risk

  • "Why did it do that?" questions
  • Distrust reports after agent actions

Trust

  • User-reported understanding of agent behavior
  • Autonomy Dial usage when explanations are provided

Decay Monitoring

Revalidate when

  • Agent capabilities or data sources change significantly
  • New interaction paradigms emerge
  • User familiarity with AI explanations evolves

Decay signals

  • Rising "Why?" questions
  • Drop in explanation view rates
  • Feedback that explanations feel stale or unhelpful

Pattern Relationships

Related Patterns

Canonical Implementation

Claude/Grok Agent (after every action): "I updated because [reason + sources]. Here's what changed. Undo or edit?"

Telemetry Hooks

explanation_shown, explanation_expanded, user_correction_made
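These hooks feed the Behavioral KPIs directly; for example, the explanation expand rate is expanded ÷ shown. A minimal recorder sketch (the recorder itself is an assumption; only the hook names come from the card):

```python
from collections import Counter

class Telemetry:
    """Counts pattern events so KPIs like expand rate can be derived."""

    HOOKS = {"explanation_shown", "explanation_expanded",
             "user_correction_made"}

    def __init__(self):
        self.events = Counter()

    def emit(self, name: str):
        # Reject unknown hook names so typos surface immediately.
        if name not in self.HOOKS:
            raise ValueError(f"unknown hook: {name}")
        self.events[name] += 1

    def expand_rate(self) -> float:
        # KPI: % of shown explanations that the user expanded.
        shown = self.events["explanation_shown"]
        return self.events["explanation_expanded"] / shown if shown else 0.0
```
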

Tags

agentic-ux, transparency, trust