Design by Numbers
A Data-Driven UX Process
Brian Rimel @brianrimel
UX Consultant, OpenSource Connections
User-Centered Design
Internal enterprise applications
Access to users
Why Data?
Balancing the qualitative and quantitative
You can’t always trust your users
Limited data doesn’t tell the whole story
The HEART Framework
Src: https://library.gv.com/how-to-choose-the-right-ux-metrics-for-your-product-5f46359ab5be
PULSE Metrics
Page views, Uptime, Latency,
Seven-day active users, Earnings
Unnecessary Data
Creates Noise
HEART Metrics
Happiness, Engagement, Adoption,
Retention, Task Success
Happiness
Satisfaction or Delight
System Usability Scale, Net Promoter Score
Engagement
Level of involvement
Number of visits per user per week
Adoption
New users/uses of a feature
Number of accounts created in the last 7 days
Retention
Rate at which existing users return
Percentage of seven-day active users that
are still active 30 days later
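The retention metric above can be sketched as a set intersection over activity logs. This is a minimal illustration; the user IDs and activity sets are hypothetical, not from the talk.

```python
# Retention sketch: percentage of seven-day active users still active 30 days later.
# The user IDs below are hypothetical illustration data.
active_week = {"u1", "u2", "u3", "u4", "u5"}       # active in the last 7 days
active_month_later = {"u2", "u4", "u5", "u9"}      # active 30 days later

retained = active_week & active_month_later        # users present in both windows
retention_rate = len(retained) / len(active_week) * 100
print(f"Retention: {retention_rate:.0f}%")         # 3 of 5 -> 60%
```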
Task Success
Traditional behavior metrics for efficiency,
effectiveness, and error rate.
Percentage of completion errors for a given task
Goals
Signals
Metrics
Example: Welcome Wizard

Happiness
  Goal: The user feels the welcome wizard is easy to use
  Signal: Level of user satisfaction
  Metric: Mean SUS Score
Task Success
  Goal: The welcome wizard should be as simple as possible
  Signal: The number of errors during the process
  Metric: Rate of error per step
(Engagement, Adoption, and Retention were not measured in this example)
Goals should be SMART
Specific, Measurable, Attainable, Realistic, Time-Based
Normalize the Data
What does an increase in total active users tell us?
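On its own, very little: a raw count of active users can climb simply because the user base grew. Dividing by total users turns the count into a comparable rate. A minimal sketch, with hypothetical numbers:

```python
# Normalizing a raw count: total active users rose month over month,
# but the share of the user base that is active actually fell.
# All numbers here are hypothetical illustration data.
months = [
    {"month": "Jan", "active": 200, "total": 1000},
    {"month": "Feb", "active": 240, "total": 1600},
]
for m in months:
    rate = m["active"] / m["total"] * 100
    print(f'{m["month"]}: {m["active"]} active users, {rate:.0f}% of user base')
# Active users rose (200 -> 240) while the engagement rate fell (20% -> 15%).
```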
A Limited-Data Process
Initial Metrics Gathering
Existing metrics influence feature priority
Kano Survey for feature-level satisfaction
Kano Survey
src: http://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results
Feature          Must-be  One-Dimensional  Attractive  Unimportant  Undesired
Advanced Search  87%      8%               4%          1%           0%
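Once each respondent's functional/dysfunctional answer pair has been mapped to a Kano category, producing a row like the one above is just a tally. A minimal sketch with hypothetical, pre-categorized responses matching the percentages shown:

```python
# Kano tally sketch: responses have already been mapped to Kano categories;
# here we only compute each category's share for one feature.
# The response data is hypothetical illustration data.
from collections import Counter

responses = (["Must-be"] * 87 + ["One-Dimensional"] * 8 +
             ["Attractive"] * 4 + ["Unimportant"] * 1)
counts = Counter(responses)
for category, n in counts.most_common():
    print(f"{category}: {n / len(responses) * 100:.0f}%")
```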
Prioritizing Features
3. Advanced Search (ranked third among ten features)
From Kano Survey: 87% rated it a Must-be feature
From Usage Statistics: 22% engagement per week
Why the discrepancy?
Advanced Search: Goals & Metrics

Happiness
  Goal: The user feels comfortable using advanced search
  Signal: Level of confidence
  Metric: SUS Survey
Engagement
  Goal: The features enable consistent searching
  Signal: Number of advanced searches
  Metric: Searches per day per user
Task Success
  Goal: The advanced search process is easily understood
  Signal: User enters a query but does not complete the search
  Metric: Percentage of abandoned searches
(Adoption and Retention were not measured in this example)
User Interview & Testing
Identify discrepancy between
stated importance and usage metrics
Establish baseline metrics
Measure satisfaction - SUS Survey
System Usability Scale (SUS)
src: https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
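The standard SUS scoring procedure is mechanical enough to sketch: ten items rated 1-5, odd-numbered items contribute (rating - 1), even-numbered items contribute (5 - rating), and the sum is multiplied by 2.5 to yield a 0-100 score. The ratings below are hypothetical.

```python
# SUS scoring sketch using the standard formula.
def sus_score(ratings):
    """ratings: ten Likert responses (1-5), in questionnaire order."""
    assert len(ratings) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # i=0,2,... are items 1,3,...
                for i, r in enumerate(ratings))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```

Note that even-numbered items are negatively worded, which is why their scale is inverted before summing.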
Review Findings
Metric             Initial Testing
Mean SUS Score     56
Error Rate / Step  21%
Okay, so where is the problem?
Let’s map it!
Mapping the Journey
Develop Prototypes
User Testing of Prototype
Continue measuring baseline metrics
A/B Testing
Follow-up SUS Survey
Results & Recommendations
Great! But what does this mean?
Context is critical to interpretation
Metric             Initial Testing  Prototype Testing
Mean SUS Score     56               73
Error Rate / Step  21%              12%
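One way to put the error-rate drop in context is a two-proportion z-test, which asks whether the change from 21% to 12% is larger than chance would explain. The step counts below (200 observations per round) are hypothetical; the slides report only the rates.

```python
# Two-proportion z-test sketch for the error-rate change.
# Sample sizes are hypothetical illustration data.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Return (z statistic, two-sided p-value) for proportions x1/n1 vs x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided normal tail
    return z, p_value

z, p = two_proportion_z(42, 200, 24, 200)          # 21% vs 12% of 200 steps each
print(f"z = {z:.2f}, p = {p:.3f}")
```

Under these assumed sample sizes the drop is statistically significant (p < 0.05); with much smaller samples it might not be, which is exactly why context matters.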
The Customer Journey
Long-Term Metrics
Tracking Engagement, Adoption, Retention, and
Task Success over time
Periodic usability testing of full application
The Data-Driven Process
Tools
Kibana Dashboard
“Extremely satisfied is like extremely edible.”
- Jared Spool
Key Takeaways
• Collaboratively define SMART goals
• Revisit and challenge goals
• Continuously monitor metrics over time
• Balance quantitative and qualitative measures
Questions
References
• Google HEART Metrics Study
  http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/36299.pdf
• Kano Survey
  http://uxmag.com/articles/leveraging-the-kano-model-for-optimal-results
• SUS Survey
  https://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html