LABORATORY FOR QUANTITATIVE EXPERIENCE DESIGN (qed.cs.utah.edu)
Toward a Science of Game Design
Rogelio E. Cardona-Rivera

Assistant Professor and Director, QED Lab

School of Computing, Entertainment Arts & Engineering

University of Utah

rogelio@cs.utah.edu

@recardona
Acknowledgements
Keynote at the 2018 SIGGRAPH Conference on Motion, Interaction and Games
My Work: The Big Picture
Developing intelligent systems that sit at the interface of a virtual world and a person's understanding of it, to enable the automated generation of compelling interactive experiences.
Three Methodological Pillars
• Synthesis (of Game Design, Narratology, Psychology)
• Development (AI)
• Validation (AI + HCI)
Game Design is a Cognitive Science
{"<start>" : "<template>",
"<template>" : 

"<object> in which players <engagement>. 

| <object> that involves <characteristics>. 

| <object> <constraints>. 

| <object> characterized by <relationship>.",
"<object>" : 

…
} molleindustria. http://www.gamedefinitions.com/
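A minimal sketch (mine, not from the talk) of how a template grammar like the one above can be sampled to produce candidate definitions of "game". The productions below are hypothetical placeholders; the real grammar's remaining entries are elided at gamedefinitions.com.

import random

grammar = {
    "<start>": ["<template>"],
    "<template>": [
        "<object> in which players <engagement>.",
        "<object> that involves <characteristics>.",
    ],
    # Placeholder productions; the actual entries are elided in the slide.
    "<object>": ["An activity", "A system"],
    "<engagement>": ["make meaningful choices"],
    "<characteristics>": ["rules and voluntary obstacles"],
}

def expand(symbol: str) -> str:
    """Recursively expand a grammar symbol; literal tokens pass through."""
    tail = ""
    if symbol.endswith("."):
        symbol, tail = symbol[:-1], "."
    if symbol not in grammar:
        return symbol + tail
    production = random.choice(grammar[symbol])
    return " ".join(expand(token) for token in production.split()) + tail

print(expand("<start>"))   # e.g. "An activity in which players make meaningful choices."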
Game Design is a Cognitive Science
"The act of transforming existing courses of action into preferred ones."
–Herbert Simon
Talk Outline
Objective: The MIG Community is well-poised to pursue a science of game design

• What is a Science of Game Design and why bother?

• What are examples of work in this area?

• What are MIG-specific opportunities?
What is a Science of Game Design and why bother?
What is a Science of Game Design…
A systematically organized body of knowledge, composed of observation and experiment, that encompasses the structure and behavior of games.
…and why bother?
• Games are a significant engineering challenge
• Advances in technology create more problems
• Research should target artifact and person
The Engineering Challenge
•Costly

•Technically difficult

•Poorly understood
Cost of Most Expensive Games, 2011–2016 (chart): yearly peaks ranged from $80M to $200M, including MGSV.
Development Time for the Same Games, 2011–2016 (chart): three to five years each, including MGSV.
Human Cost
Telltale Games, Rockstar Games
Authorial Combinatorics Problem
• Content authoring increases exponentially with player choice (branching diagram: each Event opens multiple Actions)
• One AAA title: 12 writers, 3 years, 200,000 dialogue lines ≈ 1M words
• One choose-your-own-adventure (CYOA): 1,094,170 words
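A back-of-the-envelope illustration of the combinatorics (my numbers, not the talk's): with b-way choices at each of d decision points, a fully branching story has b^d distinct paths, so even modest interactivity explodes the authoring load.

# Illustrative only: authored path count for a fully branching story.
def path_count(branching: int, depth: int) -> int:
    return branching ** depth

# 2-way choices at 20 decision points already yields about a million paths.
print(path_count(2, 20))   # 1048576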
Game Purchase Influence Factors
Other: 22%, Price: 21%, Interesting Story: 16%, Graphics: 12%, Word of mouth: 11%, Sequel: 9%, Similarity: 9%
Essential Facts About the Computer & Video Game Industry (Entertainment Software Association, 2016)
…and why bother? (continued)
• Advances in technology create more problems
Graphics at the Expense of Stories
There’s a lot of hacks and kludges to get
things working… I’m sure you would find tons
of duplication of effort, definitely. I’ve been an
audio programmer on [X] different games and
I’ve written [X] different audio engines.
Meaningless Procedural Generation
No Man's Sky can generate 1.8 × 10^19 planets
The Kaleidoscope Effect
Cognitively-grounded Procedural Content Generation
(Cardona-Rivera, 2017)
…and why bother? (continued)
• Research should target artifact and person
The Player Modeling Principle
The whole value of a game is in the mental
model of itself it projects into the player’s
mind.
The Simulation Dream
(Sylvester, 2013)
Tacit Learning and Expectations
Recap: What is a Science of Game Design and why bother?
• Games are a significant engineering challenge
• Advances in technology create more problems
• Research should target artifact and person
What are examples of work in this area?
Narratively Intelligent Game AI
An example agenda in the Science of Game Design
What is Narrative Intelligence?
The unique human capacity to understand our environment in terms of stories (Heider and Simmel, 1944)
Why Narrative Intelligence matters
• Narrative framing makes interaction more compelling
  ‣ Entertainment (video games)
  ‣ Education (training simulations)
  ‣ Engagement (gamification)
• It is difficult to engineer
  ‣ AI may help ameliorate the authorial burden
Interactive Narrative (IN)
• Mediates actions through a narrative framing
• Abstraction of story as a trajectory of world states
  ‣ Narratives as plans
(Diagram: the Designer authors with a Story Director)
Narratives as Plans
• Story generation as a classical planning problem P = ⟨s_i, g, D⟩
  ‣ s_i : initial state
  ‣ g : goal conditions
  ‣ D : set of (domain) actions, predicates, and objects
• Search for an action sequence that transforms s_i → g

Plans and planning in narrative generation: a review of plan-based approaches to the generation of story, discourse and interactivity in narratives (Young et al., 2013)
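A minimal sketch (not the talk's code) of the planning-problem tuple in Python; ground literals are represented as tuples, and D is abbreviated to a set of operator names.

from dataclasses import dataclass
from typing import FrozenSet, Tuple

Literal = Tuple[str, ...]   # e.g. ("at", "ARTHUR", "FOREST")

@dataclass(frozen=True)
class PlanningProblem:
    """P = <s_i, g, D>: initial state, goal conditions, and domain."""
    initial: FrozenSet[Literal]   # s_i
    goal: FrozenSet[Literal]      # g
    domain: FrozenSet[str]        # D: operator names (schemas elided here)

knight = PlanningProblem(
    initial=frozenset({("at", "ARTHUR", "FOREST"), ("enchanted", "EXCALIBUR")}),
    goal=frozenset({("has", "ARTHUR", "EXCALIBUR")}),
    domain=frozenset({"pick-up", "move", "disenchant", "wake-up"}),
)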
Narratives as Plans
• Actions encoded as template operators
  ‣ Planning Domain Definition Language (PDDL)

(:action pick-up
  :parameters (?agent ?item ?location)
  :precondition (and (at ?item ?location)
                     (at ?agent ?location))
  :effect (and (not (at ?item ?location))
               (has ?agent ?item)))

• PDDL expanded with consenting agents

(:action pick-up
  :parameters (?agent ?item ?location)
  :precondition (and (at ?item ?location)
                     (at ?agent ?location))
  :effect (and (not (at ?item ?location))
               (has ?agent ?item))
  :agents (?agent))
Automated Planning
• The solution to a planning problem P = ⟨s_i, g, D⟩ is a plan π = ⟨S, B, L⟩
  ‣ S : steps
  ‣ B : bindings
  ‣ L : causal links, e.g. ⟨s1, (has ARTHUR SPELLBOOK), s2⟩
(Diagram: steps Pick-up → Disenchant → Pick-up connect s_i to g)
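A companion sketch (again mine, not from the talk) of the plan tuple π = ⟨S, B, L⟩, where a causal link records which effect of one step satisfies a precondition of a later step.

from dataclasses import dataclass
from typing import Tuple

Literal = Tuple[str, ...]

@dataclass(frozen=True)
class CausalLink:
    """<producer, condition, consumer>: producer's effect supports consumer's precondition."""
    producer: str
    condition: Literal
    consumer: str

@dataclass(frozen=True)
class Plan:
    """pi = <S, B, L>."""
    steps: Tuple[str, ...]                  # S: ordered ground steps
    bindings: Tuple[Tuple[str, str], ...]   # B: variable -> object, e.g. ("?agent", "ARTHUR")
    links: Tuple[CausalLink, ...]           # L: causal structure

arthur_plan = Plan(
    steps=("pick-up SPELLBOOK", "disenchant EXCALIBUR", "pick-up EXCALIBUR"),
    bindings=(("?agent", "ARTHUR"),),
    links=(CausalLink("pick-up SPELLBOOK", ("has", "ARTHUR", "SPELLBOOK"),
                      "disenchant EXCALIBUR"),),
)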
Example: A Knight's Tale

(define (problem STORY)
  (:domain KNIGHT)
  (:objects ARTHUR MERLIN
            SPELLBOOK MERLINBOOK
            EXCALIBUR FOREST HOME)
  (:init (at ARTHUR FOREST)
         (at MERLIN FOREST)
         (has MERLIN MERLINBOOK)
         (asleep MERLIN)
         (at SPELLBOOK FOREST)
         (at EXCALIBUR FOREST)
         (enchanted EXCALIBUR)
         (path FOREST HOME))
  (:goal (has ARTHUR EXCALIBUR)))

(define (domain KNIGHT)
  (:requirements :strips)
  (:predicates (at ?x ?y) (has ?x ?y) (path ?x ?y)
               (asleep ?x) (enchanted ?x))
  (:action pick-up
    :parameters (?agent ?item ?location) …)
  (:action move
    :parameters (?agent ?from ?to) …)
  (:action disenchant
    :parameters (?agent ?obj ?location ?book) …)
  (:action wake-up
    :parameters (?agent ?sleeper ?location) …))

(Resulting plan: s_i → Pick-up → Disenchant → Pick-up → g)
Interactive Narrative (IN)
• The Designer authors with a Story Director; the Player interacts with the unfolding plan (s_i → g)
• The Player can act as afforded by the logical world state
IN Play as Game Tree Search
• Intended narrative plan: s_i → Pick-up → Disenchant → Pick-up → g
• Chronology: the Player and the System take turns
  ‣ On the System's turn: advance the narrative agenda
  ‣ On the Player's turn: ???
• The player may branch away (e.g. Move, Wake-up), yielding many, many trajectories
  ‣ Not all of them are good: from some, g is unreachable
  ‣ A Mediator is needed
Interactive Narrative (IN)
• A Mediator joins the Story Director; they collaborate to
  ‣ Accept,
  ‣ Re-plan around, or
  ‣ Fail user actions
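A schematic sketch, my own simplification rather than the system described in the talk, of the accept / re-plan / fail policy applied to each player action.

from typing import Callable, FrozenSet, Optional, Tuple

State = FrozenSet[tuple]    # set of ground literals
Plan = Tuple[str, ...]      # ordered step names (simplified)

def mediate(next_state: State,
            goal: State,
            current_plan_valid: bool,
            replan: Callable[[State, State], Optional[Plan]]) -> str:
    """Decide how to handle a player action against the intended plan (schematic)."""
    if current_plan_valid:                # the action preserved the intended plan
        return "accept"
    new_plan = replan(next_state, goal)   # try to restore reachability of g
    if new_plan is not None:
        return "re-plan"
    return "fail"                         # g unreachable: fail the action

# Hypothetical usage with a planner stub that always finds a new plan.
print(mediate(frozenset(), frozenset({("has", "ARTHUR", "EXCALIBUR")}),
              current_plan_valid=False,
              replan=lambda s, g: ("wake-up", "disenchant", "pick-up")))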
Why would players pick these (off-plan) actions?
• They are valued as completions
• If this happens often, it represents:
  ‣ More work for the system
  ‣ A failure of design
Automated Design Problem
• Managing the player's intent, which fluctuates due to narrative intelligence
  ‣ Comprehension
  ‣ Role-play
  ‣ Desire for Agency
  ‣ …
Example Science of Game Design
• Managing the player's intent, which fluctuates due to narrative intelligence, in the context of the Automated Design Problem
  ‣ Comprehension
  ‣ Role-play
  ‣ Desire for Agency
First: Comprehension.
Modeling Story Understanding
• Readers as problem solvers (Gerrig and Bernardo, 1994)
• Planning is a model of problem solving (Tate, 2001)
• Idea: the narrative plan as a proxy for the player's mental state

Question Answering in the Context of Stories Generated by Computers (Cardona-Rivera, Price, Winer, and Young, 2016)
Understanding as Planning
• The QUEST Model of Comprehension (Graesser and Franklin, 1990)
  ‣ Comprehension as Q&A
• Predicts normative answers to questions
  ‣ Why? How? When? What enabled? What was the consequence?
Understanding as Planning: The QUEST Graph
• Nodes: Event (Arthur disenchants Excalibur), State (Excalibur disenchanted), Goal (Arthur wants Excalibur disenchanted; Arthur wants Excalibur)
• Arcs: Consequence, Outcome, Reason
Understanding as Planning: Example QUEST "Why?" Search
• Question: Why did Arthur disenchant Excalibur?
• The search converges on the Goal nodes connected to the queried Event
• Candidate answers: Arthur wants Excalibur disenchanted; Arthur wants Excalibur
Understanding as Planning: Plan-to-QUEST-Graph Mapping Algorithm
Given a plan π = ⟨S, B, L⟩:
1. ∀ s ∈ S, generate an event node e_i with bindings B
   a. ∀ effects e ∈ S, generate a state node t_i with bindings B
2. Connect Consequence arcs for all t_i → e_i, e_i → t_i+1 in L
3. For all literals in L, generate a goal node l_i with bindings B
4. Connect Reason arcs for all goal nodes, by ancestry
5. Connect Outcome arcs for all l_i → e_i in L

Mapping data-structure semantics to cognitive semantics
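A rough sketch of the five steps above (my own reading, not the authors' code): plan steps become event nodes, causal-link conditions become state and goal nodes, and the links induce Consequence and Outcome arcs.

# Rough sketch of the plan-to-QUEST mapping described above (my reading of it).
def plan_to_quest(steps, links):
    """steps: list of ground step names; links: list of (producer, condition, consumer)."""
    nodes, arcs = [], []
    for step in steps:
        nodes.append(("event", step))                       # step 1: event node per plan step
    for producer, condition, consumer in links:
        nodes.append(("state", condition))                  # step 1a: state node from the effect
        nodes.append(("goal", condition))                   # step 3: goal node per link literal
        arcs.append(("consequence", producer, condition))   # step 2: effect follows producer
        arcs.append(("consequence", condition, consumer))   # step 2: effect enables consumer
        arcs.append(("outcome", condition, consumer))       # step 5: goal -> enabled event
    # Step 4 (Reason arcs by goal ancestry) needs the goal hierarchy; omitted here.
    return nodes, arcs

nodes, arcs = plan_to_quest(
    ["pick-up SPELLBOOK", "disenchant EXCALIBUR"],
    [("pick-up SPELLBOOK", ("has", "ARTHUR", "SPELLBOOK"), "disenchant EXCALIBUR")],
)
print(len(nodes), len(arcs))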
Understanding as Planning: Evaluating the Mapping
• Replicated the QUEST validation experiment (Graesser, Lang, and Roberts, 1991)
  ‣ Original: manually constructed graph
  ‣ Ours: generated graph
• Participants gave goodness-of-answer Likert ratings for Q&A pairs
  ‣ Predicted their answers
  ‣ Strong support for the model (N = 695)
Understanding as Planning: Takeaway
The same plan data structure (s_i → Pick-up → Disenchant → Pick-up → g) supports both Generation and Comprehension.
Example Science of Game Design (continued)
Next: Role-play.
Determinants of Player Choice
• Tripartite Model of Player Behavior (Waskul and Lusk, 2004)
  ‣ Person
  ‣ Player
  ‣ Persona (Roles)

The Mimesis Effect (Domínguez, Cardona-Rivera, Vance and Roberts, 2016); Honorable Mention for Best Paper, CHI 2016
Role-play as Preferred Actions
• Roles: Fighter, Wizard, Rogue
• Participants (n = 210) played one of three games
  ‣ Assigned Role (78)
  ‣ Chosen Role (91)
  ‣ No Explicit Role (41)
Role-play as Preferred Actions: the Mimesis Effect
• Players prefer to act as expected from their assigned or chosen role
• Players with no explicit role self-select one and remain consistent with it
Role-play as Preferred Actions: Takeaway
(Diagram: from the story chronology so far, players form inferences about available actions; role-consistent actions are preferred.)
Example Science of Game Design (continued)
Next: the Desire for Agency.
Pursuing Greater Agency
• Agency: the satisfying power to take meaningful action and see the results of our decisions and choices (Murray, 1997)
• What is meaningful? The effect of feedback: some choices read as "greater agency" choices (e.g. The Wolf Among Us)

Achieving the Illusion of Agency (Fendt, Harrison, Ware, Cardona-Rivera and Roberts, 2012)
Foreseeing Meaningful Choices
• Idea: greater agency corresponds to greater difference (greater meaning)
• Method: measure choices by their story outcomes
  ‣ Formalize story content
  ‣ Define story-content difference
  ‣ Compare choices through story-content difference

Foreseeing Meaningful Choices (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014)
A Formalism of Story Content
• The Event-Indexing Model (Zwaan, Langston, and Graesser, 1995)
  ‣ Consumers "chunk" story information into events
  ‣ Each event is indexed along five dimensions: space, time, causal, goals, entities
(Diagram: "picks up", "picks up", "disenchants", each indexed by the five dimensions)
Story Content Difference
• Situation Vector: an event's index along the five dimensions
  e.g. "picks up" → ⟨space: forest; time: point 3; causal: primary; goals: wants excalibur; entities: arthur, excalibur⟩
• Change Function Δ : SV → [0, 5], the number of dimensions that change relative to the preceding situation vector
  e.g. comparing "picks up" ⟨forest; point 1; primary; wants excalibur; arthur, spellbook⟩ with "picks up" ⟨forest; point 3; primary; wants excalibur; arthur, excalibur⟩ gives Δ = 2
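A small sketch (mine, under the reading above) of situation vectors and the change function as a count of differing dimensions between consecutive events.

from dataclasses import dataclass, fields

@dataclass(frozen=True)
class SituationVector:
    """Event-indexing dimensions for one story event."""
    space: str
    time: str
    causal: str
    goals: str
    entities: frozenset

def change(prev: SituationVector, curr: SituationVector) -> int:
    """Delta: number of dimensions (0-5) that differ between consecutive events."""
    return sum(getattr(prev, f.name) != getattr(curr, f.name)
               for f in fields(SituationVector))

sv1 = SituationVector("forest", "t1", "primary", "wants excalibur",
                      frozenset({"arthur", "spellbook"}))
sv2 = SituationVector("forest", "t3", "primary", "wants excalibur",
                      frozenset({"arthur", "excalibur"}))
print(change(sv1, sv2))   # 2: time and entities changed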
Agency as a Function of Outcomes
• Participants (N = 88) played a custom CYOA with 6 binary choices
• Answered 5-point Likert prompts about their sense of agency
• A Page Trend Test (Vermeulen et al., 2010) supports our theory
  H0 : median agency ratings are equal across choices, regardless of their Δ
  HA : median agency ratings increase with the choices' Δ values

Foreseeing Meaningful Choices (Cardona-Rivera, Robertson, Ware, Harrison, Roberts, and Young, 2014)
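For reference, a hedged sketch of how such an ordered-alternative hypothesis could be tested today with SciPy's scipy.stats.page_trend_test (available in recent SciPy releases); the ratings below are made up, and the slide's original analysis may have been computed differently.

import numpy as np
from scipy.stats import page_trend_test

# Hypothetical ratings: rows = participants, columns = choices ordered by
# increasing Delta (the predicted order under H_A). Illustrative data only.
ratings = np.array([
    [2, 3, 3, 4, 4, 5],
    [1, 2, 3, 3, 4, 4],
    [2, 2, 3, 4, 5, 5],
])
result = page_trend_test(ratings)   # tests for a monotone trend across columns
print(result.statistic, result.pvalue)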
Agency as a Function of Outcomes: Takeaway
(Diagram: from the story chronology, players form inferences about choice outcomes; the outcome difference Δ relates to felt agency, ρ_Δ,agency.)
Example Science of Game Design (recap)
• Managing the player's intent, which fluctuates due to narrative intelligence, in the context of the Automated Design Problem
  ‣ Comprehension
  ‣ Role-play
  ‣ Desire for Agency
What are examples of work in this area?
• Modeling story comprehension as planning
• Modeling role-play as a preference over actions
• Modeling agency as a function of choice-outcome differences
What are MIG-specific opportunities?
Fidelity for Designed Purpose
• How much display fidelity is enough?
• How much display fidelity is enough for a given purpose X?
• How much fidelity along dimension Y is enough for purpose X?
  ‣ A design space: Scenario, Interaction, Display
Inferencing & Expectations
• How do mimetic interfaces elicit expectations?
  ‣ Interaction
  ‣ Motion
  ‣ Games
(Diagram: a Person with separate Player and Persona bounding boxes, vs. a Person whose Player and Persona share one bounding box)
Storytelling through Motion
• Movement attracts attention first
• Classes of movement (Kurosawa):
  ‣ Nature
  ‣ Groups of people
  ‣ Individuals
  ‣ Camera
Every Frame a Painting: https://www.youtube.com/watch?v=doaQC-S8de8
What are MIG-specific opportunities in the Science of Game Design?
• Fidelity for designed purpose
• Understanding the role of inferencing & expectations
• Storytelling through motion
Recap
• What is a Science of Game Design and why bother?
• What are examples of work in this area?
• What are MIG-specific opportunities?

Takeaway: The MIG Community is well-poised to pursue a science of game design
Call to Action
• Embrace Design
  ‣ No optimal solutions, only tradeoffs ("it depends")
• Tripartite Model of Games Research
  ‣ Seek the invariant relationships
(Diagram: Content, Game, Interface, Cognition)