The Role of Cognitive Processes and Psychological Studies in Decision Making

These notes cover the relationship between cognitive processes, mental effort, and decision making. They discuss pupil dilation when people solve complex problems, the law of least effort, and the impact of temptation on decision making. The document also introduces the concepts of System 1 and System 2 and their roles in monitoring and controlling thoughts and actions. Shane Frederick and the author worked together on a theory of judgment, using the bat-and-ball puzzle to study the monitoring of System 1's suggestions by System 2.

What you will learn

  • How does the law of least effort impact decision making?
  • How does temptation affect decision making?
  • What is the relationship between mental effort and decision making?
  • What is the significance of priming in decision making?
  • What is the role of System 1 and System 2 in decision making?

Thinking Fast and Slow – Daniel Kahneman
Published 2011, 438 pages

QUESTIONS AT END

Daniel Kahneman is a recipient of the Nobel Prize in Economics for his work in psychology that challenges the rational model of decision-making. He reveals in this book where we can and cannot trust our intuitions and how we can tap into the benefits of slow thinking.

The labels of System 1 and System 2 are widely used in psychology. In rough order of complexity, here are some examples of the automatic activities that are attributed to System 1:

Detect that one object is more distant than another.
Orient to the source of a sudden sound.
Complete the phrase "bread and…"
Make a "disgust face" when shown a horrible picture.
Detect hostility in a voice.
Answer to 2 + 2 = ?
Understand simple sentences.

Orienting to a loud sound is normally an involuntary operation of System 1, which immediately mobilizes the voluntary attention of System 2. You may be able to resist turning toward the source of a loud and offensive comment at a crowded party, but even if your head does not move, your attention is initially directed to it, at least for a while. However, attention can be moved away from an unwanted focus, primarily by focusing intently on another target.

The highly diverse operations of System 2 have one feature in common: they require attention and are disrupted when attention is drawn away. Here are some examples:

Brace for the starter gun in a race.
Focus attention on the clowns in the circus.
Focus on the voice of a particular person in a crowded and noisy room.
Look for a woman with white hair.
Search memory to identify a surprising sound.
Maintain a faster walking speed than is natural for you.
Tell someone your phone number.
Fill out a tax form.
Check the validity of a complex logical argument.

The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however: systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off.

Casting about for a useful topic of research, I found an article in Scientific American in which the psychologist Eckhard Hess described the pupil of the eye as a window to the soul.
It begins with Hess reporting that his wife had noticed his pupils widening as he watched beautiful nature pictures, and it ends with two striking pictures of the same good-looking woman, who somehow appears much more attractive in one than in the other. There is only one difference: the pupils of the eyes appear dilated in the attractive picture and constricted in the other. Hess also wrote of belladonna, a pupil-dilating substance that was used as a cosmetic, and of bazaar shoppers who wear dark glasses in order to hide their level of interest from merchants. One of Hess's findings especially captured my attention. He had noticed that the pupils are sensitive indicators of mental effort—they dilate substantially when people multiply two-digit numbers, and they dilate more if the problems are hard than if they are easy. His observations indicated that the response to mental effort is distinct from emotional arousal.

We worked for some months in a spacious basement suite in which we had set up a closed-circuit system that projected an image of the subject's pupil on a screen in the corridor; we also could hear what was happening in the laboratory. The diameter of the projected pupil was about a foot; watching it dilate and contract when the participant was at work was a fascinating sight, quite an attraction for visitors in our lab. We amused ourselves and impressed our guests by our ability to divine when the participant gave up on a task. During a mental multiplication, the pupil normally dilated to a large size within a few seconds and stayed large as long as the individual kept working on the problem; it contracted immediately when she found a solution or gave up. As we watched from the corridor, we would sometimes surprise both the owner of the pupil and our guests by asking, "Why did you stop working just now?" The answer from inside the lab was often, "How did you know?" to which we would reply, "We have a window to your soul."

As you become skilled in a task, its demand for energy diminishes. Studies of the brain have shown that the pattern of activity associated with an action changes as skill increases, with fewer brain regions involved.

Talent has similar effects. Highly intelligent individuals need less effort to solve the same problems, as indicated by both pupil size and brain activity. A general "law of least effort" applies to cognitive as well as physical exertion. The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action. In the economy of action, effort is a cost, and the acquisition of skill is driven by the balance of benefits and costs. Laziness is built deep into our nature.

Consider the bat-and-ball puzzle: a bat and a ball together cost $1.10, and the bat costs one dollar more than the ball; how much does the ball cost? The intuitive answer, 10¢, is wrong, and the failure to check it is remarkable because the cost of checking is so low: a few seconds of mental work (the problem is moderately difficult), with slightly tensed muscles and dilated pupils, could avoid an embarrassing mistake.
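A quick check of the arithmetic shows why the intuitive answer cannot be right. Write the ball's price as x dollars, so the bat costs x + 1.00:

\[
x + (x + 1.00) = 1.10 \;\Rightarrow\; 2x = 0.10 \;\Rightarrow\; x = 0.05,
\]

so the ball costs 5¢; a 10¢ ball would make the bat $1.10 and the total $1.20.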
People who say 10¢ appear to be ardent followers of the law of least effort. People who avoid that answer appear to have more active minds.

Many thousands of university students have answered the bat-and-ball puzzle, and the results are shocking. More than 50% of students at Harvard, MIT, and Princeton gave the intuitive—incorrect—answer. At less selective universities, the rate of demonstrable failure to check was in excess of 80%. The bat-and-ball problem is our first encounter with an observation that will be a recurrent theme of this book: many people are overconfident, prone to place too much faith in their intuitions. They apparently find cognitive effort at least mildly unpleasant and avoid it as much as possible.

In one of the most famous experiments in the history of psychology, Walter Mischel and his students exposed four-year-old children to a cruel dilemma. They were given a choice between a small reward (one Oreo), which they could have at any time, or a larger reward (two cookies) for which they had to wait 15 minutes under difficult conditions. They were to remain alone in a room, facing a desk with two objects: a single cookie and a bell that the child could ring at any time to call in the experimenter and receive the one cookie. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four-year-olds had substantially higher scores on tests of intelligence as adults.

The testers found that training attention not only improved executive control; scores on nonverbal tests of intelligence also improved, and the improvement was maintained for several months. Other research by the same group identified specific genes that are involved in the control of attention, showed that parenting techniques also affected this ability, and demonstrated a close connection between the children's ability to control their attention and their ability to control their emotions.

To begin your exploration of the surprising workings of System 1, look at the following words:

Bananas
Vomit

A lot happened to you during the last second or two. You experienced some unpleasant images and memories. Your face twisted slightly in an expression of disgust, and you may have pushed this book imperceptibly farther away. Your heart rate increased, the hair on your arms rose a little, and your sweat glands were activated. In short, you responded to the disgusting word with an attenuated version of how you would react to the actual event. All of this was completely automatic, beyond your control.

This complex constellation of responses occurred quickly, automatically, and effortlessly. You did not will it and you could not stop it. It was an operation of System 1. The events that took place as a result of your seeing the words happened by a process called associative activation: ideas that have been evoked trigger many other ideas, in a spreading cascade of activity in your brain.
The essential feature of this complex set of mental events is its coherence. Each element is connected, and each supports and strengthens the others. The word evokes memories, which evoke emotions, which in turn evoke facial expressions and other reactions, such as a general tensing up and an avoidance tendency. The facial expression and the avoidance motion intensify the feelings to which they are linked, and the feelings in turn reinforce compatible ideas. All this happens quickly and all at once, yielding a self-reinforcing pattern of cognitive, emotional, and physical responses that is both diverse and integrated—it has been called associatively coherent.

In a second or so you accomplished, automatically and unconsciously, a remarkable feat. Starting from a completely unexpected event, your System 1 made as much sense as possible of the situation—two simple words, oddly juxtaposed—by linking the words in a causal story; it evaluated the possible threat (mild to moderate) and created a context for future developments by preparing you for events that had just become more likely; it also created a context for the current event by evaluating how surprising it was. You ended up as informed about the past and as prepared for the future as you could be.

An odd feature of what happened is that your System 1 treated the mere conjunction of two words as representations of reality. Your body reacted in an attenuated replica of a reaction to the real thing, and the emotional response and physical recoil were part of the interpretation of the event. As cognitive scientists have emphasized in recent years, cognition is embodied; you think with your body, not only with your brain. Furthermore, only a few of the activated ideas will register in consciousness; most of the work of associative thinking is silent, hidden from our conscious selves.

Studying priming and associative thinking in the 1980s, psychologists discovered that exposure to a word causes immediate and measurable changes in the ease with which many related words can be evoked. If you have recently seen or heard the word EAT, you are temporarily more likely to complete the word fragment SO_P as SOUP than as SOAP.

Priming effects take many forms. If the idea of EAT is currently on your mind (whether or not you are conscious of it), you will be quicker than usual to recognize the word SOUP when it is spoken in a whisper or presented in a blurry font. And of course you are primed not only for the idea of soup but also for a multitude of food-related ideas, including fork, hungry, fat, diet, and cookie.

Another major advance in our understanding of memory was the discovery that priming is not restricted to concepts and words. You cannot know this from conscious experience, of course, but you must accept the alien idea that your actions and your emotions can be primed by events of which you are not even aware.
In an experiment that became an instant classic, the psychologist John Bargh and his collaborators asked students at New York University—most aged eighteen to twenty-two—to assemble four-word sentences from a set of five words (for example, "finds he it yellow instantly"). For one group of students, half the scrambled sentences contained words associated with the elderly, such as Florida, forgetful, bald, gray, or wrinkle. When they had completed that task, the young participants were sent out to do another experiment in an office down the hall.

That short walk was what the experiment was about. The researchers unobtrusively measured the time it took people to get from one end of the corridor to the other. As Bargh had predicted, the young people who had fashioned a sentence from words with an elderly theme walked down the hallway significantly more slowly than the others. This is the "Florida effect".

Reciprocal links are common in the associative network. For example, being amused tends to make you smile, but the act of smiling too tends to make you feel amused. (Try it!)

Studies of priming effects have yielded discoveries that threaten our self-image as conscious and autonomous authors of our judgments and our choices. For instance, most of us think of voting as a deliberate act that reflects our values and our assessments of policies and is not influenced by irrelevancies. Our vote should not be affected by the location of the polling station, for example, but it is. A study of voting patterns in precincts of Arizona in 2000 showed that the support for propositions to increase the funding of schools was significantly greater when the polling station was in a school than when it was in a nearby location.

Reminders of money produce some troubling effects. Money-primed people become more independent than they would be without the associative trigger. They persevered almost twice as long in trying to solve a very difficult problem before they asked the experimenter for help, a crisp demonstration of increased self-reliance. Money-primed people are also more selfish: they were much less willing to spend time helping another student who pretended to be confused about an experimental task. When an experimenter clumsily dropped a bunch of pencils on the floor, the participants with money (unconsciously) on their mind picked up fewer pencils. In another experiment in the series, participants were told that they would shortly have a get-acquainted conversation with another person and were asked to set up two chairs while the experimenter left to retrieve that person. Participants primed by money chose to stay much farther apart than their nonprimed peers (118 vs. 80 centimeters).

In a related demonstration of cognitive ease, people who had earlier been exposed to the phrase "the body temperature of a chicken" were more likely to accept as true the statement that "the body temperature of a chicken is 144°" (or any other arbitrary number). The familiarity of one phrase in the statement sufficed to make the whole statement feel familiar, and therefore true.
If you cannot remember the source of a statement, and have no way to relate it to other things you know, you have no option but to go with the sense of cognitive ease.

How to Write a Persuasive Message

Suppose you must write a message that you want the recipients to believe. Of course, your message will be true, but that is not necessarily enough for people to believe that it is true. It is entirely legitimate for you to enlist cognitive ease to work in your favor, and studies of truth illusions provide specific suggestions that may help you achieve this goal. The general principle is that anything you can do to reduce cognitive strain will help, so you should first maximize legibility.

More advice: if your message is to be printed, use high-quality paper to maximize the contrast between characters and their background. If you use color, you are more likely to be believed if your text is printed in bright blue or red than in middling shades of green, yellow, or pale blue.

If you care about being thought credible and intelligent, do not use complex language where simpler language will do. My Princeton colleague Danny Oppenheimer refuted a myth prevalent among undergraduates about the vocabulary that professors find most impressive. In an article titled "Consequences of Erudite Vernacular Utilized Irrespective of Necessity: Problems with Using Long Words Needlessly," he showed that couching familiar ideas in pretentious language is taken as a sign of poor intelligence and low credibility. In addition to making your message simple, try to make it memorable.

Finally, if you quote a source, choose one with a name that is easy to pronounce. Participants in an experiment were asked to evaluate the prospects of fictitious Turkish companies on the basis of reports from two brokerage firms. For each stock, one of the reports came from an easily pronounced name (e.g., Artan) and the other report came from a firm with an unfortunate name (e.g., Taahhut). The reports sometimes disagreed. The best procedure for the observers would have been to average the two reports, but this is not what they did. They gave much more weight to the report from Artan than to the report from Taahhut. Remember that System 2 is lazy and that mental effort is aversive. If possible, the recipients of your message want to stay away from anything that reminds them of effort, including a source with a complicated name.

All this is very good advice, but we should not get carried away. High-quality paper, bright colors, and rhyming or simple language will not be much help if your message is obviously nonsensical, or if it contradicts facts that your audience knows to be true. The psychologists who do these experiments do not believe that people are stupid or infinitely gullible. What psychologists do believe is that all of us live much of our life guided by the impressions of System 1—and we often do not know the source of these impressions.
How do you know that a statement is true? If it is strongly linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you will feel a sense of cognitive ease.

The famed psychologist Robert Zajonc dedicated much of his career to the study of the link between the repetition of an arbitrary stimulus and the mild affection that people eventually have for it. Zajonc called it the mere exposure effect. A demonstration conducted in the student newspapers of the University of Michigan and of Michigan State University is one of my favorite experiments. For a period of some weeks, an ad-like box appeared on the front page of the paper, which contained one of the following Turkish (or Turkish-sounding) words: kadirga, saricik, biwonjni, nansoma, and iktitaf. The frequency with which the words were repeated varied: one of the words was shown only once, the others appeared on two, five, ten, or twenty-five separate occasions. (The words that were presented most often in one of the university papers were the least frequent in the other.) No explanation was offered, and readers' queries were answered by the statement that "the purchaser of the display wished for anonymity." When the mysterious series of ads ended, the investigators sent questionnaires to the university communities, asking for impressions of whether each of the words "means something 'good' or something 'bad.'" The results were spectacular: the words that were presented more frequently were rated much more favorably than the words that had been shown only once or twice. The finding has been confirmed in many experiments, using Chinese ideographs, faces, and randomly shaped polygons.

The main function of System 1 is to maintain and update a model of your personal world, which represents what is normal in it. A capacity for surprise is an essential aspect of our mental life, and surprise itself is the most sensitive indication of how we understand our world and what we expect from it.

"How many animals of each kind did Moses take into the ark?"

The number of people who detect what is wrong with this question is so small that it has been dubbed the "Moses illusion." Moses took no animals into the ark; Noah did. Like the incident of the wincing soup eater, the Moses illusion is readily explained by norm theory. The idea of animals going into the ark sets up a biblical context, and Moses is not abnormal in that context. You did not positively expect him, but the mention of his name is not surprising. It also helps that Moses and Noah have the same vowel sound and number of syllables. As with the triads that produce cognitive ease, you unconsciously detect associative coherence between "Moses" and "ark" and so quickly accept the question. Replace Moses with George W. Bush in this sentence and you will have a poor political joke but no illusion.
If you like the president's politics, you probably like his voice and his appearance as well. The tendency to like (or dislike) everything about a person—including things you have not observed—is known as the halo effect. The term has been in use in psychology for a century, but it has not come into wide use in everyday language. This is a pity, because the halo effect is a good name for a common bias that plays a large role in shaping our view of people and situations. It is one of the ways the representation of the world that System 1 generates is simpler and more coherent than the real thing.

Compare these two statements:

Adolf Hitler was born in 1892.
Adolf Hitler was born in 1887.

Both are false (Hitler was born in 1889), but experiments have shown that the first is more likely to be believed.

The order of items within lists also influences our intuition. In an enduring classic of psychology, Solomon Asch presented descriptions of two people and asked for comments on their personality. What do you think of Alan and Ben?

Alan: intelligent—industrious—impulsive—critical—stubborn—envious
Ben: envious—stubborn—critical—impulsive—industrious—intelligent

If you are like most of us, you viewed Alan much more favorably than Ben. The initial traits in the list change the very meaning of the traits that appear later. The stubbornness of an intelligent person is seen as likely to be justified and may actually evoke respect, but intelligence in an envious and stubborn person makes him more dangerous. The halo effect is also an example of suppressed ambiguity: the adjective stubborn is ambiguous and will be interpreted in a way that makes it coherent with the context.

Other judgment influencers:

"Evaluating people as attractive or not is a basic assessment. You do that automatically whether or not you want to, and it influences you."

"There are circuits in the brain that evaluate dominance from the shape of the face. He looks the part for a leadership role."

"The punishment won't feel just unless its intensity matches the crime. Just like you can match the loudness of a sound to the brightness of a light."

A well-known finding of educational research is that the most successful schools, on average, are small. In a survey of 1,662 schools in Pennsylvania, for instance, 6 of the top 50 were small, which is an overrepresentation by a factor of 4. These data encouraged the Gates Foundation to make a substantial investment in the creation of small schools, sometimes by splitting large schools into smaller units. At least half a dozen other prominent institutions, such as the Annenberg Foundation and the Pew Charitable Trust, joined the effort, as did the U.S. Department of Education's Smaller Learning Communities Program.

This probably makes intuitive sense to you.
It is easy to construct a causal story that explains how small schools are able to provide superior education and thus produce high-achieving scholars by giving them more personal attention and encouragement than they could get in larger schools. Unfortunately, the causal analysis is pointless because the facts are wrong. If the statisticians who reported to the Gates Foundation had asked about the characteristics of the worst schools, they would have found that bad schools also tend to be smaller than average. The truth is that small schools are not better on average; they are simply more variable. If anything, say Wainer and Zwerling, large schools tend to produce better results, especially in higher grades where a variety of curricular and extra-curricular options is valuable.

Many psychological phenomena can be demonstrated experimentally, but few can actually be measured. The effect of anchors is an exception. Anchoring can be measured, and it is an impressively large effect. Some visitors at the San Francisco Exploratorium were asked the following two questions:

Is the height of the tallest redwood more or less than 1,200 feet?
What is your best guess about the height of the tallest redwood?

The "high anchor" in this experiment was 1,200 feet. For other participants, the first question referred to a "low anchor" of 180 feet. The difference between the two anchors was 1,020 feet. As expected, the two groups produced very different mean estimates: 844 and 282 feet. The difference between them was 562 feet. The anchoring index is simply the ratio of the two differences (562/1,020) expressed as a percentage: 55%.

The anchoring measure would be 100% for people who slavishly adopt the anchor as an estimate, and zero for people who are able to ignore the anchor altogether. The value of 55% that was observed in this example is typical. Similar values have been observed in numerous other problems.

The anchoring effect is not a laboratory curiosity; it can be just as strong in the real world. In an experiment conducted some years ago, real-estate agents were given an opportunity to assess the value of a house that was actually on the market. They visited the house and studied a comprehensive booklet of information that included an asking price. Half the agents saw an asking price that was substantially higher than the listed price of the house; the other half saw an asking price that was substantially lower. Each agent gave her opinion about a reasonable buying price for the house and the lowest price at which she would agree to sell the house if she owned it.

The agents were then asked about the factors that had affected their judgment. Remarkably, the asking price was not one of these factors; the agents took pride in their ability to ignore it. They insisted that the listing price had no effect on their responses, but they were wrong: the anchoring effect was 41%.
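Written out as a formula, the anchoring index is the spread between the two groups' mean estimates divided by the spread between the two anchors; with the redwood numbers reported above:

\[
\text{anchoring index} = \frac{844 - 282}{1200 - 180} = \frac{562}{1020} \approx 55\%.
\]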
Indeed, the professionals were almost as susceptible to anchoring effects as business school students with no real estate experience, whose anchoring index was 48%. The only difference between the two groups was that the students conceded that they were influenced by the anchor, while the professionals denied that influence.

By now you should be convinced that anchoring effects—sometimes due to priming, sometimes to insufficient adjustment—are everywhere. The psychological mechanisms that produce anchoring make us far more suggestible than most of us would want to be. And of course there are quite a few people who are willing and able to exploit our gullibility.

Anchoring effects explain why, for example, arbitrary rationing is an effective marketing ploy. A few years ago, supermarket shoppers in Sioux City, Iowa, encountered a sales promotion for Campbell's soup at about 10% off the regular price. On some days, a sign on the shelf said limit of 12 per person. On other days, the sign said no limit per person. Shoppers purchased an average of 7 cans when the limit was in force, twice as many as they bought when the limit was removed.

Anchoring is not the sole explanation. Rationing also implies that the goods are flying off the shelves, and shoppers should feel some urgency about stocking up. But we also know that the mention of 12 cans as a possible purchase would produce anchoring even if the number were produced by a roulette wheel.

We see the same strategy at work in the negotiation over the price of a home, when the seller makes the first move by setting the list price. As in many other games, moving first is an advantage in single-issue negotiations—for example, when price is the only issue to be settled between a buyer and a seller.

As you may have experienced when negotiating for the first time in a bazaar, the initial anchor has a powerful effect. My advice to students when I taught negotiations was that if you think the other side has made an outrageous proposal, you should not come back with an equally outrageous counteroffer, creating a gap that will be difficult to bridge in further negotiations. Instead you should make a scene, storm out or threaten to do so, and make it clear—to yourself as well as to the other side—that you will not continue the negotiation with that number on the table.

The psychologists Adam Galinsky and Thomas Mussweiler proposed more subtle ways to resist the anchoring effect in negotiations. They instructed negotiators to focus their attention and search their memory for arguments against the anchor. The instruction to activate System 2 was successful. For example, the anchoring effect is reduced or eliminated when the second mover focuses his attention on the minimal offer that the opponent would accept, or on the costs to the opponent of failing to reach an agreement.
In general, a strategy of deliberately "thinking the opposite" may be a good defense against anchoring effects, because it negates the biased recruitment of thoughts that produces these effects.

Regression to the Mean

I had one of the most satisfying eureka experiences of my career while teaching flight instructors in the Israeli Air Force about the psychology of effective training. I was telling them about an important principle of skill training: rewards for improved performance work better than punishment of mistakes. This proposition is supported by research on pigeons, rats, humans, and other animals.

When I finished my enthusiastic speech, one of the most seasoned instructors in the group raised his hand and made a short speech of his own. He began by conceding that rewarding improved performance might be good for the birds, but he denied that it was optimal for flight cadets. This is what he said: "On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver. The next time they try the same maneuver they usually do worse. On the other hand, I have often screamed into a cadet's earphone for bad execution, and in general he does better on his next try. So please don't tell us that reward works and punishment does not, because the opposite is the case."

This was a joyous moment of insight, when I saw in a new light a principle of statistics that I had been teaching for years. The instructor was right—but he was also completely wrong! His observation was astute and correct: occasions on which he praised a performance were likely to be followed by a disappointing performance, and punishments were typically followed by an improvement. But the inference he had drawn about the efficacy of reward and punishment was completely off the mark.

What he had observed is known as regression to the mean, which in that case was due to random fluctuations in the quality of performance. Naturally, he praised only a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and therefore likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into a cadet's earphones only when the cadet's performance was unusually bad and therefore likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process.

The next example comes from a context that is familiar in the academic world, but the analogies to other spheres of life are immediate.

A department is about to hire a young professor and wants to choose the one whose prospects for scientific productivity are the best. The search committee has narrowed down the choice to two candidates:

Kim recently completed her graduate work. Her recommendations are spectacular and she gave a brilliant talk and impressed everyone in her interviews. She has no substantial track record of scientific productivity.
Jane has held a postdoctoral position for the last three years. She has been very productive and her research record is excellent, but her talk and interviews were less sparkling than Kim's.

The intuitive choice favors Kim, because she left a stronger impression, and WYSIATI (What you see is all there is). But it is also the case that there is much less information about Kim than about Jane. We are back to the law of small numbers. In effect, you have a smaller sample of information from Kim than from Jane, and extreme outcomes are much more likely to be observed in small samples. There is more luck in the outcomes of small samples, and you should therefore regress your prediction of Kim's future performance more deeply toward the mean. When you allow for the fact that Kim is likely to regress more than Jane, you might end up selecting Jane although you were less impressed by her. In the context of academic choices, I would vote for Jane, but it would be a struggle to overcome my intuitive impression that Kim is more promising. Following our intuitions is more natural, and somehow more pleasant, than acting against them.

Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1. It is natural for the associative machinery to match the extremeness of predictions to the perceived extremeness of evidence on which it is based—this is how substitution works. And it is natural for System 1 to generate overconfident judgments, because confidence, as we have seen, is determined by the coherence of the best story you can tell from the evidence at hand. Be warned: your intuitions will deliver predictions that are too extreme and you will be inclined to put far too much faith in them.

Regression is also a problem for System 2. The very idea of regression to the mean is alien and difficult to communicate and comprehend. Galton had a hard time before he understood it. Many statistics teachers dread the class in which the topic comes up, and their students often end up with only a vague understanding of this crucial concept. This is a case where System 2 requires special training. Matching predictions to the evidence is not only something we do intuitively; it also seems a reasonable thing to do. We will not learn to understand regression from experience. Even when a regression is identified, as we saw in the story of the flight instructors, it will be given a causal interpretation that is almost always wrong.

The trader-philosopher-statistician Nassim Taleb could also be considered a psychologist. In The Black Swan, Taleb introduced the notion of a narrative fallacy to describe how flawed stories of the past shape our views of the world and our expectations for the future. Narrative fallacies arise inevitably from our continuous attempt to make sense of the world.
The explanatory stories that people find compelling are simple; are concrete rather than abstract; assign a larger role to talent, stupidity, and intentions than to luck; and focus on a few striking events that happened rather than on the countless events that failed to happen. Any recent salient event is a candidate to become the kernel of a causal narrative. Taleb suggests that we humans constantly fool ourselves by constructing flimsy accounts of the past and believing they are true.

The mind that makes up narratives about the past is a sense-making organ. When an unpredicted event occurs, we immediately adjust our view of the world to accommodate the surprise. Imagine yourself before a football game between two teams that have the same record of wins and losses. Now the game is over, and one team trashed the other. In your revised model of the world, the winning team is much stronger than the loser, and your view of the past as well as of the future has been altered by that new perception. Learning from surprises is a reasonable thing to do, but it can have some dangerous consequences.

A general limitation of the human mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed. Once you adopt a new view of the world (or of any part of it), you immediately lose much of your ability to recall what you used to believe before your mind changed. Many psychologists have studied what happens when people change their minds. Choosing a topic on which minds are not completely made up—say, the death penalty—the experimenter carefully measures people's attitudes. Next, the participants see or hear a persuasive pro or con message. Then the experimenter measures people's attitudes again; they usually are closer to the persuasive message they were exposed to. Finally, the participants report the opinion they held beforehand. This task turns out to be surprisingly difficult. Asked to reconstruct their former beliefs, people retrieve their current ones instead—an instance of substitution—and many cannot believe that they ever felt differently.

Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events. Baruch Fischhoff first demonstrated this "I-knew-it-all-along" effect, or hindsight bias, when he was a student in Jerusalem. Together with Ruth Beyth (another of our students), Fischhoff conducted a survey before President Richard Nixon visited China and Russia. The respondents assigned probabilities to fifteen possible outcomes of Nixon's diplomatic initiatives. Would Mao Zedong agree to meet with Nixon? Might the United States grant diplomatic recognition to China? After decades of enmity, could the United States and the Soviet Union agree on anything significant? After Nixon's return from his travels, Fischhoff and Beyth asked the same people to recall the probability that they had originally assigned to each of the fifteen possible outcomes.
The results were clear. If an event had actually occurred, people exaggerated the probability that they had assigned to it earlier. If the possible event had not come to pass, the participants erroneously recalled that they had always considered it unlikely. Further experiments showed that people were driven to overstate the accuracy not only of their original predictions but also of those made by others. Similar results have been found for other events that gripped public attention, such as the O. J. Simpson murder trial and the impeachment of President Bill Clinton. The tendency to revise the history of one's beliefs in light of what actually happened produces a robust cognitive illusion.

Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight. Based on an actual legal case, students in California were asked whether the city of Duluth, Minnesota, should have shouldered the considerable cost of hiring a full-time bridge monitor to protect against the risk that debris might get caught and block the free flow of water. One group was shown only the evidence available at the time of the city's decision; 24% of these people felt that Duluth should take on the expense of hiring a flood monitor. The second group was informed that debris had blocked the river, causing major flood damage; 56% of these people said the city should have hired the monitor, although they had been explicitly instructed not to let hindsight distort their judgment.

Although hindsight and the outcome bias generally foster risk aversion, they also bring undeserved rewards to irresponsible risk seekers, such as a general or an entrepreneur who took a crazy gamble and won. Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success, and the sensible people who doubted them are seen in hindsight as mediocre, timid, and weak. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.

The main point of this chapter is not that people who attempt to predict the future make many errors; that goes without saying. The first lesson is that errors of prediction are inevitable because the world is unpredictable. The second is that high subjective confidence is not to be trusted as an indicator of accuracy.

In the slim volume that he later called "my disturbing little book," Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule. In a typical study, trained counselors predicted the grades of freshmen at the end of the school year. The counselors interviewed each student for forty-five minutes.
They also had access to high school grades, several aptitude tests, and a four-page personal statement.

You can measure the extent of your aversion to losses by asking yourself a question: What is the smallest gain that I need to balance an equal chance to lose $100? For many people the answer is about $200, twice as much as the loss. The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5. This is an average, of course; some people are much more loss averse than others. When you are offered a gamble in which you stand to gain more than you can lose, you probably still dislike it—most people do. The rejection of such a gamble is an act of System 2.

Richard Thaler found many examples of what he called the endowment effect. Suppose you hold a ticket to a sold-out concert by a popular band, which you bought at the regular price of $200. You are an avid fan and would have been willing to pay up to $500 for the ticket. Now you have your ticket and you learn on the Internet that richer or more desperate fans are offering $3,000. Would you sell? If you resemble most of the audience at sold-out events you do not sell. Your lowest selling price is above $3,000 and your maximum buying price is $500. This is an example of an endowment effect, and a believer in standard economic theory would be puzzled by it.

Prospect theory suggested that the willingness to buy or sell an item depends on the reference point—whether or not the person owns the item now. If he owns it, he considers the pain of giving up the item. If he does not own it, he considers the pleasure of getting the item. The values were unequal because of loss aversion: giving up a nice item is more painful than getting an equally good item is pleasurable.

Other scholars, in a paper titled "Bad Is Stronger Than Good," summarized the evidence as follows: "Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones." They cite John Gottman, the well-known expert in marital relations, who observed that the long-term success of a relationship depends far more on avoiding the negative than on seeking the positive.

Every stroke counts in golf, and in professional golf every stroke counts a lot. According to prospect theory, however, some strokes count more than others. Failing to make par is a loss but missing a birdie putt is a foregone gain, not a loss. Pope and Schweitzer reasoned from loss aversion that players would try a little harder when putting for par (to avoid a bogey) than when putting for a birdie. They analyzed more than 2.5 million putts in exquisite detail to test that prediction.

They were right.
Whether the putt was easy or hard, at every distance from the hole, the players were more successful when putting for par than for a birdie. The difference in their rate of success when going for par (to avoid a bogey) or for a birdie was 3.6%. This difference is not trivial. Tiger Woods was one of the "participants" in their study. If in his best years Tiger Woods had managed to putt as well for birdies as he did for par, his average tournament score would have improved by one stroke and his earnings by almost $1 million per season.

Framing

An experiment that Amos carried out with colleagues at Harvard Medical School is the classic example of emotional framing. Physician participants were given statistics about the outcomes of two treatments for lung cancer: surgery and radiation. The five-year survival rates clearly favor surgery, but in the short term surgery is riskier than radiation. Half the participants read statistics about survival rates, the others received the same information in terms of mortality rates. The two descriptions of the short-term outcomes of surgery were:

The one-month survival rate is 90%.
There is 10% mortality in the first month.

You already know the results: surgery was much more popular in the former frame (84% of physicians chose it) than in the latter (where 50% favored radiation). The logical equivalence of the two descriptions is transparent, and a reality-bound decision maker would make the same choice regardless of which version she saw.

But System 1, as we have gotten to know it, is rarely indifferent to emotional words: mortality is bad, survival is good, and 90% survival sounds encouraging whereas 10% mortality is frightening. An important finding of the study is that physicians were just as susceptible to the framing effect as medically unsophisticated people (hospital patients and graduate students in a business school). Medical training is, evidently, no defense against the power of framing.

A directive about organ donation in case of accidental death is noted on an individual's driver license in many countries. The formulation of that directive is another case in which one frame is clearly superior to the other. Few people would argue that the decision of whether or not to donate one's organs is unimportant, but there is strong evidence that most people make their choice thoughtlessly. The evidence comes from a comparison of the rate of organ donation in European countries, which reveals startling differences between neighboring and culturally similar countries. An article published in 2003 noted that the rate of organ donation was close to 100% in Austria but only 12% in Germany, 86% in Sweden but only 4% in Denmark.

These enormous differences are a framing effect, which is caused by the format of the critical question. The high-donation countries have an opt-out form, where individuals who wish not to donate must check an appropriate box.
Unless they take this simple action, they are considered willing donors. The low-contribution countries have an opt-in form: you must check a box to become a donor. That is all. The best single predictor of whether or not people will donate their organs is the designation of the default option that will be adopted without having to check a box. (Duncan note: The Ontario government recently adopted the opt-out option.)

As we have seen again and again, an important choice is controlled by an utterly inconsequential feature of the situation. This is embarrassing—it is not how we would wish to make important decisions. Furthermore, it is not how we experience the workings of our mind, but the evidence for these cognitive illusions is undeniable.

How should we answer questions such as "How much pain did Helen suffer during the medical procedure?" or "How much enjoyment did she get from her 20 minutes on the beach?" The British economist Francis Edgeworth speculated about this topic in the nineteenth century and proposed the idea of a "hedonimeter," an imaginary instrument analogous to the devices used in weather-recording stations, which would measure the level of pleasure or pain that an individual experiences at any moment.

The answer to the question of how much pain or pleasure Helen experienced during her medical procedure or vacation would be the "area under the curve." Time plays a critical role in Edgeworth's conception. If Helen stays on the beach for 40 minutes instead of 20, and her enjoyment remains as intense, then the total experienced utility of that episode doubles, just as doubling the number of injections makes a course of injections twice as bad. This was Edgeworth's theory, and we now have a precise understanding of the conditions under which his theory holds.

The graphs in figure 15 show profiles of the experiences of two patients undergoing a painful colonoscopy, drawn from a study that Don Redelmeier and I designed together. Redelmeier, a physician and researcher at the University of Toronto, carried it out in the early 1990s. This procedure is now routinely administered with an anesthetic as well as an amnesic drug, but these drugs were not as widespread when our data were collected. The patients were prompted every 60 seconds to indicate the level of pain they experienced at the moment. The data shown are on a scale where zero is "no pain at all" and 10 is "intolerable pain."

The psychologist Ed Diener and his students wondered whether duration neglect and the peak-end rule would govern evaluations of entire lives. They used a short description of the life of a fictitious character called Jen, a never-married woman with no children, who died instantly and painlessly in an automobile accident.

In one version of Jen's story, she was extremely happy throughout her life (which lasted either 30 or 60 years), enjoying her work, taking vacations, spending time with her friends and on her hobbies.
Another version added 5 extra years to Jen's life; in it she died either at 35 or at 65. The extra years were described as pleasant, but less so than before. After reading a schematic biography of Jen, each participant answered two questions: "Taking her life as a whole, how desirable do you think Jen's life was?" and "How much total happiness or unhappiness would you say that Jen experienced in her life?"

The results provided clear evidence of both duration neglect and a peak-end effect. In a between-subjects experiment (different participants saw different forms), doubling the duration of Jen's life had no effect whatsoever on the desirability of her life, or on judgments of the total happiness that Jen experienced.

Consistent with this pattern, Diener and his students also found a less-is-more effect, a strong indication that an average (prototype) has been substituted for a sum. Adding 5 "slightly happy" years to a very happy life caused a substantial drop in evaluations of the total happiness of that life.

At my urging, they also collected data on the effect of the extra 5 years in a within-subject experiment; each participant made both judgments in immediate succession. In spite of my long experience with judgment errors, I did not believe that reasonable people could say that adding 5 slightly happy years to a life would make it substantially worse. I was wrong. The intuition that the disappointing extra 5 years made the whole life worse was overwhelming.

The pattern of judgments seemed so absurd that Diener and his students initially thought that it represented the folly of the young people who participated in their experiments. However, the pattern did not change when the parents and older friends of students answered the same questions. In intuitive evaluation of entire lives as well as brief episodes, peaks and ends matter but duration does not.

Conclusions

I began this book by introducing two fictitious characters, spent some time discussing two species, and ended with two selves. The two characters were the intuitive System 1, which does the fast thinking, and the effortful and slower System 2, which does the slow thinking, monitors System 1, and maintains control as best it can within its limited resources.

The two species were the fictitious Econs, who live in the land of theory, and the Humans, who act in the real world. The two selves are the experiencing self, which does the living, and the remembering self, which keeps score and makes the choices.

The definition of rationality as coherence is impossibly restrictive; it demands adherence to rules of logic that a finite mind is not able to implement. Reasonable people cannot be rational by that definition, but they should not be branded as irrational for that reason. Irrational is a strong word, which connotes impulsivity, emotionality, and a stubborn resistance to reasonable argument.
I often cringe when my work with Amos is credited with demonstrating that human choices are irrational, when in fact our research only showed that Humans are not well described by the rational-agent model.

Although Humans are not irrational, they often need help to make more accurate judgments and better decisions, and in some cases policies and institutions can provide that help. These claims may seem innocuous, but they are in fact quite controversial. As interpreted by the important Chicago school of economics, faith in human rationality is closely linked to an ideology in which it is unnecessary and even immoral to protect people against their choices. Rational people should be free, and they should be responsible for taking care of themselves. Milton Friedman, the leading figure in that school, expressed this view in the title of one of his popular books: Free to Choose.

The assumption that agents are rational provides the intellectual foundation for the libertarian approach to public policy: do not interfere with the individual's right to choose, unless the choices harm others. Libertarian policies are further bolstered by admiration for the efficiency of markets in allocating goods to the people who are willing to pay the most for them.

In a nation of Econs, government should keep out of the way, allowing the Econs to act as they choose, so long as they do not harm others. If a motorcycle rider chooses to ride without a helmet, a libertarian will support his right to do so. Citizens know what they are doing, even when they choose not to save for their old age, or when they expose themselves to addictive substances. There is sometimes a hard edge to this position: elderly people who did not save enough for retirement get little more sympathy than someone who complains about the bill after consuming a large meal at a restaurant. Much is therefore at stake in the debate between the Chicago school and the behavioral economists, who reject the extreme form of the rational-agent model. Freedom is not a contested value; all the participants in the debate are in favor of it.

But life is more complex for behavioral economists than for the believers in human rationality. No behavioral economist favors a state that will force its citizens to eat a balanced diet and to watch only television programs that are good for the soul. For behavioral economists, however, freedom has a cost, which is borne by individuals who make bad choices, and by a society that feels obligated to help them. The decision of whether or not to protect individuals against their mistakes therefore presents a dilemma for behavioral economists. The economists of the Chicago school do not face that problem, because rational agents do not make mistakes. For adherents of this school, freedom is free of charge.
In 2008 the economist Richard Thaler and the jurist Cass Sunstein teamed up to write a book, Nudge, which quickly became an international bestseller and the bible of behavioral economics. Their book introduced several new words into the language, including Econs and Humans. It also presented a set of solutions to the dilemma of how to help people make good decisions without curtailing their freedom. Thaler and Sunstein advocate a position of libertarian paternalism, in which the state and other institutions are allowed to nudge people to make decisions that serve their own long-term interests. The designation of joining a pension plan as the default option is an example of a nudge.

It is difficult to argue that anyone's freedom is diminished by being automatically enrolled in the plan, when they merely have to check a box to opt out. As we saw earlier, the framing of the individual's decision, which Thaler and Sunstein call choice architecture, has a huge effect on the outcome. The nudge is based on sound psychology, which I described earlier. The default option is naturally perceived as the normal choice. Deviating from the normal choice is an act of commission, which requires more effortful deliberation, takes on more responsibility, and is more likely to evoke regret than doing nothing. These are powerful forces that may guide the decision of someone who is otherwise unsure of what to do.

Humans, more than Econs, also need protection from others who deliberately exploit their weaknesses, especially the quirks of System 1 and the laziness of System 2. Rational agents are assumed to make important decisions carefully, and to use all the information that is provided to them. An Econ will read and understand the fine print of a contract before signing it, but Humans usually do not. An unscrupulous firm that designs contracts that customers will routinely sign without reading has considerable legal leeway in hiding important information in plain sight.

A pernicious implication of the rational-agent model in its extreme form is that customers are assumed to need no protection beyond ensuring that the relevant information is disclosed. The size of the print and the complexity of the language in the disclosure are not considered relevant: an Econ knows how to deal with small print when it matters. In contrast, the recommendations of Nudge require firms to offer contracts that are sufficiently simple to be read and understood by Human customers. It is a good sign that some of these recommendations have encountered significant opposition from firms whose profits might suffer if their customers were better informed. A world in which firms compete by offering better products is preferable to one in which the winner is the firm that is best at obfuscation.

A remarkable feature of libertarian paternalism is its appeal across a broad political spectrum. The flagship example of behavioral policy, called Save More Tomorrow, was sponsored in Congress by an unusual coalition that included extreme conservatives as well as liberals.
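The leverage of the default option, which runs through both the organ-donation and the pension-enrollment examples, can be sketched with a toy simulation. Nothing in it comes from the book: the assumption that only 30% of people actively decide, and that 60% of those active choosers would say yes, are invented numbers chosen only to show how identical preferences produce very different participation under opt-in and opt-out defaults.

# Toy simulation of choice architecture: people who never act simply inherit the default.
import random

def participation_rate(default_enrolled, n=100_000, p_active=0.30, p_yes_if_active=0.60, seed=1):
    # Fraction of the population that ends up enrolled, given a default setting.
    rng = random.Random(seed)
    enrolled = 0
    for _ in range(n):
        if rng.random() < p_active:                # an active chooser decides for herself
            enrolled += rng.random() < p_yes_if_active
        else:                                      # an inert chooser keeps the form's default
            enrolled += default_enrolled
    return enrolled / n

print("opt-in  (default: not enrolled):", round(participation_rate(False), 3))
print("opt-out (default: enrolled):    ", round(participation_rate(True), 3))

With these made-up inputs, roughly 18% end up enrolled under the opt-in default and roughly 88% under the opt-out default, a gap of the same character as the Germany-Austria contrast described earlier.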