1 00:00:04,680 --> 00:00:08,800 We like to think that as a species, we are pretty smart. 2 00:00:11,480 --> 00:00:15,120 We like to think we are wise, rational creatures. 3 00:00:15,120 --> 00:00:18,000 I think we all like to think of ourselves as Mr Spock to some degree. 4 00:00:18,000 --> 00:00:20,200 You know, we make rational, conscious decisions. 5 00:00:21,400 --> 00:00:22,960 But we may have to think again. 6 00:00:24,000 --> 00:00:28,000 It's mostly delusion, and we should just wake up to that fact. 7 00:00:31,360 --> 00:00:32,920 In every decision you make, 8 00:00:32,920 --> 00:00:36,120 there's a battle in your mind between intuition and logic. 9 00:00:38,400 --> 00:00:41,440 It's a conflict that plays out in every aspect of your life. 10 00:00:43,240 --> 00:00:44,280 What you eat. 11 00:00:45,520 --> 00:00:46,560 What you believe. 12 00:00:47,720 --> 00:00:49,080 Who you fall in love with. 13 00:00:50,280 --> 00:00:54,520 And most powerfully, in decisions you make about money. 14 00:00:54,520 --> 00:00:58,480 The moment money enters the picture, the rules change. 15 00:00:58,480 --> 00:01:00,680 Scientists now have a new way 16 00:01:00,680 --> 00:01:03,680 to understand this battle in your mind, 17 00:01:03,680 --> 00:01:07,920 how it shapes the decisions you take, what you believe... 18 00:01:09,480 --> 00:01:11,800 And how it has transformed our understanding 19 00:01:11,800 --> 00:01:13,320 of human nature itself. 20 00:01:26,160 --> 00:01:27,280 TAXI HORN BLARES 21 00:01:34,080 --> 00:01:38,440 Sitting in the back of this New York cab is Professor Danny Kahneman. 22 00:01:42,360 --> 00:01:46,640 He's regarded as one of the most influential psychologists alive today. 23 00:01:48,760 --> 00:01:50,040 Over the last 40 years, 24 00:01:50,040 --> 00:01:52,640 he's developed some extraordinary insights 25 00:01:52,640 --> 00:01:54,880 into the way we make decisions. 26 00:01:58,400 --> 00:02:02,360 I think it can't hurt to have a realistic view of human nature 27 00:02:02,360 --> 00:02:03,880 and of how the mind works. 28 00:02:07,000 --> 00:02:09,480 His insights come largely from puzzles. 29 00:02:13,720 --> 00:02:17,880 Take, for instance, the curious puzzle of New York cab drivers 30 00:02:17,880 --> 00:02:20,480 and their highly illogical working habits. 31 00:02:23,880 --> 00:02:26,040 Business varies according to the weather. 32 00:02:27,200 --> 00:02:29,280 On rainy days, everyone wants a cab. 33 00:02:30,520 --> 00:02:34,800 But on sunny days, like today, fares are hard to find. 34 00:02:34,800 --> 00:02:39,880 Logically, they should spend a lot of time driving on rainy days, 35 00:02:39,880 --> 00:02:43,600 because it's very easy to find passengers on rainy days, 36 00:02:43,600 --> 00:02:46,640 and if they are going to take leisure, it should be on sunny days 37 00:02:46,640 --> 00:02:50,280 but it turns out this is not what many of them do. 38 00:02:52,920 --> 00:02:57,120 Many do the opposite, working long hours on slow, sunny days 39 00:02:57,120 --> 00:02:59,960 and knocking off early when it's rainy and busy. 40 00:03:01,320 --> 00:03:05,320 Instead of thinking logically, the cabbies are driven by an urge 41 00:03:05,320 --> 00:03:10,320 to earn a set amount of cash each day, come rain or shine. 42 00:03:11,440 --> 00:03:13,360 Once they hit that target, they go home. 
43 00:03:17,040 --> 00:03:19,800 They view being below the target as a loss 44 00:03:19,800 --> 00:03:22,680 and being above the target as a gain, and they care 45 00:03:22,680 --> 00:03:26,720 more about preventing the loss than about achieving the gain. 46 00:03:28,680 --> 00:03:31,760 So when they reach their goal on a rainy day, they stop... 47 00:03:35,040 --> 00:03:37,440 ..which really doesn't make sense. 48 00:03:37,440 --> 00:03:39,880 If they were trying to maximise their income, 49 00:03:39,880 --> 00:03:42,160 they would take their leisure on sunny days 50 00:03:42,160 --> 00:03:44,440 and they would drive all day on rainy days. 51 00:03:49,800 --> 00:03:51,840 It was this kind of glitch in thinking 52 00:03:51,840 --> 00:03:54,840 that Kahneman realised could reveal something profound 53 00:03:54,840 --> 00:03:56,800 about the inner workings of the mind. 54 00:03:58,120 --> 00:04:00,040 Anyone want to take part in an experiment? 55 00:04:00,040 --> 00:04:03,320 And he began to devise a series of puzzles and questions 56 00:04:03,320 --> 00:04:06,280 which have become classic psychological tests. 57 00:04:10,120 --> 00:04:13,000 It's a simple experiment. It's a very attractive game... 58 00:04:13,000 --> 00:04:14,680 Don't worry, sir. Nothing strenuous. 59 00:04:14,680 --> 00:04:17,880 ..Posing problems where you can recognise in yourself 60 00:04:17,880 --> 00:04:21,400 that your intuition is going the wrong way. 61 00:04:21,400 --> 00:04:25,160 The type of puzzle where the answer that intuitively springs to mind, 62 00:04:25,160 --> 00:04:27,520 and that seems obvious, is, in fact, wrong. 63 00:04:30,080 --> 00:04:34,800 Here is one that I think works on just about everybody. 64 00:04:34,800 --> 00:04:36,840 I want you to imagine a guy called Steve. 65 00:04:36,840 --> 00:04:42,440 You tell people that Steve, you know, is a meek and tidy soul 66 00:04:42,440 --> 00:04:48,200 with a passion for detail and very little interest in people. 67 00:04:48,200 --> 00:04:49,800 He's got a good eye for detail. 68 00:04:49,800 --> 00:04:52,360 And then you tell people he was drawn at random. 69 00:04:52,360 --> 00:04:54,480 From a census of the American population. 70 00:04:54,480 --> 00:04:59,600 What's the probability that he is a farmer or a librarian? 71 00:04:59,600 --> 00:05:01,000 So do you think it's more likely 72 00:05:01,000 --> 00:05:03,760 that Steve's going to end up working as a librarian or a farmer? 73 00:05:03,760 --> 00:05:06,240 What's he more likely to be? 74 00:05:06,240 --> 00:05:07,840 Maybe a librarian. 75 00:05:07,840 --> 00:05:09,280 Librarian. 76 00:05:09,280 --> 00:05:10,640 Probably a librarian. 77 00:05:10,640 --> 00:05:11,680 A librarian. 78 00:05:11,680 --> 00:05:15,320 Immediately, you know, the thought pops to mind 79 00:05:15,320 --> 00:05:20,600 that it's a librarian, because he resembled the prototype of librarian. 80 00:05:20,600 --> 00:05:22,080 Probably a librarian. 81 00:05:22,080 --> 00:05:25,640 In fact, that's probably the wrong answer, 82 00:05:25,640 --> 00:05:28,320 because, at least in the United States, 83 00:05:28,320 --> 00:05:32,360 there are 20 times as many male farmers as male librarians. 84 00:05:32,360 --> 00:05:33,920 Librarian. Librarian. 85 00:05:33,920 --> 00:05:37,240 So there are probably more meek and tidy souls you know 86 00:05:37,240 --> 00:05:40,680 who are farmers than meek and tidy souls who are librarians. 
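To make the base-rate point concrete, here is a minimal sketch in Python of the arithmetic behind the Steve puzzle. Only the 20-to-1 ratio of male farmers to male librarians comes from the programme; the share of "meek and tidy" people in each job is an assumed, illustrative number.

```python
# Illustrative only: the base-rate arithmetic behind the "Steve" puzzle.
# The 20:1 ratio of male farmers to male librarians is quoted in the
# programme; the stereotype rates below are assumed numbers for the sketch.

male_librarians = 1.0      # take male librarians as the unit of comparison
male_farmers = 20.0        # "20 times as many male farmers as male librarians"

p_meek_given_librarian = 0.9   # assume most librarians fit the description
p_meek_given_farmer = 0.2      # assume far fewer farmers do

meek_librarians = male_librarians * p_meek_given_librarian   # 0.9
meek_farmers = male_farmers * p_meek_given_farmer            # 4.0

p_librarian_given_meek = meek_librarians / (meek_librarians + meek_farmers)
print(f"P(librarian | meek and tidy) = {p_librarian_given_meek:.2f}")   # about 0.18
```

Even if librarians are far more likely to fit the stereotype, the sheer number of farmers means a meek and tidy man drawn at random is still probably a farmer.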
87 00:05:45,240 --> 00:05:49,160 This type of puzzle seemed to reveal a discrepancy 88 00:05:49,160 --> 00:05:51,560 between intuition and logic. 89 00:05:51,560 --> 00:05:52,600 Another example is... 90 00:05:52,600 --> 00:05:55,440 Imagine a dictionary. I'm going to pull a word out of it at random. 91 00:05:55,440 --> 00:05:56,880 Which is more likely, 92 00:05:56,880 --> 00:06:02,280 that a word that you pick out at random has the letter 93 00:06:02,280 --> 00:06:07,280 R in the first position or has the letter R in the third position? 94 00:06:07,280 --> 00:06:09,320 Erm, start with the letter R. OK. 95 00:06:10,560 --> 00:06:12,920 People think the first position, 96 00:06:12,920 --> 00:06:15,080 because it's easy to think of examples. 97 00:06:15,080 --> 00:06:16,520 Start with it. First. 98 00:06:17,800 --> 00:06:20,680 In fact, there are nearly three times as many words with 99 00:06:20,680 --> 00:06:24,560 R as the third letter than words that begin with R, 100 00:06:24,560 --> 00:06:27,640 but that's not what our intuition tells us. 101 00:06:27,640 --> 00:06:31,120 So we have examples like that. Like, many of them. 102 00:06:34,640 --> 00:06:38,680 Kahneman's interest in human error was first sparked in the 1970s 103 00:06:38,680 --> 00:06:41,280 when he and his colleague, Amos Tversky, 104 00:06:41,280 --> 00:06:43,600 began looking at their own mistakes. 105 00:06:45,040 --> 00:06:47,120 It was all in ourselves. 106 00:06:47,120 --> 00:06:50,920 That is, all the mistakes that we studied were mistakes 107 00:06:50,920 --> 00:06:52,920 that we were prone to make. 108 00:06:52,920 --> 00:06:55,280 In my hand here, I've got £100. 109 00:06:55,280 --> 00:06:59,560 Kahneman and Tversky found a treasure trove of these puzzles. 110 00:06:59,560 --> 00:07:00,880 Which would you prefer? 111 00:07:00,880 --> 00:07:03,280 They unveiled a catalogue of human error. 112 00:07:04,440 --> 00:07:07,720 Would you rather go to Rome with a free breakfast? 113 00:07:07,720 --> 00:07:10,800 And opened a Pandora's box of mistakes. 114 00:07:10,800 --> 00:07:12,800 A year and a day. 115 00:07:12,800 --> 00:07:14,440 25? 116 00:07:14,440 --> 00:07:17,600 But the really interesting thing about these mistakes is, 117 00:07:17,600 --> 00:07:18,920 they're not accidents. 118 00:07:18,920 --> 00:07:20,160 75? 119 00:07:20,160 --> 00:07:22,160 They have a shape, a structure. 120 00:07:22,160 --> 00:07:23,400 I think Rome. 121 00:07:23,400 --> 00:07:25,200 Skewing our judgment. 122 00:07:25,200 --> 00:07:26,560 20. 123 00:07:26,560 --> 00:07:29,560 What makes them interesting is that they are not random errors. 124 00:07:29,560 --> 00:07:32,520 They are biases, so the difference between a bias 125 00:07:32,520 --> 00:07:36,080 and a random error is that a bias is predictable. 126 00:07:36,080 --> 00:07:38,320 It's a systematic error that is predictable. 127 00:07:40,720 --> 00:07:44,000 Kahneman's puzzles prompt the wrong reply again. 128 00:07:44,000 --> 00:07:45,120 More likely. 129 00:07:45,120 --> 00:07:46,280 And again. 130 00:07:46,280 --> 00:07:48,000 More likely. More likely? 131 00:07:48,000 --> 00:07:49,160 And again. 132 00:07:49,160 --> 00:07:50,280 Probably more likely? 133 00:07:51,280 --> 00:07:55,000 It's a pattern of human error that affects every single one of us. 134 00:07:57,120 --> 00:07:59,400 On their own, they may seem small. 135 00:08:01,040 --> 00:08:04,520 Ah, that seems to be the right drawer. 136 00:08:04,520 --> 00:08:07,800 But by rummaging around in our everyday mistakes...
137 00:08:07,800 --> 00:08:09,440 That's very odd. 138 00:08:09,440 --> 00:08:13,560 ..Kahneman started a revolution in our understanding of human thinking. 139 00:08:15,560 --> 00:08:17,080 A revolution so profound 140 00:08:17,080 --> 00:08:20,480 and far-reaching that he was awarded a Nobel prize. 141 00:08:22,720 --> 00:08:27,840 So if you want to see the medal, that's what it looks like. 142 00:08:30,560 --> 00:08:31,720 That's it. 143 00:08:38,320 --> 00:08:42,000 Psychologists have long strived to pick apart the moments 144 00:08:42,000 --> 00:08:43,720 when people make decisions. 145 00:08:45,680 --> 00:08:48,920 Much of the focus has been on our rational mind, 146 00:08:48,920 --> 00:08:51,240 our capacity for logic. 147 00:08:51,240 --> 00:08:55,080 But Kahneman saw the mind differently. 148 00:08:55,080 --> 00:08:58,920 He saw a much more powerful role for the other side of our minds, 149 00:08:58,920 --> 00:09:00,320 intuition. 150 00:09:02,000 --> 00:09:04,040 And at the heart of human thinking, 151 00:09:04,040 --> 00:09:08,160 there's a conflict between logic and intuition that leads to mistakes. 152 00:09:09,280 --> 00:09:11,800 Kahneman and Tversky started this trend 153 00:09:11,800 --> 00:09:13,760 of seeing the mind differently. 154 00:09:15,520 --> 00:09:17,960 They found these decision-making illusions, 155 00:09:17,960 --> 00:09:20,960 these spots where our intuitions just make us decide 156 00:09:20,960 --> 00:09:23,560 these things that just don't make any sense. 157 00:09:23,560 --> 00:09:26,520 The work of Kahneman and Tversky has really been revolutionary. 158 00:09:29,040 --> 00:09:31,560 It kicked off a flurry of experimentation 159 00:09:31,560 --> 00:09:35,360 and observation to understand the meaning of these mistakes. 160 00:09:37,440 --> 00:09:40,800 People didn't really appreciate, as recently as 40 years ago, 161 00:09:40,800 --> 00:09:45,320 that the mind didn't really work like a computer. 162 00:09:45,320 --> 00:09:48,840 We thought that we were very deliberative, conscious creatures 163 00:09:48,840 --> 00:09:51,440 who weighed up the costs and benefits of action, 164 00:09:51,440 --> 00:09:52,960 just like Mr Spock would do. 165 00:09:56,560 --> 00:10:00,360 By now, it's a fairly coherent body of work 166 00:10:00,360 --> 00:10:01,840 about ways in which 167 00:10:01,840 --> 00:10:07,240 intuition departs from the rules, if you will. 168 00:10:11,480 --> 00:10:13,560 And the body of evidence is growing. 169 00:10:16,200 --> 00:10:19,400 Some of the best clues to the working of our minds come not 170 00:10:19,400 --> 00:10:22,560 when we get things right, but when we get things wrong. 171 00:10:35,600 --> 00:10:38,200 In a corner of this otherwise peaceful campus, 172 00:10:38,200 --> 00:10:42,200 Professor Chris Chabris is about to start a fight. 173 00:10:42,200 --> 00:10:49,360 All right, so what I want you guys to do is stay in this area over here. 174 00:10:49,360 --> 00:10:51,000 The two big guys grab you 175 00:10:51,000 --> 00:10:54,640 and sort of like start pretending to punch you, make some sound effects. 176 00:10:54,640 --> 00:10:56,240 All right, this looks good. 177 00:10:58,120 --> 00:10:59,880 All right, that seemed pretty good to me. 178 00:11:01,640 --> 00:11:03,000 It's part of an experiment 179 00:11:03,000 --> 00:11:06,360 that shows a pretty shocking mistake that any one of us could make.
180 00:11:08,680 --> 00:11:10,160 A mistake where you don't notice 181 00:11:10,160 --> 00:11:12,280 what's happening right in front of your eyes. 182 00:11:19,240 --> 00:11:22,920 As well as a fight, the experiment also involves a chase. 183 00:11:26,320 --> 00:11:29,320 It was inspired by an incident in Boston in 1995, 184 00:11:29,320 --> 00:11:32,000 when a young police officer, Kenny Conley, 185 00:11:32,000 --> 00:11:34,440 was in hot pursuit of a murder suspect. 186 00:11:37,160 --> 00:11:39,320 It turned out that this police officer, 187 00:11:39,320 --> 00:11:41,040 while he was chasing the suspect, 188 00:11:41,040 --> 00:11:43,360 had run right past some other police officers 189 00:11:43,360 --> 00:11:46,240 who were beating up another suspect, which, of course, 190 00:11:46,240 --> 00:11:49,440 police officers are not supposed to do under any circumstances. 191 00:11:50,800 --> 00:11:54,800 When the police tried to investigate this case of police brutality, 192 00:11:54,800 --> 00:11:57,120 he said, "I didn't see anything going on there, 193 00:11:57,120 --> 00:11:59,120 "all I saw was the suspect I was chasing." 194 00:11:59,120 --> 00:12:00,760 And nobody could believe this 195 00:12:00,760 --> 00:12:04,200 and he was prosecuted for perjury and obstruction of justice. 196 00:12:05,760 --> 00:12:08,240 Everyone was convinced that Conley was lying. 197 00:12:08,240 --> 00:12:11,360 We don't want you to be, like, closer than about... 198 00:12:11,360 --> 00:12:13,840 Everyone, that is, apart from Chris Chabris. 199 00:12:17,160 --> 00:12:19,720 He wondered if our ability to pay attention 200 00:12:19,720 --> 00:12:23,400 is so limited that any one of us could run past a vicious fight 201 00:12:23,400 --> 00:12:24,720 without even noticing. 202 00:12:27,360 --> 00:12:29,440 And it's something he's putting to the test. 203 00:12:29,440 --> 00:12:32,760 Now, when you see someone jogging across the footbridge, 204 00:12:32,760 --> 00:12:34,280 then you should get started. 205 00:12:34,280 --> 00:12:35,440 Jackie, you can go. 206 00:12:40,000 --> 00:12:41,280 In the experiment, 207 00:12:41,280 --> 00:12:44,600 the subjects are asked to focus carefully on a cognitive task. 208 00:12:45,600 --> 00:12:47,280 They must count the number of times 209 00:12:47,280 --> 00:12:49,200 the runner taps her head with each hand. 210 00:12:56,320 --> 00:12:58,680 Would they, like the Boston police officer, 211 00:12:58,680 --> 00:13:00,880 be so blinded by their limited attention 212 00:13:00,880 --> 00:13:03,720 that they would completely fail to notice the fight? 213 00:13:07,440 --> 00:13:10,880 About 45 seconds or a minute into the run, there was the fight. 214 00:13:10,880 --> 00:13:14,760 And they could actually see the fight from a ways away, 215 00:13:14,760 --> 00:13:17,720 and it was about 20 feet away from them when they got closest to them. 216 00:13:20,240 --> 00:13:22,600 The fight is right in their field of view, 217 00:13:22,600 --> 00:13:26,240 and at least partially visible from as far back as the footbridge. 218 00:13:34,240 --> 00:13:37,240 It seems incredible that anyone would fail to notice something 219 00:13:37,240 --> 00:13:38,640 so apparently obvious. 220 00:13:42,840 --> 00:13:44,680 They completed the three-minute course 221 00:13:44,680 --> 00:13:47,120 and then we said, "Did you notice anything unusual?" 222 00:13:47,120 --> 00:13:48,720 Yes. 223 00:13:48,720 --> 00:13:50,880 What was it? It was a fight. 
224 00:13:50,880 --> 00:13:53,800 Sometimes they would have noticed the fight and they would say, 225 00:13:53,800 --> 00:13:57,080 "Yeah, I saw some guys fighting", but a large percentage of people said 226 00:13:57,080 --> 00:13:59,040 "We didn't see anything unusual at all." 227 00:13:59,040 --> 00:14:00,920 And when we asked them specifically 228 00:14:00,920 --> 00:14:03,560 about whether they saw anybody fighting, they still said no. 229 00:14:10,280 --> 00:14:12,920 In fact, nearly 50% of people in the experiment 230 00:14:12,920 --> 00:14:15,160 completely failed to notice the fight. 231 00:14:17,760 --> 00:14:20,040 Did you see anything unusual during the run? 232 00:14:20,040 --> 00:14:21,560 No. 233 00:14:21,560 --> 00:14:23,920 OK. Did you see some people fighting? 234 00:14:23,920 --> 00:14:25,000 No. 235 00:14:28,960 --> 00:14:31,080 We did it at night time and we did it in the daylight. 236 00:14:32,680 --> 00:14:34,960 Even when we did it in daylight, 237 00:14:34,960 --> 00:14:38,000 many people ran right past the fight and didn't notice it at all. 238 00:14:41,080 --> 00:14:43,240 Did you see anything unusual during the run? 239 00:14:43,240 --> 00:14:45,040 No, not really. 240 00:14:45,040 --> 00:14:47,120 OK, did you see some people fighting? 241 00:14:47,120 --> 00:14:48,280 No. 242 00:14:48,280 --> 00:14:50,120 You really didn't see anyone fighting? No. 243 00:14:50,120 --> 00:14:52,360 Does it surprise you that you would have missed that? 244 00:14:52,360 --> 00:14:55,040 They were about 20 feet off the path. 245 00:14:55,040 --> 00:14:56,880 Oh! You ran right past them. 246 00:14:56,880 --> 00:14:58,560 Completely missed that, then. OK. 247 00:15:00,400 --> 00:15:02,400 Maybe what happened to Conley was, 248 00:15:02,400 --> 00:15:04,560 when you're really paying attention to one thing 249 00:15:04,560 --> 00:15:07,160 and focusing a lot of mental energy on it, you can miss things 250 00:15:07,160 --> 00:15:10,280 that other people are going to think are completely obvious, and in fact, 251 00:15:10,280 --> 00:15:12,480 that's what the jurors said after Conley's trial. 252 00:15:12,480 --> 00:15:15,600 They said "We couldn't believe that he could miss something like that". 253 00:15:15,600 --> 00:15:17,800 It didn't make any sense. He had to have been lying. 254 00:15:20,920 --> 00:15:24,680 It's an unsettling phenomenon called inattentional blindness 255 00:15:24,680 --> 00:15:26,080 that can affect us all. 256 00:15:27,240 --> 00:15:29,000 Some people have said things like 257 00:15:29,000 --> 00:15:30,800 "This shatters my faith in my own mind", 258 00:15:30,800 --> 00:15:32,880 or, "Now I don't know what to believe", 259 00:15:32,880 --> 00:15:35,240 or, "I'm going to be confused from now on." 260 00:15:35,240 --> 00:15:38,200 But I'm not sure that that feeling really stays with them very long. 261 00:15:38,200 --> 00:15:40,520 They are going to go out from the experiment, you know, 262 00:15:40,520 --> 00:15:43,080 walk to the next place they're going or something like that 263 00:15:43,080 --> 00:15:45,840 and they're going to have just as much inattentional blindness 264 00:15:45,840 --> 00:15:48,960 when they're walking down the street that afternoon as they did before. 265 00:15:53,000 --> 00:15:56,720 This experiment reveals a powerful quandary about our minds. 266 00:15:58,400 --> 00:16:00,440 We glide through the world blissfully unaware 267 00:16:00,440 --> 00:16:04,880 of most of what we do and how little we really know our minds.
268 00:16:07,360 --> 00:16:08,560 For all its brilliance, 269 00:16:08,560 --> 00:16:12,720 the part of our mind we call ourselves is extremely limited. 270 00:16:15,000 --> 00:16:17,360 So how do we manage to navigate our way through 271 00:16:17,360 --> 00:16:19,280 the complexity of daily life? 272 00:16:31,000 --> 00:16:32,320 Every day, each one of us 273 00:16:32,320 --> 00:16:35,960 makes somewhere between two and 10,000 decisions. 274 00:16:40,560 --> 00:16:42,840 When you think about our daily lives, 275 00:16:42,840 --> 00:16:45,640 it's really a long, long sequence of decisions. 276 00:16:47,760 --> 00:16:50,320 We make decisions probably at a frequency 277 00:16:50,320 --> 00:16:52,880 that is close to the frequency we breathe. 278 00:16:52,880 --> 00:16:56,200 Every minute, every second, you're deciding where to move your legs, 279 00:16:56,200 --> 00:16:58,680 and where to move your eyes, and where to move your limbs, 280 00:16:58,680 --> 00:17:00,000 and when you're eating a meal, 281 00:17:00,000 --> 00:17:01,680 you're making all kinds of decisions. 282 00:17:03,320 --> 00:17:05,600 And yet the vast majority of these decisions, 283 00:17:05,600 --> 00:17:07,440 we make without even realising. 284 00:17:13,720 --> 00:17:17,200 It was Danny Kahneman's insight that we have two systems 285 00:17:17,200 --> 00:17:19,040 in the mind for making decisions. 286 00:17:21,120 --> 00:17:23,040 Two ways of thinking: 287 00:17:23,040 --> 00:17:25,480 fast and slow. 288 00:17:30,360 --> 00:17:33,120 You know, our mind has really two ways of operating, 289 00:17:33,120 --> 00:17:39,080 and one is sort of fast-thinking, an automatic effortless mode, 290 00:17:39,080 --> 00:17:41,560 and that's the one we're in most of the time. 291 00:17:44,360 --> 00:17:48,400 This fast, automatic mode of thinking, he called System 1. 292 00:17:51,120 --> 00:17:54,760 It's powerful, effortless and responsible for most of what we do. 293 00:17:56,560 --> 00:18:00,680 And System 1 is, you know, that's what happens most of the time. 294 00:18:00,680 --> 00:18:05,400 You're there, the world around you provides all kinds of stimuli 295 00:18:05,400 --> 00:18:07,320 and you respond to them. 296 00:18:07,320 --> 00:18:11,120 Everything that you see and that you understand, you know, 297 00:18:11,120 --> 00:18:14,120 this is a tree, that's a helicopter back there, 298 00:18:14,120 --> 00:18:16,120 that's the Statue of Liberty. 299 00:18:16,120 --> 00:18:20,320 All of this visual perception, all of this comes through System 1. 300 00:18:21,520 --> 00:18:26,400 The other mode is slow, deliberate, logical and rational. 301 00:18:27,440 --> 00:18:32,440 This is System 2 and it's the bit you think of as you, 302 00:18:32,440 --> 00:18:34,560 the voice in your head. 303 00:18:34,560 --> 00:18:36,680 The simplest example of the two systems is 304 00:18:36,680 --> 00:18:42,160 really two plus two is on one side, and 17 times 24 is on the other. 305 00:18:42,160 --> 00:18:43,520 What is two plus two? 306 00:18:43,520 --> 00:18:45,000 Four. Four. Four. 307 00:18:45,000 --> 00:18:49,840 Fast System 1 is always in gear, producing instant answers. 308 00:18:49,840 --> 00:18:51,720 And what's two plus two? Four. 309 00:18:51,720 --> 00:18:53,360 A number comes to your mind. 310 00:18:53,360 --> 00:18:55,120 Four. Four. Four. 311 00:18:55,120 --> 00:18:59,200 It is automatic. You do not intend for it to happen. 312 00:18:59,200 --> 00:19:01,920 It just happens to you. It's almost like a reflex. 
313 00:19:01,920 --> 00:19:04,520 And what's 22 times 17? 314 00:19:04,520 --> 00:19:05,760 That's a good one. 315 00:19:10,160 --> 00:19:13,440 But when we have to pay attention to a tricky problem, 316 00:19:13,440 --> 00:19:16,560 we engage slow-but-logical System 2. 317 00:19:17,760 --> 00:19:19,160 If you can do that in your head, 318 00:19:19,160 --> 00:19:22,280 you'll have to follow some rules and to do it sequentially. 319 00:19:24,360 --> 00:19:26,680 And that is not automatic at all. 320 00:19:26,680 --> 00:19:30,760 That involves work, it involves effort, it involves concentration. 321 00:19:30,760 --> 00:19:32,240 22 times 17? 322 00:19:36,520 --> 00:19:38,640 There will be physiological symptoms. 323 00:19:38,640 --> 00:19:41,560 Your heart rate will accelerate, your pupils will dilate, 324 00:19:41,560 --> 00:19:45,760 so many changes will occur while you're performing this computation. 325 00:19:46,760 --> 00:19:49,920 Three's...oh, God! 326 00:19:52,000 --> 00:19:53,240 That's, so... 327 00:19:53,240 --> 00:19:56,800 220 and seven times 22 is... 328 00:20:00,160 --> 00:20:02,800 ..54. 374? 329 00:20:02,800 --> 00:20:06,440 OK. And can I get you to just walk with me for a second? OK. 330 00:20:06,440 --> 00:20:07,600 Who's the current...? 331 00:20:07,600 --> 00:20:13,160 System 2 may be clever, but it's also slow, limited and lazy. 332 00:20:13,160 --> 00:20:16,920 I live in Berkeley during summers and I walk a lot. 333 00:20:16,920 --> 00:20:19,680 And when I walk very fast, I cannot think. 334 00:20:19,680 --> 00:20:24,160 Can I get you to count backwards from 100 by 7? Sure. 335 00:20:24,160 --> 00:20:28,480 193, 80... It's hard when you're walking. 336 00:20:28,480 --> 00:20:31,640 It takes up, interestingly enough, 337 00:20:31,640 --> 00:20:36,480 the same kind of executive function as...as thinking. 338 00:20:36,480 --> 00:20:39,640 Forty...four? 339 00:20:41,040 --> 00:20:46,280 If you are expected to do something that demands a lot of effort, 340 00:20:46,280 --> 00:20:48,320 you will stop even walking. 341 00:20:48,320 --> 00:20:52,040 Eighty...um...six? 342 00:20:52,040 --> 00:20:53,240 51. 343 00:20:55,120 --> 00:20:56,320 Uh.... 344 00:20:57,440 --> 00:20:59,120 16, 345 00:20:59,120 --> 00:21:00,600 9, 2? 346 00:21:03,320 --> 00:21:06,800 Everything that you're aware of in your own mind 347 00:21:06,800 --> 00:21:09,960 is part of this slow, deliberative System 2. 348 00:21:09,960 --> 00:21:13,600 As far as you're concerned, it is the star of the show. 349 00:21:14,920 --> 00:21:19,200 Actually, I describe System 2 as not the star. 350 00:21:19,200 --> 00:21:23,760 I describe it as, as a minor character who thinks he is the star, 351 00:21:23,760 --> 00:21:25,120 because, in fact, 352 00:21:25,120 --> 00:21:28,360 most of what goes on in our mind is automatic. 353 00:21:28,360 --> 00:21:31,320 You know, it's in the domain that I call System 1. 354 00:21:31,320 --> 00:21:34,880 System 1 is an old, evolved bit of our brain, and it's remarkable. 355 00:21:34,880 --> 00:21:38,560 We couldn't survive without it because System 2 would explode. 356 00:21:38,560 --> 00:21:41,440 If Mr Spock had to make every decision for us, 357 00:21:41,440 --> 00:21:45,640 it would be very slow and effortful and our heads would explode. 358 00:21:45,640 --> 00:21:48,720 And this vast, hidden domain is responsible 359 00:21:48,720 --> 00:21:51,960 for far more than you would possibly believe.
360 00:21:51,960 --> 00:21:55,000 Having an opinion, you have an opinion immediately, 361 00:21:55,000 --> 00:21:58,200 whether you like it or not, whether you like something or not, 362 00:21:58,200 --> 00:22:02,040 whether you're for something or not, liking someone or not liking them. 363 00:22:02,040 --> 00:22:04,720 That, quite often, is something you have no control over. 364 00:22:04,720 --> 00:22:08,320 Later, when you're asked for reasons, you will invent reasons. 365 00:22:08,320 --> 00:22:11,560 And a lot of what System 2 does is, it provides reason. 366 00:22:11,560 --> 00:22:17,600 It provides rationalisations which are not necessarily the true reasons 367 00:22:17,600 --> 00:22:22,120 for our beliefs and our emotions and our intentions and what we do. 368 00:22:28,160 --> 00:22:31,960 You have two systems of thinking that steer you through life... 369 00:22:34,320 --> 00:22:38,600 Fast, intuitive System 1 that is incredibly powerful 370 00:22:38,600 --> 00:22:40,240 and does most of the driving. 371 00:22:42,760 --> 00:22:45,680 And slow, logical System 2 372 00:22:45,680 --> 00:22:48,960 that is clever, but a little lazy. 373 00:22:48,960 --> 00:22:52,000 Trouble is, there's a bit of a battle between them 374 00:22:52,000 --> 00:22:54,440 as to which one is driving your decisions. 375 00:23:02,920 --> 00:23:06,120 And this is where the mistakes creep in, 376 00:23:06,120 --> 00:23:09,240 when we use the wrong system to make a decision. 377 00:23:09,240 --> 00:23:10,840 Just going to ask you a few questions. 378 00:23:10,840 --> 00:23:12,400 We're interested in what you think. 379 00:23:12,400 --> 00:23:15,280 This question concerns this nice bottle of champagne I have here. 380 00:23:15,280 --> 00:23:19,400 Millesime 2005, it's a good year, genuinely nice, vintage bottle. 381 00:23:20,960 --> 00:23:25,400 These people think they're about to use slow, sensible System 2 382 00:23:25,400 --> 00:23:28,200 to make a rational decision about how much they would pay 383 00:23:28,200 --> 00:23:30,600 for a bottle of champagne. 384 00:23:30,600 --> 00:23:33,480 But what they don't know is that their decision 385 00:23:33,480 --> 00:23:35,400 will actually be taken totally 386 00:23:35,400 --> 00:23:40,360 without their knowledge by their hidden, fast auto-pilot, System 1. 387 00:23:42,520 --> 00:23:45,080 And with the help of a bag of ping-pong balls, 388 00:23:45,080 --> 00:23:46,960 we can influence that decision. 389 00:23:48,400 --> 00:23:52,160 I've got a set of numbered balls here from 1 to 100 in this bag. 390 00:23:52,160 --> 00:23:55,320 I'd like you to reach in and draw one out at random for me, 391 00:23:55,320 --> 00:23:56,520 if you would. 392 00:23:56,520 --> 00:23:58,400 First, they've got to choose a ball. 393 00:23:58,400 --> 00:24:00,720 The number says ten. Ten. Ten. Ten. 394 00:24:00,720 --> 00:24:04,920 They think it's a random number, but in fact, it's rigged. 395 00:24:04,920 --> 00:24:07,600 All the balls are marked with the low number ten. 396 00:24:07,600 --> 00:24:13,120 This experiment is all about the thoughtless creation of habits. 397 00:24:13,120 --> 00:24:17,440 It's about how we make one decision, and then other decisions follow it 398 00:24:17,440 --> 00:24:20,600 as if the first decision was actually meaningful. 399 00:24:22,520 --> 00:24:24,320 What we do is purposefully, 400 00:24:24,320 --> 00:24:28,320 we give people a first decision that is clearly meaningless. 401 00:24:28,320 --> 00:24:30,920 Ten. Ten, OK. 
Would you be willing to pay ten pounds 402 00:24:30,920 --> 00:24:33,200 for this nice bottle of vintage champagne? 403 00:24:33,200 --> 00:24:34,640 I would, yes. 404 00:24:34,640 --> 00:24:37,440 No. Yeah, I guess. OK. 405 00:24:37,440 --> 00:24:40,120 This first decision is meaningless, 406 00:24:40,120 --> 00:24:43,360 based as it is on a seemingly random number. 407 00:24:44,440 --> 00:24:49,920 But what it does do is lodge the low number ten in their heads. 408 00:24:49,920 --> 00:24:53,000 Would you buy it for ten pounds? Yes, I would. Yes. You would? OK. 409 00:24:54,880 --> 00:24:57,280 Now for the real question where we ask them 410 00:24:57,280 --> 00:25:00,480 how much they'd actually pay for the champagne. 411 00:25:00,480 --> 00:25:03,600 What's the maximum amount you think you'd be willing to pay? 412 00:25:03,600 --> 00:25:05,240 20? OK. 413 00:25:05,240 --> 00:25:07,080 Seven pounds. Seven pounds, OK. 414 00:25:07,080 --> 00:25:08,720 Probably ten pound. 415 00:25:08,720 --> 00:25:10,920 A range of fairly low offers. 416 00:25:12,400 --> 00:25:15,400 But what happens if we prime people with a much higher number, 417 00:25:15,400 --> 00:25:17,640 65 instead of 10? 418 00:25:20,160 --> 00:25:22,680 What does that one say? 65. 65, OK. 419 00:25:22,680 --> 00:25:23,800 65. OK. 420 00:25:23,800 --> 00:25:25,120 It says 65. 421 00:25:27,560 --> 00:25:31,120 How will this affect the price people are prepared to pay? 422 00:25:31,120 --> 00:25:33,520 What's the maximum you would be willing to pay for this 423 00:25:33,520 --> 00:25:35,000 bottle of champagne? 424 00:25:35,000 --> 00:25:37,760 40? £45. 45, OK. 425 00:25:37,760 --> 00:25:39,000 50. 426 00:25:39,000 --> 00:25:41,400 40 quid? OK. 427 00:25:41,400 --> 00:25:45,840 £50? £50? Yeah, I'd pay between 50 and £80. 428 00:25:45,840 --> 00:25:48,160 Between 50 and 80? Yeah. 429 00:25:48,160 --> 00:25:50,400 Logic has gone out of the window. 430 00:25:52,160 --> 00:25:55,800 The price people are prepared to pay is influenced by nothing more 431 00:25:55,800 --> 00:25:58,200 than a number written on a ping-pong ball. 432 00:26:01,360 --> 00:26:04,040 It suggests that when we can't make decisions, 433 00:26:04,040 --> 00:26:06,920 we don't evaluate the decision in itself. 434 00:26:06,920 --> 00:26:08,320 Instead what we do is, 435 00:26:08,320 --> 00:26:12,320 we try to look at other similar decisions we've made in the past 436 00:26:12,320 --> 00:26:16,000 and we take those decisions as if they were good decisions 437 00:26:16,000 --> 00:26:17,600 and we say to ourselves, 438 00:26:17,600 --> 00:26:19,800 "Oh, I've made this decision before. 439 00:26:19,800 --> 00:26:23,040 "Clearly, I don't need to go ahead and solve this decision. 440 00:26:23,040 --> 00:26:25,600 "Let me just use what I did before and repeat it, 441 00:26:25,600 --> 00:26:27,720 "maybe with some modifications." 442 00:26:29,840 --> 00:26:32,720 This anchoring effect comes from the conflict 443 00:26:32,720 --> 00:26:35,080 between our two systems of thinking. 444 00:26:37,760 --> 00:26:41,760 Fast System 1 is a master of taking short cuts 445 00:26:41,760 --> 00:26:45,160 to bring about the quickest possible decision. 446 00:26:45,160 --> 00:26:48,000 What happens is, they ask you a question 447 00:26:48,000 --> 00:26:50,120 and if the question is difficult 448 00:26:50,120 --> 00:26:54,920 but there is a related question that is a lot...that is somewhat simpler, 449 00:26:54,920 --> 00:26:58,520 you're just going to answer the other question and... 
450 00:26:58,520 --> 00:27:00,160 and not even notice. 451 00:27:00,160 --> 00:27:02,840 So the system does all kinds of short cuts to feed us 452 00:27:02,840 --> 00:27:06,000 the information in a faster way and we can make actions, 453 00:27:06,000 --> 00:27:08,840 and the system is accepting some mistakes. 454 00:27:11,280 --> 00:27:13,960 We make decisions using fast System 1 455 00:27:13,960 --> 00:27:17,080 when we really should be using slow System 2. 456 00:27:18,680 --> 00:27:22,080 And this is why we make the mistakes we do, 457 00:27:22,080 --> 00:27:26,080 systematic mistakes known as cognitive biases. 458 00:27:28,720 --> 00:27:30,280 Nice day. 459 00:27:36,640 --> 00:27:41,600 Since Kahneman first began investigating the glitches in our thinking, 460 00:27:41,600 --> 00:27:45,280 more than 150 cognitive biases have been identified. 461 00:27:46,480 --> 00:27:49,160 We are riddled with these systematic mistakes 462 00:27:49,160 --> 00:27:52,360 and they affect every aspect of our daily lives. 463 00:27:54,800 --> 00:27:57,480 Wikipedia has a very big list of biases 464 00:27:57,480 --> 00:28:00,640 and we are finding new ones all the time. 465 00:28:00,640 --> 00:28:03,040 One of the biases that I think is the most important 466 00:28:03,040 --> 00:28:05,000 is what's called the present-focus bias. 467 00:28:05,000 --> 00:28:07,080 It's the fact that we focus on now 468 00:28:07,080 --> 00:28:09,560 and don't think very much about the future. 469 00:28:09,560 --> 00:28:13,320 That's the bias that causes things like overeating and smoking, 470 00:28:13,320 --> 00:28:16,920 and texting and driving, and having unprotected sex. 471 00:28:16,920 --> 00:28:19,880 Another one is called the halo effect, 472 00:28:19,880 --> 00:28:21,680 and this is the idea 473 00:28:21,680 --> 00:28:25,440 that if you like somebody or an organisation, 474 00:28:25,440 --> 00:28:29,120 you're biased to think that all of its aspects are good, 475 00:28:29,120 --> 00:28:30,600 that everything is good about it. 476 00:28:30,600 --> 00:28:33,120 If you dislike it, everything is bad. 477 00:28:33,120 --> 00:28:35,280 People really are quite uncomfortable, you know, 478 00:28:35,280 --> 00:28:39,320 by the idea that Hitler loved children, you know. 479 00:28:39,320 --> 00:28:40,720 He did. 480 00:28:40,720 --> 00:28:43,240 Now, that doesn't make him a good person, 481 00:28:43,240 --> 00:28:47,560 but we feel uncomfortable to see an attractive trait 482 00:28:47,560 --> 00:28:51,480 in a person that we consider, you know, the epitome of evil. 483 00:28:51,480 --> 00:28:54,400 We are prone to think that what we like is all good 484 00:28:54,400 --> 00:28:56,240 and what we dislike is all bad. 485 00:28:56,240 --> 00:28:57,920 That's a bias. 486 00:28:57,920 --> 00:29:01,240 Another particular favourite of mine is the bias to get attached 487 00:29:01,240 --> 00:29:04,440 to things that we ourselves have created. 488 00:29:04,440 --> 00:29:06,080 We call it the IKEA effect. 489 00:29:06,080 --> 00:29:10,120 Well, you've got loss aversion, risk aversion, present bias. 490 00:29:10,120 --> 00:29:13,120 Spotlight effect, and the spotlight effect is the idea 491 00:29:13,120 --> 00:29:16,240 that we think that other people pay a lot of attention to us 492 00:29:16,240 --> 00:29:17,600 when in fact, they don't. 493 00:29:17,600 --> 00:29:20,920 Confirmation bias. Overconfidence is a big one. 494 00:29:20,920 --> 00:29:23,200 But what's clear is that there's lots of them.
495 00:29:23,200 --> 00:29:25,640 There's lots of ways for us to get things wrong. 496 00:29:25,640 --> 00:29:27,680 You know, there's one way to do things right 497 00:29:27,680 --> 00:29:31,440 and many ways to do things wrong, and we're capable of many of them. 498 00:29:37,280 --> 00:29:41,040 These biases explain so many things that we get wrong. 499 00:29:42,880 --> 00:29:45,800 Our impulsive spending. 500 00:29:45,800 --> 00:29:48,360 Trusting the wrong people. 501 00:29:48,360 --> 00:29:51,520 Not seeing the other person's point of view. 502 00:29:51,520 --> 00:29:53,480 Succumbing to temptation. 503 00:29:56,760 --> 00:29:59,080 We are so riddled with these biases, 504 00:29:59,080 --> 00:30:02,560 it's hard to believe we ever make a rational decision. 505 00:30:10,200 --> 00:30:13,960 But it's not just our everyday decisions that are affected. 506 00:30:28,840 --> 00:30:30,920 What happens if you're an expert, 507 00:30:30,920 --> 00:30:34,760 trained in making decisions that are a matter of life and death? 508 00:30:38,240 --> 00:30:41,600 Are you still destined to make these systematic mistakes? 509 00:30:44,600 --> 00:30:47,600 On the outskirts of Washington DC, 510 00:30:47,600 --> 00:30:52,160 Horizon has been granted access to spy on the spooks. 511 00:30:56,640 --> 00:31:00,520 Welcome to Analytical Exercise Number Four. 512 00:31:01,800 --> 00:31:04,360 Former intelligence analyst Donald Kretz 513 00:31:04,360 --> 00:31:07,200 is running an ultra-realistic spy game. 514 00:31:08,840 --> 00:31:13,480 This exercise will take place in the fictitious city of Vastopolis. 515 00:31:15,360 --> 00:31:19,440 Taking part are a mixture of trained intelligence analysts 516 00:31:19,440 --> 00:31:20,720 and some novices. 517 00:31:23,120 --> 00:31:25,040 Due to an emerging threat... 518 00:31:26,680 --> 00:31:30,040 ..a terrorism taskforce has been stood up. 519 00:31:30,040 --> 00:31:31,960 I will be the terrorism taskforce lead 520 00:31:31,960 --> 00:31:36,120 and I have recruited all of you to be our terrorism analysts. 521 00:31:39,800 --> 00:31:41,960 The challenge facing the analysts 522 00:31:41,960 --> 00:31:45,000 is to thwart a terrorist threat against a US city. 523 00:31:47,800 --> 00:31:50,600 The threat at this point has not been determined. 524 00:31:50,600 --> 00:31:53,400 It's up to you to figure out the type of terrorism... 525 00:31:54,600 --> 00:31:56,880 ..and who's responsible for planning it. 526 00:31:59,200 --> 00:32:02,080 The analysts face a number of tasks. 527 00:32:02,080 --> 00:32:06,240 They must first investigate any groups who may pose a threat. 528 00:32:06,240 --> 00:32:08,160 Your task is to write a report. 529 00:32:08,160 --> 00:32:11,880 The subject in this case is the Network of Dread. 530 00:32:11,880 --> 00:32:14,960 The mayor has asked for this 15 minutes from now. 531 00:32:17,960 --> 00:32:21,240 Just like in the real world, the analysts have access 532 00:32:21,240 --> 00:32:25,560 to a huge amount of data streaming in, from government agencies, 533 00:32:25,560 --> 00:32:29,880 social media, mobile phones and emergency services. 534 00:32:33,600 --> 00:32:35,760 The Network of Dread turns out to be 535 00:32:35,760 --> 00:32:38,520 a well-known international terror group. 536 00:32:38,520 --> 00:32:41,800 They have the track record, the capability 537 00:32:41,800 --> 00:32:45,120 and the personnel to carry out an attack. 
538 00:32:45,120 --> 00:32:46,640 The scenario that's emerging 539 00:32:46,640 --> 00:32:48,640 is a bio-terror event, 540 00:32:48,640 --> 00:32:51,280 meaning it's a biological terrorism attack 541 00:32:51,280 --> 00:32:53,920 that's going to take place against the city. 542 00:32:56,760 --> 00:33:00,360 If there is an emerging threat, they are the likely candidate. 543 00:33:00,360 --> 00:33:02,560 We need to move onto the next task. 544 00:33:04,880 --> 00:33:06,600 It's now 9th April. 545 00:33:06,600 --> 00:33:09,240 This is another request for information, 546 00:33:09,240 --> 00:33:13,920 this time on something or someone called the Masters of Chaos. 547 00:33:16,960 --> 00:33:21,440 The Masters of Chaos are a group of cyber-hackers, 548 00:33:21,440 --> 00:33:24,960 a local bunch of misfits with no history of violence. 549 00:33:27,240 --> 00:33:31,880 And while the analysts continue to sift through the incoming data, 550 00:33:31,880 --> 00:33:36,040 behind the scenes, Kretz is watching their every move. 551 00:33:36,040 --> 00:33:39,520 In this room, we're able to monitor what the analysts are doing 552 00:33:39,520 --> 00:33:41,440 throughout the entire exercise. 553 00:33:42,840 --> 00:33:44,480 We have set up a knowledge base 554 00:33:44,480 --> 00:33:46,560 into which we have been inserting data 555 00:33:46,560 --> 00:33:48,440 throughout the course of the day. 556 00:33:48,440 --> 00:33:50,640 Some of them are related to our terrorist threat, 557 00:33:50,640 --> 00:33:52,000 many of them are not. 558 00:33:53,440 --> 00:33:56,320 Amidst the wealth of data on the known terror group, 559 00:33:56,320 --> 00:33:58,480 there's also evidence coming in 560 00:33:58,480 --> 00:34:00,920 of a theft at a university biology lab 561 00:34:00,920 --> 00:34:05,320 and someone has hacked into the computers of a local freight firm. 562 00:34:06,520 --> 00:34:10,560 Each of these messages represents, essentially, a piece of the puzzle, 563 00:34:10,560 --> 00:34:14,520 but it's a puzzle that you don't have the box top to, 564 00:34:14,520 --> 00:34:17,720 so you don't have the picture in advance, 565 00:34:17,720 --> 00:34:20,800 so you don't know what pieces go where. 566 00:34:20,800 --> 00:34:23,760 Furthermore, what we have is a bunch of puzzle pieces that don't 567 00:34:23,760 --> 00:34:25,200 even go with this puzzle. 568 00:34:27,680 --> 00:34:31,040 The exercise is part of a series of experiments to investigate 569 00:34:31,040 --> 00:34:33,520 whether expert intelligence agents 570 00:34:33,520 --> 00:34:37,840 are just as prone to mistakes from cognitive bias as the rest of us, 571 00:34:37,840 --> 00:34:43,280 or whether their training and expertise makes them immune. 572 00:34:44,440 --> 00:34:47,760 I have a sort of insider's point of view of this problem. 573 00:34:47,760 --> 00:34:51,240 I worked a number of years as an intelligence analyst. 574 00:34:51,240 --> 00:34:53,880 The stakes are incredibly high. 575 00:34:53,880 --> 00:34:57,360 Mistakes can often be life and death. 576 00:35:00,360 --> 00:35:02,560 We roll ahead now. The date is 21st May. 577 00:35:04,840 --> 00:35:07,320 If the analysts are able to think rationally, 578 00:35:07,320 --> 00:35:09,520 they should be able to solve the puzzle. 579 00:35:10,840 --> 00:35:15,320 But the danger is, they will fall into the trap set by Kretz 580 00:35:15,320 --> 00:35:18,440 and only pay attention to the established terror group, 581 00:35:18,440 --> 00:35:20,200 the Network of Dread. 
582 00:35:21,520 --> 00:35:27,240 Their judgment may be clouded by a bias called confirmation bias. 583 00:35:27,240 --> 00:35:30,640 Confirmation bias is the most prevalent bias of all, 584 00:35:30,640 --> 00:35:33,120 and it's where we tend to search for information 585 00:35:33,120 --> 00:35:35,240 that supports what we already believe. 586 00:35:37,480 --> 00:35:39,880 Confirmation bias can easily lead people 587 00:35:39,880 --> 00:35:42,560 to ignore the evidence in front of their eyes. 588 00:35:44,200 --> 00:35:47,040 And Kretz is able to monitor if the bias kicks in. 589 00:35:48,440 --> 00:35:51,200 We still see that they're searching for Network of Dread. 590 00:35:51,200 --> 00:35:55,160 That's an indication that we may have a confirmation bias operating. 591 00:35:57,040 --> 00:35:59,600 The Network of Dread are the big guys - 592 00:35:59,600 --> 00:36:04,360 they've done it before, so you would expect they'd do it again. 593 00:36:04,360 --> 00:36:07,240 And I think we're starting to see some biases here. 594 00:36:08,480 --> 00:36:11,640 Analysts desperately want to get to the correct answer, 595 00:36:11,640 --> 00:36:15,160 but they're affected by the same biases as the rest of us. 596 00:36:15,160 --> 00:36:19,480 So far, most of our analysts seem to believe 597 00:36:19,480 --> 00:36:24,000 that the Network of Dread is responsible for planning this attack, 598 00:36:24,000 --> 00:36:26,080 and that is completely wrong. 599 00:36:30,600 --> 00:36:31,680 How are we doing? 600 00:36:33,480 --> 00:36:36,640 It's time for the analysts to put themselves on the line 601 00:36:36,640 --> 00:36:40,120 and decide who the terrorists are and what they're planning. 602 00:36:42,280 --> 00:36:43,560 So what do you think? 603 00:36:43,560 --> 00:36:47,080 It was a bio-terrorist attack. 604 00:36:47,080 --> 00:36:49,320 I had a different theory. What's your theory? 605 00:36:49,320 --> 00:36:51,240 Cos I may be missing something here, too. 606 00:36:51,240 --> 00:36:54,640 They know that the Network of Dread is a terrorist group. 607 00:36:54,640 --> 00:36:58,600 They know that the Masters of Chaos is a cyber-hacking group. 608 00:36:58,600 --> 00:37:01,760 Either to the new factory or in the water supply. 609 00:37:01,760 --> 00:37:05,080 Lots of dead fish floating up in the river. 610 00:37:06,280 --> 00:37:09,960 The question is, did any of the analysts manage to dig out 611 00:37:09,960 --> 00:37:12,960 the relevant clues and find the true threat? 612 00:37:14,160 --> 00:37:18,440 In this case, the actual threat is due to the cyber-group, 613 00:37:18,440 --> 00:37:20,160 the Masters of Chaos, 614 00:37:20,160 --> 00:37:23,600 who become increasingly radicalised throughout the scenario 615 00:37:23,600 --> 00:37:27,120 and decide to take out their anger on society, essentially. 616 00:37:27,120 --> 00:37:31,560 So were the analysts convinced that they'd switched from cyber-crime to bio-terrorism? 617 00:37:31,560 --> 00:37:34,720 Or did they succumb to confirmation bias 618 00:37:34,720 --> 00:37:38,560 and simply pin the blame on the usual suspects? 619 00:37:38,560 --> 00:37:40,040 Will they make that connection? 620 00:37:40,040 --> 00:37:43,520 Will they process that evidence and assess it accordingly, 621 00:37:43,520 --> 00:37:46,480 or will their confirmation bias drive them 622 00:37:46,480 --> 00:37:50,000 to believe that it's a more traditional type of terrorist group? 623 00:37:50,000 --> 00:37:53,520 I believe that the Masters of Chaos are actually the ones behind it.
624 00:37:53,520 --> 00:37:56,360 It's either a threat or not a threat, but the Network of Dread? 625 00:37:56,360 --> 00:37:59,800 And time's up. Please go ahead and save those reports. 626 00:38:02,080 --> 00:38:03,880 At the end of the exercise, 627 00:38:03,880 --> 00:38:08,240 Kretz reveals the true identity of the terrorists. 628 00:38:08,240 --> 00:38:10,640 We have a priority message from City Hall. 629 00:38:12,200 --> 00:38:15,160 The terrorist attack was thwarted, 630 00:38:15,160 --> 00:38:20,040 the planned bio-terrorist attack by the Masters of Chaos 631 00:38:20,040 --> 00:38:22,600 against Vastopolis was thwarted. 632 00:38:22,600 --> 00:38:25,560 The mayor expresses his thanks for a job well done. 633 00:38:25,560 --> 00:38:29,400 Show of hands, who...who got it? 634 00:38:30,840 --> 00:38:32,600 Yeah. 635 00:38:32,600 --> 00:38:37,400 Out of 12 subjects, 11 of them got the wrong answer. 636 00:38:38,520 --> 00:38:42,360 The only person to spot the true threat was in fact a novice. 637 00:38:44,160 --> 00:38:48,760 All the trained experts fell prey to confirmation bias. 638 00:38:53,520 --> 00:38:55,160 It is not typically the case 639 00:38:55,160 --> 00:38:57,520 that simply being trained as an analyst 640 00:38:57,520 --> 00:39:00,880 gives you the tools you need to overcome cognitive bias. 641 00:39:02,520 --> 00:39:05,680 You can learn techniques for memory improvement. 642 00:39:05,680 --> 00:39:08,640 You can learn techniques for better focus, 643 00:39:08,640 --> 00:39:11,800 but techniques to eliminate cognitive bias 644 00:39:11,800 --> 00:39:13,280 just simply don't work. 645 00:39:19,080 --> 00:39:21,880 And for intelligence analysts in the real world, 646 00:39:21,880 --> 00:39:26,320 the implications of making mistakes from these biases are drastic. 647 00:39:28,480 --> 00:39:32,320 Government reports and studies over the past decade or so 648 00:39:32,320 --> 00:39:35,920 have cited experts as believing that cognitive bias 649 00:39:35,920 --> 00:39:37,440 may have played a role 650 00:39:37,440 --> 00:39:41,440 in a number of very significant intelligence failures, 651 00:39:41,440 --> 00:39:44,600 and yet it remains an understudied problem. 652 00:39:51,960 --> 00:39:53,200 Heads. 653 00:39:54,680 --> 00:39:55,920 Heads. 654 00:39:58,320 --> 00:39:59,960 But the area of our lives 655 00:39:59,960 --> 00:40:04,480 in which these systematic mistakes have the most explosive impact 656 00:40:04,480 --> 00:40:06,120 is in the world of money. 657 00:40:07,480 --> 00:40:11,920 The moment money enters the picture, the rules change. 658 00:40:13,600 --> 00:40:16,520 Many of us think that we're at our most rational 659 00:40:16,520 --> 00:40:18,800 when it comes to decisions about money. 660 00:40:20,120 --> 00:40:24,480 We like to think we know how to spot a bargain, to strike a good deal, 661 00:40:24,480 --> 00:40:28,800 sell our house at the right time, invest wisely. 662 00:40:29,880 --> 00:40:32,600 Thinking about money the right way 663 00:40:32,600 --> 00:40:36,920 is one of the most challenging things for human nature. 664 00:40:38,840 --> 00:40:41,720 But if we're not as rational as we like to think, 665 00:40:41,720 --> 00:40:45,280 and there is a hidden force at work shaping our decisions, 666 00:40:45,280 --> 00:40:47,040 are we deluding ourselves? 667 00:40:47,040 --> 00:40:51,280 Money brings with it a mode of thinking. 668 00:40:51,280 --> 00:40:54,080 It changes the way we react to the world. 
669 00:40:55,760 --> 00:40:57,200 When it comes to money, 670 00:40:57,200 --> 00:41:00,800 cognitive biases play havoc with our best intentions. 671 00:41:02,120 --> 00:41:05,760 There are many mistakes that people make when it comes to money. 672 00:41:18,640 --> 00:41:21,880 Kahneman's insights into our mistakes with money 673 00:41:21,880 --> 00:41:25,600 were to revolutionise our understanding of economics. 674 00:41:27,160 --> 00:41:29,760 It's all about a crucial difference in how we feel 675 00:41:29,760 --> 00:41:32,720 when we win or lose 676 00:41:32,720 --> 00:41:35,640 and our readiness to take a risk. 677 00:41:35,640 --> 00:41:37,440 I would like to take a risk. 678 00:41:37,440 --> 00:41:39,560 Take a risk, OK? 679 00:41:39,560 --> 00:41:40,760 Let's take a risk. 680 00:41:40,760 --> 00:41:44,440 Our willingness to take a gamble is very different depending on 681 00:41:44,440 --> 00:41:47,000 whether we are faced with a loss or a gain. 682 00:41:47,000 --> 00:41:49,800 Excuse me, guys, can you spare two minutes to help us 683 00:41:49,800 --> 00:41:52,840 with a little experiment? Try and win as much money as you can, OK? 684 00:41:52,840 --> 00:41:54,080 OK. OK? 685 00:41:54,080 --> 00:41:56,360 In my hands here I have £20, OK? 686 00:41:56,360 --> 00:41:58,440 Here are two scenarios. 687 00:41:58,440 --> 00:41:59,920 And I'm going to give you ten. 688 00:41:59,920 --> 00:42:03,320 In the first case, you are given ten pounds. 689 00:42:03,320 --> 00:42:06,120 That's now yours. Put it in your pocket, take it away, 690 00:42:06,120 --> 00:42:08,080 spend it on a drink on the South Bank later. 691 00:42:08,080 --> 00:42:09,400 OK. OK? 692 00:42:09,400 --> 00:42:10,960 OK. 693 00:42:10,960 --> 00:42:15,120 Then you have to make a choice about how much more you could gain. 694 00:42:15,120 --> 00:42:17,720 You can either take the safe option, in which case, 695 00:42:17,720 --> 00:42:19,840 I give you an additional five, 696 00:42:19,840 --> 00:42:21,760 or you can take a risk. 697 00:42:21,760 --> 00:42:24,720 If you take a risk, I'm going to flip this coin. 698 00:42:24,720 --> 00:42:28,640 If it comes up heads, you win ten, 699 00:42:28,640 --> 00:42:30,560 but if it comes up tails, 700 00:42:30,560 --> 00:42:32,800 you're not going to win any more. 701 00:42:32,800 --> 00:42:36,880 Would you choose the safe option and get an extra five pounds 702 00:42:36,880 --> 00:42:40,520 or take a risk and maybe win an extra ten or nothing? 703 00:42:40,520 --> 00:42:41,960 Which is it going to be? 704 00:42:45,800 --> 00:42:47,240 I'd go safe. 705 00:42:47,240 --> 00:42:49,520 Safe, five? Yeah. 706 00:42:49,520 --> 00:42:52,240 Take five. You'd take five? Yeah, man. Sure? There we go. 707 00:42:52,240 --> 00:42:54,400 Most people presented with this choice 708 00:42:54,400 --> 00:42:56,960 go for the certainty of the extra fiver. 709 00:42:56,960 --> 00:42:59,280 Thank you very much. Told you it was easy. 710 00:42:59,280 --> 00:43:03,040 In a winning frame of mind, people are naturally rather cautious. 711 00:43:03,040 --> 00:43:04,520 That's yours, too. 712 00:43:04,520 --> 00:43:06,320 That was it? 713 00:43:06,320 --> 00:43:09,400 That was it. Really? Yes. Eh? 714 00:43:11,760 --> 00:43:13,680 But what about losing? 715 00:43:13,680 --> 00:43:17,160 Are we similarly cautious when faced with a potential loss? 716 00:43:17,160 --> 00:43:20,000 In my hands, I've got £20 and I'm going to give that to you. 717 00:43:20,000 --> 00:43:21,040 That's now yours.
718 00:43:21,040 --> 00:43:23,720 OK. You can put it in your handbag. 719 00:43:23,720 --> 00:43:26,360 This time, you're given £20. 720 00:43:28,280 --> 00:43:30,520 And again, you must make a choice. 721 00:43:32,920 --> 00:43:37,440 Would you choose to accept a safe loss of £5 or would you take a risk? 722 00:43:39,120 --> 00:43:42,120 If you take a risk, I'm going to flip this coin. 723 00:43:42,120 --> 00:43:45,400 If it comes up heads, you don't lose anything, 724 00:43:45,400 --> 00:43:48,520 but if it comes up tails, then you lose ten pounds. 725 00:43:49,800 --> 00:43:52,720 In fact, it's exactly the same outcome. 726 00:43:52,720 --> 00:43:55,640 In both cases, you face a choice between ending up with 727 00:43:55,640 --> 00:44:00,520 a certain £15 or tossing a coin to get either ten or twenty. 728 00:44:01,840 --> 00:44:06,000 I will risk losing ten or nothing. OK. 729 00:44:06,000 --> 00:44:09,480 But the crucial surprise here is that when the choice is framed 730 00:44:09,480 --> 00:44:12,640 in terms of a loss, most people take a risk. 731 00:44:13,840 --> 00:44:16,240 Take a risk. Take a risk, OK. 732 00:44:16,240 --> 00:44:18,120 I'll risk it. You'll risk it? OK. 733 00:44:18,120 --> 00:44:20,680 Our slow System 2 could probably work out 734 00:44:20,680 --> 00:44:23,320 that the outcome is the same in both cases. 735 00:44:23,320 --> 00:44:24,960 And that's heads, you win. 736 00:44:24,960 --> 00:44:27,520 But it's too limited and too lazy. 737 00:44:27,520 --> 00:44:29,360 That's the easiest £20 you'll ever make. 738 00:44:29,360 --> 00:44:34,400 Instead, fast System 1 makes a rough guess based on change. 739 00:44:34,400 --> 00:44:38,960 And that's all there is to it, thank you very much. Oh, no! Look. 740 00:44:38,960 --> 00:44:41,400 And System 1 doesn't like losing. 741 00:44:47,080 --> 00:44:49,920 If you were to lose £10 in the street today 742 00:44:49,920 --> 00:44:53,520 and then find £10 tomorrow, you would be financially unchanged 743 00:44:53,520 --> 00:44:55,560 but actually we respond to changes, 744 00:44:55,560 --> 00:45:00,360 so the pain of the loss of £10 looms much larger, it feels more painful. 745 00:45:00,360 --> 00:45:02,840 In fact, you'd probably have to find £20 746 00:45:02,840 --> 00:45:05,720 to offset the pain that you feel by losing ten. 747 00:45:05,720 --> 00:45:06,800 Heads. 748 00:45:06,800 --> 00:45:10,040 At the heart of this, is a bias called loss aversion, 749 00:45:10,040 --> 00:45:13,800 which affects many of our financial decisions. 750 00:45:13,800 --> 00:45:17,280 People think in terms of gains and losses. 751 00:45:17,280 --> 00:45:20,760 Heads. It's tails. Oh! 752 00:45:20,760 --> 00:45:25,760 And in their thinking, typically, losses loom larger than gains. 753 00:45:26,840 --> 00:45:29,440 We even have an idea by...by how much, 754 00:45:29,440 --> 00:45:33,320 by roughly a factor of two or a little more than two. 755 00:45:34,360 --> 00:45:36,400 That is loss aversion, 756 00:45:36,400 --> 00:45:38,960 and it certainly was the most important thing 757 00:45:38,960 --> 00:45:40,640 that emerged from our work. 758 00:45:43,000 --> 00:45:46,640 It's a vital insight into human behaviour, 759 00:45:46,640 --> 00:45:49,680 so important that it led to a Nobel prize 760 00:45:49,680 --> 00:45:53,200 and the founding of an entirely new branch of economics. 761 00:45:55,120 --> 00:45:58,120 When we think we're winning, we don't take risks. 
762 00:45:59,560 --> 00:46:04,200 But when we're faced with a loss, frankly, we're a bit reckless. 763 00:46:14,040 --> 00:46:16,720 But loss aversion doesn't just affect people 764 00:46:16,720 --> 00:46:19,400 making casual five-pound bets. 765 00:46:19,400 --> 00:46:23,560 It can affect anyone at any time, 766 00:46:23,560 --> 00:46:27,760 including those who work in the complex system of high finance, 767 00:46:27,760 --> 00:46:30,160 in which trillions of dollars are traded. 768 00:46:32,560 --> 00:46:35,400 In our current complex environments, 769 00:46:35,400 --> 00:46:39,280 we now have the means as well as the motive 770 00:46:39,280 --> 00:46:42,120 to make very serious mistakes. 771 00:46:44,040 --> 00:46:48,600 The bedrock of economics is that people think rationally. 772 00:46:48,600 --> 00:46:52,600 They calculate risks and rewards and decide accordingly. 773 00:46:55,040 --> 00:46:57,720 But we're not always rational. 774 00:46:57,720 --> 00:47:00,320 We rarely behave like Mr Spock. 775 00:47:01,960 --> 00:47:05,400 For most of our decisions, we use fast, intuitive, 776 00:47:05,400 --> 00:47:08,360 but occasionally unreliable System 1. 777 00:47:11,120 --> 00:47:13,680 And in a global financial market, 778 00:47:13,680 --> 00:47:16,240 that can lead to very serious problems. 779 00:47:18,800 --> 00:47:23,600 I think what the financial crisis did was, it simply said, 780 00:47:23,600 --> 00:47:25,280 "You know what? 781 00:47:25,280 --> 00:47:30,440 "People are a lot more vulnerable to psychological pitfalls 782 00:47:30,440 --> 00:47:33,600 "than we really understood before." 783 00:47:33,600 --> 00:47:38,080 Basically, human psychology is just too flawed 784 00:47:38,080 --> 00:47:41,520 to expect that we could avert a crisis. 785 00:47:48,360 --> 00:47:53,000 Understanding these pitfalls has led to a new branch of economics. 786 00:47:54,720 --> 00:47:56,480 Behavioural economics. 787 00:47:58,680 --> 00:48:01,480 Thanks to psychologists like Hersh Shefrin, 788 00:48:01,480 --> 00:48:04,560 it's beginning to establish a toehold on Wall Street. 789 00:48:05,920 --> 00:48:09,040 It takes account of the way we actually make decisions 790 00:48:09,040 --> 00:48:11,000 rather than how we say we do. 791 00:48:17,360 --> 00:48:21,120 The financial crisis, I think, was as large a problem as it was 792 00:48:21,120 --> 00:48:24,320 because certain psychological traits like optimism, 793 00:48:24,320 --> 00:48:27,000 over-confidence and confirmation bias 794 00:48:27,000 --> 00:48:31,840 played a very large role among a part of the economy 795 00:48:31,840 --> 00:48:36,480 where serious mistakes could be made, and were. 796 00:48:44,160 --> 00:48:48,160 But for as long as our financial system assumes we are rational, 797 00:48:48,160 --> 00:48:50,760 our economy will remain vulnerable. 798 00:48:52,360 --> 00:48:53,560 I'm quite certain 799 00:48:53,560 --> 00:48:58,240 that if the regulators listened to behavioural economists early on, 800 00:48:58,240 --> 00:49:01,800 we would have designed a very different financial system 801 00:49:01,800 --> 00:49:05,560 and we wouldn't have had the incredible increase 802 00:49:05,560 --> 00:49:10,480 in the housing market and we wouldn't have this financial catastrophe. 803 00:49:11,800 --> 00:49:15,080 And so when Kahneman collected his Nobel prize, 804 00:49:15,080 --> 00:49:17,040 it wasn't for psychology, 805 00:49:17,040 --> 00:49:19,120 it was for economics.
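The arithmetic behind the two street scenarios can be spelled out. The short sketch below is an illustration added alongside the transcript, not anything shown in the programme: it scores both frames with the standard prospect-theory value function, using Tversky and Kahneman's published parameter estimates rather than figures quoted in the film (their loss weight of 2.25 is one version of the "factor of two or a little more than two" mentioned above). A decision-maker who, like System 1, values changes from a reference point rather than final amounts picks the certain £15 in the gain frame and the coin toss in the loss frame, even though the outcomes are identical.

```python
# Illustrative sketch only (not from the programme): the street gamble scored
# with the textbook prospect-theory value function. The parameters are
# Tversky and Kahneman's published median estimates, not figures from the film.

ALPHA = 0.88    # diminishing sensitivity: a second £10 feels smaller than the first
LAMBDA = 2.25   # loss aversion: a loss weighs a bit more than twice an equal gain

def felt_value(change):
    """Subjective value of a change from the reference point (the cash handed over at the start)."""
    if change >= 0:
        return change ** ALPHA
    return -LAMBDA * (-change) ** ALPHA

def choose(frame, start, sure_final, gamble_finals):
    """Compare a sure final amount with a 50/50 gamble, judged as changes from `start`."""
    sure = felt_value(sure_final - start)
    gamble = sum(felt_value(outcome - start) for outcome in gamble_finals) / 2
    pick = "sure thing" if sure > gamble else "gamble"
    print(f"{frame} frame: £{sure_final} for sure vs a toss for £{gamble_finals[0]} or £{gamble_finals[1]}"
          f" -> felt {sure:+.2f} vs {gamble:+.2f} -> takes the {pick}")

# Both frames end in the same place: a certain £15, or a coin toss for £20 or £10.
choose("gain", start=10, sure_final=15, gamble_finals=(20, 10))   # handed £10 first
choose("loss", start=20, sure_final=15, gamble_finals=(20, 10))   # handed £20 first

# "Losses loom larger than gains... by roughly a factor of two":
print(f"finding £10 feels like {felt_value(10):+.2f}; losing £10 feels like {felt_value(-10):+.2f}")
```

Run as written, the gain frame comes out cautious and the loss frame risk-seeking, matching what happens on the street. Setting ALPHA to 1 makes both frames a dead heat, which is the equivalence a careful System 2 would compute; it is the diminishing sensitivity to ever-larger changes that tips the two frames in opposite directions, while the loss weight is what makes losing £10 sting far more than finding £10 pleases.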
806 00:49:25,000 --> 00:49:29,000 The big question is, what can we do about these systematic mistakes? 807 00:49:30,600 --> 00:49:34,600 Can we hope to find a way round our fast-thinking biases 808 00:49:34,600 --> 00:49:36,640 and make better decisions? 809 00:49:39,080 --> 00:49:40,280 To answer this, 810 00:49:40,280 --> 00:49:43,560 we need to know the evolutionary origins of our mistakes. 811 00:49:46,840 --> 00:49:48,840 Just off the coast of Puerto Rico 812 00:49:48,840 --> 00:49:52,640 is probably the best place in the world to find out. 813 00:49:59,760 --> 00:50:03,040 The tiny island of Cayo Santiago. 814 00:50:05,720 --> 00:50:08,880 So we're now in the boat, heading over to Cayo Santiago. 815 00:50:08,880 --> 00:50:12,440 This is an island filled with a thousand rhesus monkeys. 816 00:50:12,440 --> 00:50:13,880 Once you pull in, it looks 817 00:50:13,880 --> 00:50:16,920 a little bit like you're going to Jurassic Park. You're not sure 818 00:50:16,920 --> 00:50:19,840 what you're going to see. Then you'll see your first monkey, 819 00:50:19,840 --> 00:50:21,320 and it'll be comfortable, like, 820 00:50:21,320 --> 00:50:24,120 "Ah, the monkeys are here, everything's great." 821 00:50:30,760 --> 00:50:33,520 It's an island devoted to monkey research. 822 00:50:34,640 --> 00:50:37,000 You see the guys hanging out on the cliff up there? 823 00:50:37,000 --> 00:50:38,040 Pretty cool. 824 00:50:41,560 --> 00:50:43,520 The really special thing about Cayo Santiago 825 00:50:43,520 --> 00:50:44,640 is that the animals here, 826 00:50:44,640 --> 00:50:47,720 because they've grown up over the last seventy years around humans, 827 00:50:47,720 --> 00:50:49,360 they're completely habituated, 828 00:50:49,360 --> 00:50:52,120 and that means we can get up close to them, show them stuff, 829 00:50:52,120 --> 00:50:53,720 look at how they make decisions. 830 00:50:53,720 --> 00:50:54,960 We're able to do this here 831 00:50:54,960 --> 00:50:57,920 in a way that we'd never be able to do it anywhere else, really. 832 00:50:57,920 --> 00:50:58,960 It's really unique. 833 00:51:02,040 --> 00:51:04,400 Laurie Santos is here to find out 834 00:51:04,400 --> 00:51:08,040 if monkeys make the same mistakes in their decisions that we do. 835 00:51:11,240 --> 00:51:14,480 Most of the work we do is comparing humans and other primates, 836 00:51:14,480 --> 00:51:16,760 trying to ask what's special about humans. 837 00:51:16,760 --> 00:51:18,600 But really, what we want to understand is, 838 00:51:18,600 --> 00:51:21,600 what's the evolutionary origin of some of our dumber strategies, 839 00:51:21,600 --> 00:51:23,920 some of those spots where we get things wrong? 840 00:51:23,920 --> 00:51:26,080 If we could understand where those came from, 841 00:51:26,080 --> 00:51:28,200 that's where we'll get some insight. 842 00:51:31,080 --> 00:51:32,480 If Santos can show us 843 00:51:32,480 --> 00:51:35,880 that monkeys have the same cognitive biases as us, 844 00:51:35,880 --> 00:51:38,960 it would suggest that these biases evolved a long time ago. 845 00:51:43,880 --> 00:51:49,000 And a mental strategy that old would be almost impossible to change. 846 00:51:50,120 --> 00:51:53,760 We started this work around the time of the financial collapse.
847 00:51:56,080 --> 00:51:57,600 So, when we were thinking about 848 00:51:57,600 --> 00:52:01,080 what dumb strategies could we look at in monkeys, it was pretty obvious 849 00:52:01,080 --> 00:52:03,360 that some of the human economic strategies 850 00:52:03,360 --> 00:52:06,120 which were in the news might be the first thing to look at. 851 00:52:06,120 --> 00:52:09,000 And one of the particular things we wanted to look at was 852 00:52:09,000 --> 00:52:12,280 whether or not the monkeys are loss averse. 853 00:52:12,280 --> 00:52:16,320 But monkeys, smart as they are, have yet to start using money. 854 00:52:16,320 --> 00:52:18,840 And so that was kind of where we started. 855 00:52:18,840 --> 00:52:21,720 We said, "Well, how can we even ask this question 856 00:52:21,720 --> 00:52:24,000 "of if monkeys make financial mistakes?" 857 00:52:24,000 --> 00:52:27,200 And so we decided to do it by introducing the monkeys 858 00:52:27,200 --> 00:52:31,000 to their own new currency and just let them buy their food. 859 00:52:32,560 --> 00:52:36,400 So I'll show you some of this stuff we've been up to with the monkeys. 860 00:52:37,520 --> 00:52:39,040 Back in her lab at Yale, 861 00:52:39,040 --> 00:52:42,960 she introduced a troop of monkeys to their own market, 862 00:52:42,960 --> 00:52:46,840 giving them round shiny tokens they could exchange for food. 863 00:52:48,840 --> 00:52:50,280 So here's Holly. 864 00:52:50,280 --> 00:52:52,080 She comes in, hands over a token 865 00:52:52,080 --> 00:52:55,240 and you can see, she just gets to grab the grape there. 866 00:52:55,240 --> 00:52:57,560 One of the first things we wondered was just, 867 00:52:57,560 --> 00:52:59,000 can they in some sense learn 868 00:52:59,000 --> 00:53:02,520 that a different store sells different food at different prices? 869 00:53:02,520 --> 00:53:06,560 So what we did was, we presented the monkeys with situations 870 00:53:06,560 --> 00:53:10,600 where they met traders who sold different goods at different rates. 871 00:53:10,600 --> 00:53:13,920 So what you'll see in this clip is the monkeys meeting a new trader. 872 00:53:13,920 --> 00:53:17,920 She's actually selling grapes for three grapes per one token. 873 00:53:21,360 --> 00:53:23,040 And what we found is that in this case, 874 00:53:23,040 --> 00:53:24,520 the monkeys are pretty rational, 875 00:53:24,520 --> 00:53:26,960 so when they get a choice of a guy who sells, you know, 876 00:53:26,960 --> 00:53:30,720 three goods for one token, they actually shop more at that guy. 877 00:53:36,160 --> 00:53:38,800 Having taught the monkeys the value of money, 878 00:53:38,800 --> 00:53:42,440 the next step was to see if monkeys, like humans, 879 00:53:42,440 --> 00:53:46,360 suffer from that most crucial bias, loss aversion. 880 00:53:48,840 --> 00:53:52,080 And so what we did was, we introduced the monkeys to traders 881 00:53:52,080 --> 00:53:56,120 who either gave out losses or gains relative to what they should. 882 00:53:57,160 --> 00:54:00,120 So I could make the monkey think he's getting a bonus 883 00:54:00,120 --> 00:54:01,520 simply by having him trade 884 00:54:01,520 --> 00:54:04,000 with a trader who's starting with a single grape 885 00:54:04,000 --> 00:54:06,240 but then when the monkey pays this trader, 886 00:54:06,240 --> 00:54:08,640 she actually gives him an extra, so she gives him a bonus. 887 00:54:08,640 --> 00:54:10,280 At the end, the monkey gets two, 888 00:54:10,280 --> 00:54:13,320 but he thinks he got that second one as a bonus.
889 00:54:13,320 --> 00:54:16,080 We can then compare what the monkeys do with that guy 890 00:54:16,080 --> 00:54:18,320 versus a guy who gives the monkey losses. 891 00:54:18,320 --> 00:54:19,760 This is a guy who shows up, 892 00:54:19,760 --> 00:54:22,360 who pretends he's going to sell three grapes, 893 00:54:22,360 --> 00:54:25,280 but then when the monkey actually pays this trader, 894 00:54:25,280 --> 00:54:28,880 he'll take one of the grapes away and give the monkeys only two. 895 00:54:30,280 --> 00:54:33,080 The big question then is how the monkeys react 896 00:54:33,080 --> 00:54:36,160 when faced with a choice between a loss and a gain. 897 00:54:37,240 --> 00:54:40,120 So she'll come in, she's met these two guys before. 898 00:54:40,120 --> 00:54:42,520 You can see she goes with the bonus option, 899 00:54:42,520 --> 00:54:46,680 even waits patiently for her additional piece to be added here, 900 00:54:47,840 --> 00:54:51,680 and then takes the bonus, avoiding the person who gives her losses. 901 00:54:57,360 --> 00:55:00,160 So monkeys hate losing just as much as people. 902 00:55:08,080 --> 00:55:11,840 And crucially, Santos found that monkeys, as well, 903 00:55:11,840 --> 00:55:15,840 are more likely to take risks when faced with a loss. 904 00:55:19,440 --> 00:55:20,520 This suggests to us 905 00:55:20,520 --> 00:55:23,120 that the monkeys seem to frame their decisions 906 00:55:23,120 --> 00:55:24,760 in exactly the same way we do. 907 00:55:24,760 --> 00:55:27,040 They're not thinking just about the absolute, 908 00:55:27,040 --> 00:55:29,400 they're thinking relative to what they expect. 909 00:55:29,400 --> 00:55:32,000 And when they're getting less than they expect, 910 00:55:32,000 --> 00:55:35,600 when they're getting losses, they too become more risk-seeking. 911 00:55:38,680 --> 00:55:42,240 The fact that we share this bias with these monkeys suggests 912 00:55:42,240 --> 00:55:45,480 that it's an ancient strategy etched into our DNA 913 00:55:45,480 --> 00:55:47,880 more than 35 million years ago. 914 00:55:50,920 --> 00:55:53,280 And what we learn from the monkeys is that 915 00:55:53,280 --> 00:55:55,520 if this bias is really that old, 916 00:55:55,520 --> 00:55:59,800 if we really have had this strategy for the last 35 million years, 917 00:55:59,800 --> 00:56:02,560 simply deciding to overcome it is just not going to work. 918 00:56:02,560 --> 00:56:06,720 We need better ways to make ourselves avoid some of these pitfalls. 919 00:56:08,880 --> 00:56:13,600 Making mistakes, it seems, is just part of what it is to be human. 920 00:56:19,840 --> 00:56:23,400 We are stuck with our intuitive inner stranger. 921 00:56:26,000 --> 00:56:29,000 The challenge this poses is profound. 922 00:56:31,480 --> 00:56:34,720 If it's human nature to make these predictable mistakes 923 00:56:34,720 --> 00:56:38,400 and we can't change that, what, then, can we do? 924 00:56:40,360 --> 00:56:43,520 We need to accept ourselves as we are. 925 00:56:43,520 --> 00:56:46,560 The cool thing about being a human versus a monkey 926 00:56:46,560 --> 00:56:50,440 is that we have a deliberative self that can reflect on our biases. 927 00:56:50,440 --> 00:56:53,160 System 2 in us has for the first time realised 928 00:56:53,160 --> 00:56:56,360 that there's a System 1, and with that realisation, 929 00:56:56,360 --> 00:56:59,920 we can shape the way we set up policies. 
930 00:56:59,920 --> 00:57:03,040 We can shape the way we set up situations to allow ourselves 931 00:57:03,040 --> 00:57:04,560 to make better decisions. 932 00:57:04,560 --> 00:57:07,640 This is the first time in evolution that this has happened. 933 00:57:12,000 --> 00:57:16,520 If we want to avoid mistakes, we have to reshape the environment 934 00:57:16,520 --> 00:57:20,960 we've built around us rather than hope to change ourselves. 935 00:57:27,800 --> 00:57:31,400 We've achieved a lot despite all of these biases. 936 00:57:31,400 --> 00:57:34,200 If we are aware of them, we can probably do things 937 00:57:34,200 --> 00:57:38,400 like design our institutions and our regulations 938 00:57:38,400 --> 00:57:43,760 and our own personal environments and working lives to minimise 939 00:57:43,760 --> 00:57:45,560 the effect of those biases 940 00:57:45,560 --> 00:57:49,160 and help us think about how to overcome them. 941 00:57:54,760 --> 00:57:56,760 We are limited, we're not perfect. 942 00:57:56,760 --> 00:57:58,960 We're irrational in all kinds of ways, 943 00:57:58,960 --> 00:58:01,920 but we can build a world that is compatible with this 944 00:58:01,920 --> 00:58:05,560 and get us to make better decisions rather than worse decisions. 945 00:58:05,560 --> 00:58:06,600 That's my hope. 946 00:58:10,640 --> 00:58:13,520 And by accepting our inner stranger, 947 00:58:13,520 --> 00:58:18,320 we may come to a better understanding of our own minds. 948 00:58:18,320 --> 00:58:21,480 I think it is important, in general, 949 00:58:21,480 --> 00:58:24,920 to be aware of where beliefs come from. 950 00:58:27,240 --> 00:58:30,000 And if we think that we have reasons for what 951 00:58:30,000 --> 00:58:34,040 we believe, that is often a mistake, 952 00:58:34,040 --> 00:58:38,360 that our beliefs and our wishes and our hopes 953 00:58:38,360 --> 00:58:41,160 are not always anchored in reasons. 954 00:58:41,160 --> 00:58:43,240 They're anchored in something else 955 00:58:43,240 --> 00:58:46,760 that comes from within and is different.