Anyone who’s ever spent some time with a basketball knows the hot hand is real. A player who’s made several shots in a row is more likely to make the next shot as well. Teammates know to feed him the ball and defenders know to guard him more tightly.
But in 1985, an academic paper supposedly debunked the hot hand. The hot hand was a “misperception of random sequences”, “a powerful and widely-shared cognitive illusion”. In short, the hot hand was a fallacy.
This 1985 paper was highly influential. Thereafter, the hot-hand fallacy would be frequently cited as yet another textbook example of human stupidity.
In 1988, Stephen Jay Gould wrote, “Everybody knows about hot hands. The only problem is that no such phenomenon exists.” In 2011, Nobel Laureate Daniel Kahneman wrote, “The hot hand is a massive and widespread cognitive illusion.” And in 2015, a Hollywood movie (The Big Short) would even take the time to explain the hot-hand fallacy.
But here’s the ironic twist. 30 years later, two economists would discover that the 1985 paper itself contained a subtle but fatal statistical fallacy. The very data that supposedly debunked the hot hand were actually evidence in favor of the hot hand. The hot-hand fallacy was itself a fallacy. Turns out the dumb jocks weren’t so dumb after all.
This is the story of the rise and fall of the hot-hand fallacy.
Coin-Tossing & Behavioral Economics
Hot-hand deniers argue that basketball shots are just like coin tosses. Here’s what one of the 1985 authors said in a recent podcast, “The distribution of hits and misses in the game of basketball looks just like the distribution of heads and tails when you’re flipping a coin.”
Let’s first try to understand where the hot-hand deniers are coming from and why on earth they’d even begin to think that basketball shots are just like coin tosses.
Traditional economists assume that people are perfectly rational beings who never make any mistakes. One justification for this absurd assumption is that irrational beings act senselessly and arbitrarily. Their behavior is neither explicable nor predictable and thus not amenable to scientific analysis.
A new school of thought called behavioral economics disagrees. Just because people aren’t perfectly rational doesn’t mean they act senselessly and arbitrarily. Instead, people are predictably irrational. They commit systematic and predictable errors that social scientists can and should study.
Behavioral economics borrows heavily from psychology. Indeed, two Israeli psychologists were among the pioneers of behavioral economics. Daniel Kahneman was the first and still the only psychologist to win the Economics Nobel Prize. And Amos Tversky would also have won it, had he not died of cancer in 1996.
Circa 1985, Tversky and two of his students set out to debunk the hot hand. They believed the hot hand to be yet another example of apophenia — the erroneous tendency to find patterns and meaning where none exist.
The tendency to see faces everywhere is the prototypical example of apophenia. Another example of apophenia is the tendency to see patterns, even in sequences that are completely random. In particular, when the inevitable long streak shows up, people mistakenly believe that more than mere chance is at work.
Say an NBA player shoots 50% and takes 20 shots per game. Then simply by chance alone, he’d have a hot shooting streak of five consecutive makes once every four games. Tversky suspected that this explained the supposed hot hand: Hot streaks due simply to chance were erroneously taken by fans to be instances of the hot hand.
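That once-every-four-games figure is easy to verify. Here’s a quick sketch (Python is my choice; none of this code comes from the 1985 paper) that computes the exact chance of at least one run of five straight makes when a 50% shooter takes 20 independent shots:

```python
def p_streak(n_shots=20, streak=5, p=0.5):
    # probs[r] = probability that the current run of makes has length r
    # and the target streak has not yet occurred
    probs = [1.0] + [0.0] * (streak - 1)
    hit = 0.0  # probability the streak has occurred at some point
    for _ in range(n_shots):
        new = [0.0] * streak
        for run, q in enumerate(probs):
            new[0] += q * (1 - p)        # a miss resets the run
            if run + 1 == streak:
                hit += q * p             # a make completes the streak
            else:
                new[run + 1] += q * p    # a make extends the run
        probs = new
    return hit

print(round(p_streak(), 2))  # 0.25, i.e. roughly one game in four
```

So purely by chance, a 50% shooter taking 20 shots a game runs off five straight makes about once every four games.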
The Statistical Analysis
Is Tversky right? Well, we can argue until the cows come home, but to settle the debate, let’s turn to statistical analysis.
Say we toss a fair coin four times. If the first three are heads, what’s the probability the fourth is also heads? And if instead the first three are tails, what’s the probability the fourth is heads?
The answer to both questions is simply 50%. The first three tosses have no bearing on the fourth. As statisticians say, coin tosses are independent and memoryless. Regardless of what happened before, the probability that any coin toss is heads is fixed at 50%.
Now. Are basketball shots likewise independent and memoryless? Is the probability of making a shot likewise fixed?
Of course, with basketball, we have two additional complications. First, not every player shoots 50%. Second, shot difficulty varies. For example, uncontested dunks are easier than contested three-pointers. But we can easily get around these complications by doing a controlled shooting experiment.
Here’s what Tversky did with 26 Cornell basketball players.
First, determine a spot from which each subject shoots 50%. Next, have the subject take 100 shots from there. Then examine whether the makes and misses resemble coin tosses.
One simple statistical test is to compare those shots taken immediately after three consecutive misses versus those taken immediately after three consecutive makes. If the hot hand is real, players should be more likely to make shots after three makes than after three misses. On average, Tversky’s 26 subjects made 45% of their shots after three misses and 49% after three makes.
Here we might conclude, as Tversky did, that these subjects shoot only four percentage points better after three makes than after three misses.
This seems like a perfectly reasonable conclusion. But amazingly, it’s wrong. Indeed, this is the statistical fallacy that befell Tversky. It turns out that when you go back and compare shots taken after three makes and shots taken after three misses, there’s actually a subtle and unexpected selection bias going on.
Interestingly, this selection bias also crops up in classic probability puzzles like Monty Hall. But the explanation is a little wonky and so I’ll leave it for the next video.
Now. With a little work, we can correct for this selection bias. And once we do so, the very same data actually show that these subjects shoot 13 percentage points better after three makes than after three misses. This is a very real and very large hot-hand effect.
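The bias itself can be demonstrated with nothing but coin flips. The sketch below (my own illustration, not Miller and Sanjurjo’s code) simulates many sequences of 100 fair flips and, for each sequence, computes the proportion of heads among flips that immediately follow three straight heads. If this way of measuring were unbiased, the average would be 50%; instead it lands several percentage points lower.

```python
import random

def prop_after_streak(flips, k=3):
    """Proportion of heads among flips immediately following k straight heads.
    Returns None if the sequence has no such flips."""
    after = [flips[i] for i in range(k, len(flips)) if all(flips[i - k:i])]
    return sum(after) / len(after) if after else None

rng = random.Random(0)
props = []
for _ in range(10_000):
    flips = [rng.random() < 0.5 for _ in range(100)]
    p = prop_after_streak(flips)
    if p is not None:
        props.append(p)

print(round(sum(props) / len(props), 3))  # noticeably below 0.5
```

That shortfall is the selection bias: a fair coin “fails” this test, so a shooter who merely matches his base rate after three makes is actually running hot.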
The Hot Hand in Three-Point Contests
The hot hand also shows up in the NBA’s Three-Point Contests. After three consecutive makes, contestants are 6.3 percentage points more likely than usual to make the next shot.
6.3 percentage points is nothing to sneeze at. In comparison, in 2017, Steph Curry shot only 4.8 percentage points better than LeBron.
6.3 percentage points is, moreover, merely an average. Indeed, one problem with many hot-hand studies is that they usually look at some sort of an average. But by focusing on the average, we miss out on important and interesting variations.
For example, following three consecutive makes, four contestants including Mr. Cupcake shot over 20 percentage points better. But perhaps more remarkably, there were four contestants who dragged down the average by shooting over 10 percentage points worse.
So. We’ve detected the hot hand in a controlled shooting experiment. We’ve also detected it in Three-Point Contests. But what about in actual NBA games?
Unfortunately, detecting the hot hand in actual games is really difficult and we haven’t yet managed to do so. There are at least two reasons for this.
First, players and coaches can respond to the hot hand. Defenders may double-team the hot shooter. And the overconfident hot shooter may himself jack up ever-tougher shots. These responses typically diminish the hot-hand effect and thus make it harder to detect.
Second, small sample sizes make the statistical analysis especially challenging. In 2017, the most shots taken by a single player in a single game was 44. The mean was 8. And the median was 7. With so few shots to work with, even if the hot hand were consistently strong, we’d rarely be able to detect it.
Here’s a very simple example to illustrate the problem of small sample sizes. Imagine we want to know if Curry shoots better than Shaq. We get each to take one free throw. Curry makes his while Shaq misses. Based solely on these two free throws, can we conclude that Curry shoots better than Shaq? Nope. One free throw apiece is far too tiny a sample for us to make any conclusions whatsoever.
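We can put numbers on this. Suppose Curry really were the far better free-throw shooter, say a true 90% against Shaq’s true 50% (made-up rates, purely for illustration). This sketch computes the probability that Curry actually outscores Shaq when each takes n free throws:

```python
from math import comb

def binom_pmf(n, k, p):
    # probability of exactly k makes in n independent attempts at rate p
    return comb(n, k) * p**k * (1 - p)**(n - k)

def p_correct_ranking(n, p_better=0.9, p_worse=0.5):
    # probability the better shooter makes strictly more of n attempts each
    return sum(binom_pmf(n, i, p_better) * binom_pmf(n, j, p_worse)
               for i in range(n + 1) for j in range(i))

for n in (1, 10, 100):
    print(n, round(p_correct_ranking(n), 3))
```

With one attempt each, the “correct” ranking shows up only 45% of the time (0.9 × 0.5); it takes dozens of attempts before a gap this enormous becomes reliably visible.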
And so we cannot conclude that Curry shoots better than Shaq. But equally and just as importantly, we cannot conclude that Curry doesn’t shoot better than Shaq.
There is an old cliché in science: absence of evidence is not evidence of absence. Our data can’t prove that Curry shoots better than Shaq. But this doesn’t mean we should leap to the opposite conclusion that Shaq shoots better than Curry.
Yet this is exactly the mistake made by hot-hand deniers. Their poor data and methods couldn’t detect the hot hand. They then leapt to the opposite conclusion and labeled the hot hand a fallacy, “a powerful and widely-shared cognitive illusion”. Turns out they had been the only ones laboring under an illusion.
Postscript: Evidence from Free-Throw Data
Now, when we talk about the hot hand, we don’t usually think of free throws. But one good thing about free throws is that they take place in a fairly consistent setting. It’s always 15 feet from the backboard, no defense.
We can actually use free-throw data to say at least three things about the hot hand.
First, in Karl Malone’s rookie year, he made only 48.1% of his free throws. He then improved by over ten percentage points in his second season and again in his third. In contrast, The Artist Formerly Known as Dwight Howard (TAFKADH) has gotten worse over time.
TAFKADH: “In high school, I was 90% from the line.”
Stephen A. Smith: “Ninety?”
Stephen A. Smith: “Nine-zero?”
Stephen A. Smith: “You?”
Stephen A. Smith: “Wow.”
TAFKADH: “Heh heh heh. So you know, it’s all mental.”
Players can shoot better in one season than another. So why can’t they shoot better in one game or one quarter than another? In other words, why not the hot hand? The burden is on hot-hand deniers to explain why one is possible, but not the other.
Once again, the issue is one of sample sizes. If a player shot 50% in Season 1 and 70% in Season 2, even the hot-hand denier must admit she was a better shooter in Season 2. But if she shot 50% on Tuesday and 70% on Thursday, he says this could’ve been due to chance. Which is true enough. But where he goes wrong is when he concludes she wasn’t hot on Thursday. Again, absence of evidence is not evidence of absence. Just because our sample sizes are too small to conclude she was hot on Thursday doesn’t mean she wasn’t.
Next. NBA players consistently shoot four to five percentage points better on their second free throw. The data here are only for trips to the line for exactly two free throws.
In 2017, players made only 74.6% of their first free throws but 79.1% of their second. That’s a difference of 4.5 percentage points. And year after year, this difference is remarkably stable at four to five percentage points.
Now. What this shows is that free throws are not like coin tosses. The probability of making a free throw is not fixed.
By the way, what explains this phenomenon? Maybe the first free throw serves as practice. Or maybe by the second free throw, the player has had another 20 seconds to catch his breath.
Whatever the explanation, what this proves is that NBA players are human after all. They have identifiable periods of ups and downs and do not shoot with a fixed probability.
For our final finding, we again look only at trips to the line for exactly two free throws. But we now ask: If a player made the first free throw, is he also more likely to make the second?
In 2017, if Steven Adams missed the first free throw, he went on to make only 57.7% of the second. But if he made the first, he went on to make 81.7% of the second. That’s a difference of 24 percentage points.
In contrast, LeBron was 16.1 percentage points less likely to make the second free throw if he had made the first. This is a very large anti-hot-hand effect. Perhaps not coincidentally, in 2017, LeBron also shot a career-low 67.4% from the line.
Now. These are merely quick findings that demand further investigation. But they do strongly suggest that at least for some players, free throws are neither independent nor memoryless. Once again, free throws are not like coin tosses.
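Splits like these are straightforward to compute from play-by-play data. Here’s a minimal sketch (the function and the toy data are mine, just to show the shape of the calculation): given one (made first, made second) pair per two-shot trip, compare the second-shot percentage after a make versus after a miss.

```python
def second_ft_split(trips):
    """trips: list of (made_first, made_second) booleans,
    one pair per two-shot trip to the line."""
    rate = lambda xs: 100 * sum(xs) / len(xs) if xs else float("nan")
    after_make = rate([s for f, s in trips if f])
    after_miss = rate([s for f, s in trips if not f])
    return after_make, after_miss

# Toy data: five trips to the line
trips = [(True, True), (True, True), (True, False),
         (False, True), (False, False)]
after_make, after_miss = second_ft_split(trips)
print(round(after_make, 1), round(after_miss, 1))  # 66.7 50.0
```

A real analysis would, of course, pool thousands of trips per player and worry about confounds like season-to-season changes in ability.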
 The two heroes are Joshua Miller and Adam Sanjurjo. They are milking their discovery for all it’s worth. As of 2017-09-14, they have five working papers about the hot hand (see Sanjurjo’s website): (1) “Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers”; (2) “A Bridge from Monty Hall to the (Anti-) Hot Hand: Restricted Choice, Selection Bias, and Empirical Practice”; (3) “Is it a Fallacy to Believe in the Hot Hand in the NBA Three Point Contest?”; (4) “A Cold Shower for the Hot Hand Fallacy”; and (5) “A Visible hand? Betting on the hot hand in Gilovich, Vallone, and Tversky (1985)”
 The 1985 paper explicitly and repeatedly compared basketball shots to coin tosses. For example, “If people’s perceptions of coin tossing are biased, it should not be surprising that they perceive sequential dependencies in basketball when none exist.”
 For a popular account of the life and work of Daniel Kahneman and Amos Tversky, see Michael Lewis’s The Undoing Project: A Friendship That Changed Our Minds (2016).
 As stated in their footnote 4, “three of the players were not able to complete all 100 shots”. Inspection of their Table 4 reveals that these three players were males #4, #7, and #8, who took only 90, 75, and 50 shots.
 Here’s what he actually did: “For each player we determined a distance from which his or her shooting percentage was roughly 50%. At this distance we then drew two 15-ft arcs on the floor from which each player took all of his or her shots. The centers of the arcs were located 60° out from the left and right sides of the basket. When shooting baskets, the players were required to move along the arc between shots so that consecutive shots were never taken from exactly the same spot. Each player was to take 100 shots, 50 from each arc.”
 According to Miller and Sanjurjo (2016-11-16), “Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers”, Table 2 (p. 18).
 According to Miller and Sanjurjo (2015-06-11), “Is it a Fallacy to Believe in the Hot Hand in the NBA Three-Point Contest?”
 These figures are again from Miller and Sanjurjo (2015-06-11).
 I should emphasize that it is merely my opinion that researchers haven’t been able to detect the hot hand in actual NBA games. Bocskocsky, Ezekowitz, and Stein (2014) for example claim to have detected the hot hand by controlling for shot difficulty and other factors.
However, I do not have great confidence in that paper as they seem to be confused about what a “percentage point” is. In their abstract, they state, “Our estimates of the Hot Hand effect range from 1.2 to 2.4 percentage points in increased likelihood of making a shot.” Many who didn’t read the paper carefully, Andrew Gelman for example, took this at face value.
But in the main text, they actually state instead, “A player who makes one more of his past four shots sees his shooting percentage increase by 0.54 percentage points. Given that the average NBA player has a field goal percentage of about 45%, this represents about a 1.2% improvement. In the same vein, if a player makes two more of his past four shots (perhaps more indicative of what it truly means to be “hot”), we see a 2.4% improvement.”
But even ignoring this error, I would not consider a measured hot-hand effect of 0.54 percentage points to be significant in any sense other than the narrow one of statistical significance.
 2016-2017 regular season. From the NBA.com player box score search, I found 1,114 player-games with 0 FGA; 1,461 with 1; etc. I believe these include all player-games where the player was on the court for at least one second. See this spreadsheet. The average was 8.04 and the median was 7.
 The data here are from https://fansided.com/2017/05/04/expanded-free-throw-splits-since-1997/.
 To my knowledge, this phenomenon has not been seriously studied.
 Gilovich, Vallone, and Tversky (1985) actually looked at such free-throw data, but they looked only at nine Celtics players across two seasons (1981 and 1982). As usual, they ran into the problem of small sample sizes: their study was woefully underpowered and set up to fail. But even so, they found that McHale was 14 percentage points more likely to make his second free throw if he had made his first, while ML Carr was 13 percentage points less likely to do so. However, these apparently did not cross the sacred threshold of statistical significance and thus did not merit even a passing remark in the text.
Yaari & Eisenmann (2011) redo the analysis but with five seasons’ worth of data (2006-2010). They find that the hot hand indeed exists.
I have actually also done a little bit of the analysis myself across additional seasons. Excel files: 2017 season alone and the past 11 seasons (2007-2017) combined. In the combined data, we see for example that TAFKADH was 5.8 percentage points more likely to make the second free throw if he had also made the first. But this may merely be some Simpson’s Paradox type effect, and more analysis is needed.