Something was conspicuous by its absence from my TrueHoop post earlier this week on how I watch summer league: the role of player statistics. That's because I've never seen a study answering the question of whether summer-league stats translate to regular-season performance ... until now.
The arguments against the value of summer performance are obvious. Just look at a list of past MVPs from the NBA Summer League in Las Vegas, which includes Jerryd Bayless, Nate Robinson and Josh Selby alongside future stars like Blake Griffin, Damian Lillard and John Wall. Yet if strong performances in Vegas (and Orlando) haven't guaranteed success in the NBA, rough performances have proven ominous in the case of top picks like Thomas Robinson and Evan Turner.
To assess the predictive value of summer-league statistics more thoroughly, I compiled last season's numbers for players who participated in both NBA summer leagues, then compared them to how those same players played during the 2012-13 regular season. The results confirm some assumptions about summer-league stats, but discredit others.
More meaningful for rookies
On TrueHoop, I explained that the way I watch NBA veterans in summer league is different from my philosophy with rookies. It turns out the numbers back that up. Overall, for players who logged at least 100 minutes in summer league, the correlation between their summer and regular-season ratings is just .288, a relatively small figure on a correlation scale that runs from zero (no linear relationship) to one or minus one (a perfect linear relationship).
However, that figure gets more interesting when it's broken down by NBA experience. Veterans who were in the league the previous season show a much smaller correlation of .101 between 2012 summer performance (as measured by my per-minute win percentage rating) and their 2012-13 regular-season performance. Once we account for how well these players were projected to play in 2012-13, their summer-league stats add no predictive value.
By contrast, the correlation of .463 for rookies is far higher. In fact, it's nearly as strong as the relationship between my college stat translations and rookie performance (.468). When we combine the two factors to try to predict how well players will fare as rookies, summer-league stats make up about a quarter of the combined projection.
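For readers who want to see the mechanics, here's a minimal sketch of that exercise. The file name, column names and the exact blending weights are my own illustrative assumptions, not the actual inputs to my win percentage ratings; the real projection combines the factors more carefully than a simple weighted average.

```python
# Sketch of the correlation/projection exercise described above.
# Assumes a hypothetical CSV with one row per player: per-minute win
# percentage in 2012 summer league and 2012-13, summer-league minutes,
# a college stat translation, and prior NBA seasons played.
import pandas as pd

df = pd.read_csv("summer_league_2012.csv")  # hypothetical file

# Only players with at least 100 summer-league minutes, per the article.
qualified = df[df["sl_minutes"] >= 100]

# Overall correlation between summer and regular-season ratings (~.288).
print("overall:", qualified["sl_winpct"].corr(qualified["rs_winpct"]))

# Split by experience: veterans (~.101) vs. rookies (~.463).
for is_vet, sub in qualified.groupby(qualified["nba_seasons"] > 0):
    label = "veterans" if is_vet else "rookies"
    print(label, sub["sl_winpct"].corr(sub["rs_winpct"]))

# Blended rookie projection: roughly one part summer-league rating to
# three parts college translation (weights are illustrative only).
rookies = qualified[qualified["nba_seasons"] == 0].copy()
rookies["blend"] = 0.25 * rookies["sl_winpct"] + 0.75 * rookies["college_translation"]
print("blended rookies:", rookies["blend"].corr(rookies["rs_winpct"]))
```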
The difference might be explained by motivation. Rookies miss summer league only because of injury, but for talented second-year players, a trip to Vegas or Orlando can feel like punishment. Motivation matters. Cleveland Cavaliers guard Dion Waiters, the No. 4 overall pick last June, might be the most accomplished player seeing action this summer, but he sleepwalked through his first couple of games before turning up his effort level.
Summer league was more predictive in the case of Lillard, who followed up his NBA Summer League MVP trophy by winning Rookie of the Year nine months later. And Robinson's poor showing in Vegas -- among rookies who logged more than 100 minutes, he rated ahead of only Khris Middleton and Marquis Teague -- was the first hint that he might struggle through a rookie season that saw him traded twice in a five-month span.
What translates
Dean Oliver of ESPN Stats & Info has explained that a human scout can assess an individual game better than the box score, but the upside of statistics is that they "see" every game. That's not much of an advantage during the summer league, when teams generally play five games apiece. (This year's tournament format for the NBA Summer League means some teams in Las Vegas will play more.)
Not counting players who saw action in both Orlando and Las Vegas, Harrison Barnes led all players with 168 summer minutes -- still well short of 250 minutes, the lowest threshold I would ever use during the regular season. For comparison purposes, I looked at the first 11 days of the 2012-13 regular season, during which teams played about five games apiece. Over that span, the NBA's top five players by WARP included LeBron James, Chris Paul, Tim Duncan, Kevin Durant ... and Marcin Gortat, demonstrating the volatility of five-game samples.
Comparing how well each stat correlates between summer league and the regular season with how well it correlates between the first five regular-season games and the rest of the schedule can help separate the factors that are nothing more than small-sample randomness from those that reflect the different style of play and level of competition in summer league.
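A rough sketch of that comparison is below. It assumes a hypothetical per-player table with each stat computed over three splits (summer league, first five regular-season games, remainder of the season); the column naming scheme and the stat list are assumptions for illustration, not my actual database.

```python
# Contrast how each stat translates from summer league to the regular
# season with how it translates from the first five games to the rest
# of the schedule (the small-sample baseline).
import pandas as pd

df = pd.read_csv("player_splits_2012_13.csv")  # hypothetical file

stats = ["blk_rate", "dreb_rate", "oreb_rate", "stl_rate",
         "usage", "fg2_pct", "fg3_pct"]

rows = []
for stat in stats:
    rows.append({
        "stat": stat,
        # Summer-league rate vs. full regular-season rate
        "summer_vs_regular": df[f"sl_{stat}"].corr(df[f"rs_{stat}"]),
        # First five games vs. remainder of the season
        "first5_vs_rest": df[f"g5_{stat}"].corr(df[f"rest_{stat}"]),
    })

print(pd.DataFrame(rows).sort_values("summer_vs_regular", ascending=False))
```

Stats whose summer-to-regular correlation holds up against the five-game baseline (like blocks and defensive rebounding, below) are the ones worth trusting; those that fall short are mostly noise.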
Shot-blocking and defensive rebounding translate particularly well from summer league to the regular season. That's good news for players like Milwaukee's John Henson who have been controlling the glass and blocking shots.
Conversely, summer shooting tends to be even less predictive than small-sample randomness alone would suggest. The better players shot from 3-point range during last season's summer league (minimum 20 attempts), the worse they shot come November. Accurate 2-point shooting also failed to translate, along with offensive rebounding, steals and even usage rate.
Even stat-haters can be swayed by big performances that are obviously unsustainable. Selby won MVP honors last summer on the strength of fluky 27-of-42 shooting from beyond the arc. He ended up making just one of his six 3-point attempts during the regular season, playing 59 minutes for the Memphis Grizzlies before he was traded to Cleveland and waived in March.
Selby's experience is the perfect example of the summer-league stats that don't matter. But handled with appropriate care, the numbers can shed light on which performances in Las Vegas (and Orlando) are more meaningful than mirage.