This week's mailbag features your questions on James Harden's and Russell Westbrook's total points created, All-NBA selections as All-Star snubs, how much the tourney matters for the draft and more.
You can tweet your questions using the hashtag #peltonmailbag or email them to peltonmailbag@gmail.com.
@kpelton any evidence that conf/ncaa touneys are more, less, or equally predictive of nba success? Should scouts even watch these games?
-- Sean Derenthal (@OdetoOden) March 9, 2017
Let's start with the evidence that NBA teams are paying close attention to these games, as you might expect. A 2011 study by David J. Berri, Stacey L. Brook and Aju J. Fenn found that, on average, a player who reaches the Final Four during his final season is selected 11 picks higher than a player with identical stats whose team did not reach the Final Four.
However, they found no link between reaching the Final Four and subsequent production as measured by Berri's win score statistic, leading them to conclude that NBA teams were overvaluing deep tournament runs.
I found something different when I studied the issue using my own draft database. Looking at players drafted between 2003 and 2011 (the last group that has completed the five NBA seasons I use in my projections), I compared the expected wins above replacement player (WARP) from a player drafted in that spot to what players actually produced and sorted by how far their team advanced during their final college season. Here's how that looks.
My method likewise showed players being drafted earlier following deeper tournament runs than their statistical projections alone would predict -- yet players who reached the Final Four still outperformed their draft slots. So perhaps teams are right to value tournament success.
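The comparison described above can be sketched in a few lines: group players by how far their team advanced, then average the gap between actual production and the production expected from their draft slot. The records below are hypothetical placeholders, not the actual draft database.

```python
from collections import defaultdict

# Each record: (expected WARP for that draft slot over five seasons,
#               actual WARP produced, furthest NCAA tournament round).
# All numbers here are invented for illustration only.
players = [
    (8.0, 11.5, "Final Four"),
    (6.5, 9.0, "Final Four"),
    (7.0, 6.0, "Sweet 16"),
    (5.0, 4.5, "No tournament"),
]

# Collect (actual - expected) differences by tournament finish.
diffs = defaultdict(list)
for expected, actual, finish in players:
    diffs[finish].append(actual - expected)

# A positive average means players with that finish outperformed
# their draft position, as the column finds for Final Four players.
for finish, d in diffs.items():
    print(f"{finish}: avg WARP above draft-slot expectation = {sum(d)/len(d):+.2f}")
```

With real data, a persistently positive average for Final Four players would support the conclusion that teams are justified in moving them up their boards.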
While it's easy to think of players who boosted their stock with NCAA tournament runs and then fell short of NBA expectations, looking specifically at NCAA champions yields plenty who were drafted later than they should have been. Danny Green, Ty Lawson, Joakim Noah and Kemba Walker are four such players.
Of course, this doesn't completely answer your question. Looking at whether individual tournament performances like De'Aaron Fox outplaying Lonzo Ball head-to-head are overrated is beyond the scope of this column, as are conference tournaments. But it does suggest teams have been justified in paying close attention to the Final Four.
"What are some of the top seasons of points created (points plus points from assists) in NBA history? Where do LeBron, Harden, Westbrook and Wall stack up in those top seasons? What are your general thoughts on this stat? Should we be using it more?"
-- Wes Franson
With eight games left to play, Westbrook currently ranks 11th all-time on this single-season leaderboard with Harden (who's got seven games remaining) just behind him. It's likely that both will pass Isiah Thomas and Michael Jordan for the best single-season total since Tiny Archibald led the NBA in both points and assists per game in 1972-73.
Should we be using this stat more? No, I don't think so. Treating assists and points equivalently doesn't really make sense (for one thing, at the team level, this means an assisted field goal is equivalent to two unassisted ones) and it doesn't account for efficiency, playing time or pace of play. There's a reason the leaders here, as in most raw stats, are almost entirely from the pre-merger NBA. To me this is more of a junk stat -- something that can be interesting to look at or illustrative but is not a good measure of overall value.
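For readers who want to play with the stat anyway, here is a minimal sketch. Box scores don't record how many points each assist actually produced, so this assumes a fixed two points per assist -- a common simplification that slightly undercounts assisted threes and and-ones; the function name and default are my own, not an official definition.

```python
def points_created(points, assists, points_per_assist=2.0):
    """Points created: a player's own points plus the points
    generated by his assists. Since box scores don't say which
    shots each assist set up, we approximate with a flat value
    per assist (2.0 by default)."""
    return points + assists * points_per_assist

# Hypothetical per-game line, not any player's actual stats:
print(points_created(30, 10))  # 30 + 10 * 2.0 = 50.0
```

Note how the flat per-assist value illustrates the double-counting problem: an assisted two-pointer adds four team "points created" (two for the scorer, two for the passer), while an unassisted one adds only two.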
"What's the history of All-NBA selections that were not also selected to the All-Star Game in that same year? Is Damian Lillard on the verge of doing that for a second year in a row? Even though he was considered an All-Star 'snub' both of the last two years, it does appear he also has played better after the break both years. Is there some precedent to certain players consistently playing better after the All-Star break?"
-- Jesse Arick
Consider me skeptical that Lillard will ultimately make it, but he's certainly got a better chance than the typical non-All-Star. Three players have done this twice since All-NBA expanded to three teams in 1987-88: Carmelo Anthony, DeAndre Jordan (the only one to do it back-to-back the past two seasons) and Kevin Johnson.
I'd say it typically has more to do with positional imbalance than turning it on in the second half per se. Jordan has benefited from the relatively thin crop of centers, while Johnson was left off the All-Star roster because the West was deep with point guards during his prime. That's certainly a factor with Lillard too, but I guess that leaves Anthony as the best example of a player regularly playing better after the break.
"What is the basketball equivalent of the save metric as it is discussed in the Sam Miller article you tweeted recently?"
-- Trent Gill
Hmm. While there's plenty of stat chasing in basketball, I'm not sure I can think of an invented stat that has had as much impact on how the game is actually played as the save has in baseball. (If you have ideas, feel free to suggest them.)
Ultimately, I'd say that games started -- despite being the literal opposite of the save -- is probably the closest equivalent. The figurative importance of starting sometimes makes it difficult for coaches to best balance their rotations. I've wondered whether there's a way to track "games finished" in the same way as starts to give credit to reserves who are on the court for crunch time. But that would probably create the same problems as the save rule.
Anyway, I'd suggest reading the linked story about Cleveland Indians reliever Andrew Miller and his willingness to buck the conventional usage of closers. Sam Miller is such a compelling writer, and the ideas he tackles so universal, that I'm a devoted reader despite not following baseball closely enough anymore to really care about the subjects.
"I liked your post about what to expect from Kevin Durant after his return based on the historical precedent set by others with a grade 2 MCL sprain. You note that the average player tends to perform about as well as expected after the injury. I'm curious about the worst-case and best-case scenarios as well and how widely disparate they are, in addition to just the average. What was the worst post-injury performance in your data and the best, and how much did they diverge from expectations?"
-- Dan Weiss
Thanks, Dan! The reason I didn't include the distribution is that it was symmetric around the average, which suggests to me that it's not so much a "best case" or "worst case" as the typical variation in performance any time we look at a small sample.
I don't think there's any particular reason to believe that the players who performed better than expected were somehow healthier after an MCL sprain (though I suppose there may be a benefit in terms of rest), and in the same way we probably shouldn't attribute underperformance to the injury either. This is a case in which I'd invoke my razor: Never attribute to causal relationships that which can adequately be explained by random chance.