This week's mailbag features your questions on advanced stats references, evaluating coaches and more.
You can tweet your questions using the hashtag #peltonmailbag or email them to peltonmailbag@gmail.com.
"Since we are nearing the end of the season and award season is coming up, it got me wondering, why is there an All-Defensive team but not an All-Offensive team. So who would make your All-Offensive team this year?"
-- Midilan Sivayoganathan
So this was a fun exercise that I think also showed why All-Offensive teams aren't really necessary. Here were my picks:
First Team
SF: Kevin Durant, Golden State Warriors
Second Team
SG: Lou Williams, LA Clippers
SF: Jimmy Butler, Minnesota Timberwolves
The two hardest calls here were point guard, where Stephen Curry has been far and away the best offensive player but in fewer minutes because of injuries; and center, where you could just about flip a coin between Nikola Jokic and Karl-Anthony Towns. There were also a lot of deserving candidates for the second spot at shooting guard, most notably DeMar DeRozan and Victor Oladipo.
So why not pick All-Offensive teams? Of the 12 players I've mentioned as candidates, only Williams is not realistically in consideration for an All-NBA spot. For the most part, the All-NBA teams are the All-Offensive teams. And that's OK.
While I'd imagine defense probably does get undersold a little in All-NBA voting, the evidence suggests that elite offense is somewhat more valuable than elite defense. Results of pure adjusted plus-minus (including RAPM, the forerunner of ESPN's real plus-minus that considers only team performance with and without a player) show a wider spread of ratings on offense than defense -- that is, the best offensive players help their offenses more than the best defensive players help their defenses.
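If you want to check that spread comparison against your own numbers, here's a minimal sketch: compute the standard deviation of players' offensive and defensive ratings and see which is wider. The file name and column names (rapm.csv, orapm, drapm) are hypothetical placeholders, not an actual data source.

```python
# Minimal sketch: compare the spread of offensive vs. defensive player ratings.
# The file "rapm.csv" and columns "orapm"/"drapm" are hypothetical placeholders --
# substitute whatever adjusted plus-minus data you have on hand.
import csv
import statistics

with open("rapm.csv", newline="") as f:
    players = list(csv.DictReader(f))

offense = [float(p["orapm"]) for p in players]
defense = [float(p["drapm"]) for p in players]

# A larger standard deviation on offense would mean the best offensive players
# separate themselves from the pack more than the best defenders do.
print("Offensive spread:", round(statistics.stdev(offense), 2))
print("Defensive spread:", round(statistics.stdev(defense), 2))
```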
@kpelton Coach of the Year thought. Since it's often expectation vs result, would you be able to plug in actual minutes played to your preseason projection model and see which teams overachieved the most?
— Paul M (@monsone87) April 4, 2018
Let's start with a note of caution about this exercise. There are a couple of issues with evaluating coaches based on how much their teams exceed projections. First, for teams that retain most of their roster, to some extent coaching is baked into the projection. That is, if players performed better in the past because of their coaching, we're already going to expect that to carry over.
Second, we're attributing any changes in performance to coaching, which obviously isn't accurate. Players can improve their physical capabilities or their skills individually over the offseason.
Those caveats noted, let's take a look at projections using the multiyear version of ESPN's real plus-minus (RPM), based on actual minutes played this season, and compare them to teams' actual records and their Pythagorean expectation given point differential. (All stats are through Friday and projected to 82 games.)
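As a quick reference, the Pythagorean piece converts points scored and allowed into an expected winning percentage. Here's a minimal sketch; the exponent is an assumption for illustration, since published versions of the formula use different values (for example, 14 at Basketball-Reference or 16.5 in Hollinger's work).

```python
def pythagorean_wins(points_for: float, points_against: float,
                     games: int = 82, exponent: float = 14.0) -> float:
    """Expected wins from points scored and allowed.

    The exponent is an assumption for illustration; published versions use
    values such as 14 (Basketball-Reference) or 16.5 (Hollinger).
    """
    win_pct = points_for ** exponent / (
        points_for ** exponent + points_against ** exponent)
    return games * win_pct

# A team scoring 110.0 and allowing 108.0 points per game (a +2.0 differential)
# projects to roughly 46 wins over 82 games with this exponent.
print(round(pythagorean_wins(110.0, 108.0), 1))
```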
While nearly half of all teams are on track to come in within three games of their RPM projection using actual minutes, there is an unusually large number of significant outliers this season. And, unsurprisingly, the list of Coach of the Year candidates mirrors the top of these rankings.
Because of Gordon Hayward's absence and other injuries, the Boston Celtics' projection -- already too low entering the season -- dropped all the way to 38 wins using actual minutes, so the Celtics will have the largest difference in absolute terms. But the Toronto Raptors have overperformed their projection more in terms of point differential, with the Indiana Pacers not far behind Boston.
"The Cleveland Cavaliers are 18 games over .500 yet have a scoring margin of less than plus-1.0. I have to imagine that such a imbalanced record vs. scoring margin is a fairly uncommon occurrence in NBA history. Is that true?"
-- Tarek Mohamed
Typically, a team's point differential will closely track its winning percentage, and the Cavaliers are indeed an outlier. Here's a graph of win percentage vs. margin of victory this season (through Friday):
That holds up on a historical level too. No team since the ABA-NBA merger has won more than 48 games (or the equivalent in a lockout-shortened season) with a point differential worse than plus-1.3 points per game. The 2011-12 L.A. Lakers are probably the best match for what Cleveland has done this season. Those Lakers went 41-25 (a 51-win pace) with a plus-1.4 differential. After beating the Denver Nuggets in a seven-game series, those Lakers lost to the Oklahoma City Thunder 4-1 in the second round.
The Cavaliers winning the Eastern Conference would also buck historical trends. No team has made the NBA Finals with a point differential worse than plus-3.2 (Cleveland's mark last season) since the New York Knicks in 1999 (plus-1.0) after a lockout-shortened season. Just three other teams since the merger (the 1978 Seattle SuperSonics and Washington Bullets, and the 1981 Houston Rockets) have made it with a point differential worse than plus-2.
"I was looking around different NBA sites that track team performance, and I noticed that ESPN's 'Hollinger stats' and Basketball-Reference stats disagreed on which teams have the best offensive and defensive ratings. In Hollinger's stats, the Golden State Warriors actually had a better offense and defense than the Houston Rockets, for all the discussion about how GSW had regressed this season. Why do these discrepancies exist? And which sites do you trust for meaningful predictive value?"
-- Andy Liu
So Andy's specific observation is no longer true after Golden State's injuries, but answering this question led to an interesting discovery. I knew that the two sites used different formulas to estimate possessions. The Hollinger version, also used by NBA Advanced Stats, estimates possessions using a relatively simple formula: FGA + 0.44 * FTA - OR + TO. This overestimates the number of possessions because it doesn't account for team rebounds: plays where the offensive team retains possession of the ball by virtue of a loose-ball foul or the defense tipping the ball out of bounds, so no individual player is credited with an offensive rebound.
To adjust for that, Basketball-Reference.com uses a more complicated formula that factors in the team's offensive rebound rate and the number of missed shots available: FGA + 0.4 * FTA - 1.07 * (OR / (OR + OppDR)) * (FGA - FG) + TO. However, the differences in teams' relative positioning under those two formulas are relatively small -- too small to account for the discrepancy Andy observed between the Rockets and Warriors.
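To make the comparison concrete, here's a sketch of both estimates using the box-score quantities named above. The input numbers are made up for illustration, not an actual game line.

```python
def possessions_simple(fga, fta, orb, tov):
    """Hollinger/NBA Advanced Stats-style estimate: FGA + 0.44 * FTA - OR + TO."""
    return fga + 0.44 * fta - orb + tov

def possessions_bbr(fga, fg, fta, orb, opp_drb, tov):
    """Basketball-Reference-style team estimate: scales the offensive rebounds
    removed by the team's offensive rebound rate and its available misses."""
    orb_rate = orb / (orb + opp_drb)
    return fga + 0.4 * fta - 1.07 * orb_rate * (fga - fg) + tov

# Made-up single-game box score line:
print(round(possessions_simple(fga=88, fta=22, orb=10, tov=14), 1))        # 101.7
print(round(possessions_bbr(fga=88, fg=40, fta=22, orb=10,
                            opp_drb=34, tov=14), 1))                       # 99.1
```

In this made-up line, the simple formula comes out about 2.5 possessions higher, consistent with the overestimate described above.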
What I hadn't realized is that Basketball-Reference also averages team and opponent possessions to calculate offensive and defensive ratings. While I understand the logic that large differences between two teams' possession counts in a single game are probably just random noise owing to those uncounted team rebounds, over time it is possible for teams to build up a surplus of possessions by effectively managing 2-for-1 opportunities. So I think the Hollinger model is a little better at capturing offensive and defensive performance.
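Here's the same idea in miniature: offensive rating is points per 100 possessions, and the two conventions differ only in whether the denominator is the team's own estimated possessions or the average of team and opponent possessions. The season totals below are made up for illustration.

```python
def rating(points, possessions):
    """Points per 100 possessions."""
    return 100.0 * points / possessions

# Made-up season totals for a team that banks a few extra possessions
# (e.g., by managing 2-for-1 opportunities well).
team_pts, team_poss = 9100, 8120
opp_pts, opp_poss = 8900, 8060

# Offensive rating on the team's own possessions (Hollinger-style):
print(round(rating(team_pts, team_poss), 1))                     # 112.1
# Offensive rating on averaged possessions (Basketball-Reference-style):
print(round(rating(team_pts, (team_poss + opp_poss) / 2), 1))    # 112.5
```

Averaging washes out the 60-possession edge this hypothetical team created, which is the reason given above for preferring the Hollinger approach.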
The inevitable follow-up here is why, in a world in which play-by-play data allows us to track the actual number of possessions, we're bothering to estimate them at all. Actual possession data is available on CleaningTheGlass.com, which also excludes garbage time and end-of-quarter heaves. The challenge is comparing those numbers historically to teams before the advent of play-by-play data. I like that historical perspective, which is why I continue to use the estimates. Your mileage, as always, may vary.