"A good scout, a good analyst, they go and they watch the game. Your eyes see the game much better than the numbers. But the numbers see all the games. And that's a big deal." That's what Dean Oliver, statistician, author of the seminal "Basketball on Paper," onetime head of analytics for ESPN's Stats & Information Group and former front-office employee for multiple NBA teams, told Bleacher Report's Howard Beck back in 2015. I've seen Oliver give a similar quote many times, and I always get jealous that I didn't come up with it first.
Numbers see all the games. They connect the dots in a way we never could. While people sometimes try to treat them as the end of the conversation, I like to think of them more as a grounding force, a way to start the conversation in the right place. And I can't stop myself from wishing they played more of a role in the College Football Playoff rankings.
To be sure, numbers are involved. The CFP committee is given a lovely dashboard for comparing teams, with green and red dots to signify superiority in certain categories, and the members' references to terms like "game control" or "strength of record" show that they internalize the numbers they see to some degree.
Plus, as I always feel the need to mention when quibbling over how the committee does its job, it mainly gets it right when it comes to picking the playoff's participants. I may have had minor complaints about a few of the 32 teams that have made the playoff to date. I wished Cincinnati had gotten more of a look in 2020 (mainly at Notre Dame's expense), and I wouldn't have complained if Baylor or TCU had gotten the fourth spot in 2014 instead of Ohio State (though I didn't have a problem with the Buckeyes either). I was annoyed by the treatment of unbeaten UCF in 2017, though I probably wouldn't have put the Knights in the top four. Still, the committee's picks have been more than acceptable. So when we complain, it's more about process than outcome.
The process still matters, though. And the importance of the process will certainly not be diminished in the future, when the committee is tasked with choosing 12 teams to play for the national title instead of four (and choosing which of those teams will receive byes or first-round home games). The CFP proudly refuses to use power rankings or résumé rankings of any kind in any direct fashion, choosing instead to throw numbers at its committee and ask each member to create their own, untested power ratings in their respective heads. That is never going to be the most advisable approach.
So as we work through the week-to-week rankings routine, knowing that in the end the result will likely be acceptable and predictable, let's think about the role numbers could play in this process.
Résumé tools
One of the tools the committee has at its disposal is strength of record, the ESPN Stats & Info Group's measure of the probability that an average top-25 team could achieve a given team's record given the same schedule. It doesn't look at a team's margin of victory -- this sport's decision-makers have always been strangely terrified of Team A running up the score on Team B in an unsporting way, god forbid -- but it does a nice job of providing an at-a-glance résumé evaluation.
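To make the idea concrete, here's a minimal sketch of how a strength-of-record-style probability could be computed. This is not ESPN's actual methodology; it simply assumes we have hypothetical per-game win probabilities for a generic top-25 team and asks how likely that team would be to match or beat the record in question (a Poisson binomial tail, built by dynamic programming).

```python
def record_distribution(win_probs):
    """Probability distribution over total wins, given independent
    per-game win probabilities (a Poisson binomial, built by DP)."""
    dist = [1.0]  # dist[k] = P(exactly k wins so far)
    for p in win_probs:
        new = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            new[k] += prob * (1 - p)  # lose this game
            new[k + 1] += prob * p    # win this game
        dist = new
    return dist

def strength_of_record(win_probs, actual_wins):
    """P(a generic top-25 team matches or beats the actual record)."""
    dist = record_distribution(win_probs)
    return sum(dist[actual_wins:])

# Hypothetical 9-game schedule: per-game win chances for a generic top-25 team
probs = [0.9, 0.8, 0.6, 0.95, 0.5, 0.7, 0.85, 0.4, 0.75]
print(round(strength_of_record(probs, 9), 4))  # going 9-0 is hard to match
```

The intuition is the payoff: a 9-0 record against a tough slate yields a tiny probability, meaning a strong résumé, while the same record against cupcakes yields a much larger one.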
Years ago, before I joined ESPN, I began taking my own stab at something similar with what I call résumé SP+. While my SP+ ratings, like ESPN's FPI, are designed to be predictive and forward-facing, résumé SP+ attempts to look backward at how a team has played against whom it has played. It is a look at two things: (1) how the average SP+ top-five team would be projected to perform against a given team's schedule -- in terms of scoring margin (which I cap at 50 points) instead of straight wins and losses -- and (2) how the team's scoring margin compares to that projection. Throw in a seven-point penalty for every loss a team has suffered (because losses matter on the résumé!), and we have what the CFP rankings would look like if SP+ were in charge.
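The description above can be sketched in a few lines of code. All inputs here are illustrative: `games` pairs each actual scoring margin with the margin an average SP+ top-five team would be projected to post against the same opponent, margins are capped at 50 points, and each loss costs a flat seven points.

```python
def resume_sp_plus(games, cap=50, loss_penalty=7):
    """Loose sketch of the résumé SP+ idea described in the text.
    `games` holds (actual_margin, elite_projected_margin) pairs;
    the projection is how an average SP+ top-five team would be
    expected to fare against the same opponent. Illustrative only."""
    score = 0.0
    losses = 0
    for actual, projected in games:
        # Cap margins so blowouts stop accruing extra credit
        score += min(actual, cap) - min(projected, cap)
        if actual < 0:
            losses += 1
    # Losses matter on the résumé: flat penalty per defeat
    return score - loss_penalty * losses

# Hypothetical 3-game slate: beat the projection once, miss it twice
print(resume_sp_plus([(28, 21), (10, 14), (-3, 7)]))  # -14.0
```

In this toy slate, the seven points gained in the first game are wiped out by the two underperformances, and the loss penalty drags the total down further, which is the point: the metric rewards beating an elite team's expected margin, not just winning.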
Ohio State came into Week 10 ranked first in résumé SP+, but the Buckeyes' sluggish win over Northwestern, combined with Georgia's surprisingly easy victory over Tennessee, moved the Dawgs to the top spot. Here is this week's top 15:
1. Georgia (9-0)
2. Ohio State (9-0): 0.4 points behind
3. Michigan (9-0): 8.1 points behind
4. Tennessee (8-1): 12.5 points behind
5. TCU (9-0): 15.4 points behind
6. Alabama (7-2): 19.7 points behind
7. Oregon (8-1): 22.2 points behind
8. Ole Miss (8-1): 23.7 points behind
9. USC (8-1): 24.9 points behind
10. UCLA (8-1): 26.9 points behind
11. LSU (7-2): 27.9 points behind
12. Clemson (8-1): 28.0 points behind
13. Tulane (8-1): 28.9 points behind
14. Penn State (7-2): 29.8 points behind
15. Utah (7-2): 30.1 points behind
While I would personally rank unbeaten TCU fourth instead of Tennessee and one-loss Oregon ahead of Alabama, I don't have too many quibbles with this.
What if both strength of record and résumé SP+ were more direct pieces of the rankings process? What impact might that have? To fiddle with an answer, I decided to go back in time.
Return of The Formula
The College Football Playoff and all of its processes replaced the Bowl Championship Series as the mechanism for determining the national champion. The BCS (and its rankings formula) had become a source of frustration for many college football fans, but while the formula itself often drew ire, the main problem wasn't the formula but its task: It could choose only two teams to play for the national title. For many of the BCS' 16 seasons, there were more than two deserving teams.
Anytime the BCS selections caused controversy, the sport's decision-makers seemed to panic and change the formula. They changed the weighting of Factor A and Factor B. They changed which computer rankings were involved, then forced said rankings to stop using margin of victory in any way, even though margin is more of a signifier of quality than wins and losses alone. (Case in point: Georgia has handed both Tennessee and Oregon their only losses of the current season, but the Vols lost by 14 and the Ducks lost by 46. That seems like a relevant difference.)
By the end of the BCS era, the formula had come around to a pretty simple overall construct: two polls were worth two-thirds of the rankings, and a set of (neutered) computer rankings accounted for one-third.
As a thought exercise, let's go back to that for a moment and see what it produces. What if we took the two most prominent college football polls (the AP poll and the Coaches poll) for the former piece and four computer ratings for the latter -- FPI and SP+ (because they are two of the most accurate ratings available and very familiar to me) for team quality, plus strength of record and résumé SP+ (ditto) for proper résumé evaluation? This gives us the eyes of professional football writers and coaches, it gives us the emotionless, watch-every-game aspect that numbers perform better than humans, and it accounts for the ongoing and often conflicting "best vs. most deserving" debate that impacts any tournament selection. The committee always says it is choosing "best"; its constant and justifiable references to résumés prove it chooses "most deserving."
(Since we're playing around in Imagination Land, we don't have to consider the potential conflict-of-interest issues arising from ESPN-based formulas controlling the rankings and can just focus on what the formula produces. I love Imagination Land. My Twitter mentions are much kinder there.)
Here's what this formula would have produced last week for the first iteration of the CFP rankings. The aforementioned computer ratings, listed as percentiles, each have their own column, as does the poll average, which is calculated by the percentage of points a team got compared to the maximum available. The CFP BCS column is the final total based on the formula of two-thirds for the polls and one-third for the computers. The actual CFP rankings, and the differences between the two, are on the far right.
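The blend itself is simple enough to sketch. The weights come straight from the late-BCS construct described above (two-thirds polls, one-third computers); the poll totals and percentiles below are hypothetical numbers, not the real Week 10 data.

```python
def formula_score(poll_points, max_poll_points, computer_percentiles):
    """Faux-BCS blend from the text: polls count two-thirds, computers
    one-third. `poll_points` holds a team's AP and Coaches poll points;
    percentiles run 0-100. All inputs here are illustrative."""
    # Poll average: share of the maximum available points in each poll
    poll_avg = sum(p / max_poll_points for p in poll_points) / len(poll_points)
    # Computer average: mean percentile, rescaled to 0-1
    computer_avg = sum(computer_percentiles) / len(computer_percentiles) / 100
    return (2 / 3) * poll_avg + (1 / 3) * computer_avg

# Hypothetical team: near-unanimous in both polls, elite computer numbers
print(round(formula_score([1540, 1525], 1575, [99.8, 99.5, 98.9, 99.2]), 4))
```

Computing this score for every ranked team and sorting descending would reproduce the CFP BCS column described above.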
The formula and actual rankings aligned quite well -- so well, in fact, that it made the outliers stand out quite a bit. Tennessee and especially LSU were both given boosts by the CFP committee, primarily at the expense of Georgia and, further down, a couple of the Pac-12's better teams (UCLA and Utah). The committee's odd boosting of Clemson over Michigan also stands out a bit considering how much distance there was between the two teams in this faux-BCS formula.
Now that we've digested that, let's see what has changed for Week 11 after Georgia's big win, LSU's upset of Alabama, Clemson's loss to Notre Dame and everything else.
This is nice and clean at the top, with the four unbeaten teams taking the top spots, followed by each of the two teams that have lost only to Georgia. In the Group of 5 space, Tulane and UCF both rose in the rankings, setting the table nicely for the teams' Saturday showdown in New Orleans, while Liberty jumped from 30th to 22nd following the Flames' win over Arkansas.
This week's most interesting teams
Looking at the biggest differences between what this formula produced and what the actual CFP committee released, the rankings for both LSU and Tennessee could tell us a lot this week.
How far does LSU rise? While Georgia's win over Tennessee didn't change much in the rankings, two Week 10 results did. Notre Dame's stomping of Clemson both bumped the Fighting Irish up 11 spots and dropped the Tigers seven. Meanwhile, LSU's win over Bama knocked the Crimson Tide down three spots but bumped the Tigers up eight.
Despite its two losses, LSU ranks seventh overall in the formula. It would make sense if the committee placed the Tigers there Tuesday night, directly behind the unbeatens and if-not-for-Georgia unbeatens, but could the committee go even further than that? After all, it was already much higher on the Tigers last week than the formula -- actually, let's start capitalizing that: The Formula -- ranking them in the top 10 despite both a loss to an unranked Florida State team and a blowout loss at home to Tennessee.
I was most thrown off by Clemson's No. 4 ranking last Tuesday, but in retrospect LSU was by far the biggest outlier of the week. The Tigers likely ranked 10th because of a clunky desire to place them ahead of the Ole Miss team they had just beaten. (That's the same desire that got Oregon ranked ahead of Ohio State for a while last year despite the fact that the Ducks' on-field quality collapsed quickly the moment they returned from Columbus after their upset win over the Buckeyes.) But now LSU has scored another huge, résumé-boosting win. The committee members wouldn't rank it ahead of Oregon, would they?
How far does Tennessee fall? It's noteworthy that Tennessee still ranks second in strength of record, a measure we know the committee takes into account to some degree. The Volunteers clinched "great résumé" status by destroying LSU in Baton Rouge in Week 6, then knocking off Alabama at home in Week 7. Saturday was their last great résumé-building opportunity -- they finish the season with Missouri, South Carolina and Vanderbilt and will almost certainly miss the SEC championship game now -- but the committee will likely give them a pretty cushy landing spot as they drop down from No. 1.
How cushy, though? Do the Vols only fall to third, ahead of the Michigan and TCU teams the committee clearly didn't love a week ago? Are they fourth, ahead of one of those two? The Formula incorporates SOR and still ranks them fifth, but the committee thought higher of them than The Formula last week.
If nothing else, the answer to that will tell us which division is more likely to get two zero- or one-loss teams into the CFP -- the SEC East or the Big Ten East. Michigan's poor nonconference schedule clearly dragged its ranking down last week and will likely continue to hinder the Wolverines even though the computer power ratings tell us they are clearly an awesome team (better than Tennessee, in fact). It's hard to think an 11-1 Michigan team, with even a super-tight loss to Ohio State in a couple of weeks, would get in over an 11-1 Tennessee at this point. Tuesday night's rankings could all but confirm that, though if a higher-ranked Ohio State ends up losing and becomes the one-loss team in question, that might change the calculus a bit.
I'm not going to lie: I like the results of The Formula more than I tend to like the weekly committee rankings. The current committee boasts 50-something years of head coaching experience, more than 125 years of athletic director experience, a couple of decades of NFL experience and all of committee member John Urschel's many math degrees. It does not lack for brain power and football know-how. But numbers see every game. The more directly they are involved in the process, the better the process becomes.