Not how they do it now.
There are a number of things wrong with the way the NCAA selection committee currently decides which 68 lucky teams participate in March Madness, but the most glaring is its lack of transparency. Of course, there are selfish reasons for the opacity: the NCAA almost certainly benefits from all the hype generated by “bracketology” forecasts and the revenue made from Selection Sunday. And it is now tradition for teams and fans alike to eagerly watch the selection show, praying for a high seed for their favorite and critiquing the committee’s decisions like the wannabe pundits they are.
But none of this is in the spirit of the game. All hell would break loose if professional sports leagues ditched their objective criteria for clinching a spot in the postseason. And with billions of dollars at stake, it is possible, even likely, that some bribery or rigging would occur. So why do we let the NCAA operate behind closed doors?
The reason the NCAA is subjective about these decisions is that, historically at least, college sports standings have been hard to measure objectively. With 300-plus teams, about 30 games in a season, and wildly disparate conference strengths, any traditional metric, like win percentage, would be nonsensical, unless you’d prefer seeing Gonzaga and a handful of mid-majors occupy the top three seeds each year.
But it does seem that increasing the objectivity of tournament selections is of interest to the NCAA. Beginning with RPI, the NCAA has used various metrics to inform its decisions. And perhaps it would have relied on RPI entirely if the metric’s inventors had done any advanced math and created an index that was not arbitrary garbage. But last year the NCAA announced the NCAA Evaluation Tool (NET) as RPI’s replacement, taking a strong step in the right direction despite continuing to operate in secrecy.
In a perfect world two things would happen: (1) the NCAA would rely entirely upon an objective metric for selections and seeding, without any personal deliberation, and (2) this metric would be similar to the NET, but modified in a couple of ways to accommodate the true spirit of the game.
As far as ranking models go, I believe the NET is a good one (fantastic by NCAA standards). But it is only good as an objective predictive tool, and it has no place in the selection process.
The problem with the NET is the same as the problem with BPI: each incorporates margin of victory and game location into its model. It seems that the NCAA was trying to do its own version of what KenPom and Sagarin have famously done for years, and for this reason created a model that used the same variables. But advanced metrics like KenPom, Sagarin, and now the NET should not be used as evaluative tools, because that is not what they are; they are predictive tools. And this difference is crucial. Whereas for predictive models the question of which variables matter is an objective one (what makes the best predictions?), evaluative models require a more philosophical approach.
So the question is, what variables should matter in evaluation? Or put a different way, for what variables do we want to reward teams? And to this question I answer that two variables should matter: a team’s wins and losses, and the opponents against whom those results came.
Should it matter how much a team wins or loses by? Absolutely not. If Oregon escapes another upset by Payton Pritchard’s overtime heroics, should Dana Altman be scolding his team in the locker room for not winning by more? Should the two points that gave his team the lead at the buzzer be just as meaningful as the two points scored with 13 minutes left in the first half? That’s ridiculous. If a team escapes an upset, let the full relief of the victory sweep over them.
Because what kind of a game measures the outcome of its competitions only by margin of victory, doing away with the win/loss record altogether? Basketball is a game of victory and defeat. The binary outcome is what makes sports so appealing to the viewer and the participant. The result of the game does not exist on a continuum; it comes in the form of “I am better than you,” no matter what the score is.
And on a more practical level, we should not incentivize teams to run up the score, or to chase an easy bucket as the clock expires in a dead game while the rest of the players are walking to the bench. Players should not be playing for points. Further, if Gonzaga wants to bench one of their star players because they are playing their umpteenth worthless conference game in a row, they should be able to do that. They should not be forced to prove, over and over again, that not only are they better than their opponent, but that they are a 40-point blowout better than their opponent. A win alone should suffice.
I would add that the location of a game should not matter in evaluating a team’s performance either, because this too is not in the spirit of the game. Perhaps I just love the binary, but I think it sullies the game if every win is qualified by a “home” or “away.” The selection committee should not look upon an ACC loss and decide that it does not matter much because it came at Cameron Indoor, where it is historically hard to play. Are we then punishing fans for coming to the game and succeeding in their rowdiness? That seems antithetical to what basketball is about.
I believe that my own method can be modified to produce a reliable ranking system for NCAA tournament decisions. My method, briefly, builds a matrix of Division I competitions and uses the eigenvector corresponding to the greatest eigenvalue as its rankings. This may sound arbitrary, but I assure you it’s not; in my opinion it is the most objective algorithm for ranking teams, which is why I use it. Whereas for predictive purposes I assign each game a value that reflects scoring margin and location, here I propose a matrix where a 0 is assigned for a loss and a 1 for a win. James Keener, the author of the paper that first described this method for ranking college football teams, in fact noted that a prominent college football coach had suggested a version of this method be used to decide the national champion.
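To make the construction concrete, here is a minimal sketch of the win-matrix eigenvector ranking. The teams and results are hypothetical, and I add a small positive smoothing constant to every off-diagonal entry so the matrix is irreducible and the power iteration is guaranteed to converge; that smoothing is an assumption beyond the plain 0/1 matrix described above (real schedules can otherwise leave the matrix disconnected).

```python
# Eigenvector ranking sketch: build a win matrix and extract the
# eigenvector of the largest eigenvalue via power iteration.
# All teams and game results below are hypothetical.

TEAMS = ["Ducks", "Zags", "Blue Devils", "Tigers"]
# (winner, loser) pairs; note the upset in the last game.
GAMES = [
    ("Ducks", "Zags"),
    ("Ducks", "Blue Devils"),
    ("Zags", "Blue Devils"),
    ("Zags", "Tigers"),
    ("Blue Devils", "Tigers"),
    ("Tigers", "Ducks"),
]

n = len(TEAMS)
idx = {t: i for i, t in enumerate(TEAMS)}

# A[i][j] counts wins by team i over team j: 1 per win, 0 otherwise.
A = [[0.0] * n for _ in range(n)]
for winner, loser in GAMES:
    A[idx[winner]][idx[loser]] += 1.0

# Smoothing (an addition beyond the bare 0/1 matrix): a tiny positive
# value in every off-diagonal cell makes the matrix irreducible, so the
# Perron-Frobenius theorem guarantees a unique dominant eigenvector.
EPS = 1e-3
for i in range(n):
    for j in range(n):
        if i != j:
            A[i][j] += EPS

def leading_eigenvector(M, iters=500):
    """Power iteration: repeatedly apply M and renormalize; the vector
    converges to the eigenvector of the largest eigenvalue."""
    m = len(M)
    v = [1.0 / m] * m
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(m)) for i in range(m)]
        s = sum(w)
        v = [x / s for x in w]
    return v

ranking = sorted(zip(TEAMS, leading_eigenvector(A)), key=lambda tr: -tr[1])
for team, rating in ranking:
    print(f"{team}: {rating:.3f}")
```

Note how the method rewards quality of wins rather than quantity alone: the Tigers’ single win came against the top-rated Ducks, which in this toy schedule lifts them above the Blue Devils, whose single win came against the Tigers themselves.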
Another benefit of this method is that a team can see exactly how its ranking will adjust after a given series of results; there is no guesswork involved. A team on the cusp of making the tournament can set clear goals to get in rather than wait for a couple of math-illiterate television personalities to go on CBS and tell them they didn’t make the cut.
Maybe I will start publishing what I believe the NCAA should use to decide tournament selections and seeding. For now, it is just an idea.