Here’s how the Cougs ended up with a 4 seed in the NIT

Most projections had the Cougs out of the field, but they were wrong. This is why.

Washington State v UCLA Photo by Ethan Miller/Getty Images

As we approached the NIT selection show on Sunday night, the consensus among those who try to predict the tournament’s bracket was that Washington State was squarely on the bubble — opinions were split on whether the Cougars were in the field.

By the time the bracket was revealed, we learned that the Cougars were never on the NIT bubble at all. Not only were they comfortably in the field, they were seeded in its top half: high enough to host their first-round game tonight against Santa Clara.

Why the disparity?

The committee was opaque with its selection process after revealing the field (no big surprise there), but it doesn’t take a rocket scientist to figure out what went on here.

In the run-up to the selection, NIT bracketologists assumed the same selection criteria would be applied to the NIT that are applied to the NCAA tournament. Having not paid attention to NIT selections in years and years, I can only assume these folks did so because that's what the NIT committee has traditionally done. When you apply those criteria to WSU's resume, as we've talked about ad nauseam, the Cougars looked extremely iffy, thanks to their lack of quality wins.

As it turned out, the NIT committee was pretty obviously not using the same selection criteria as the NCAA tournament committee. When you look at these numbers, you see pretty clearly what went on. The table below is ordered by “NET,” the NCAA’s own ranking metric (I’ve also thrown in the kenpom rankings for fun). What do you notice about the correlation between NET and seeding?

2022 NIT Field

Seed Team NET KP
1 Oklahoma 39 30
2 Xavier 40 60
1 Texas A&M 43 43
1 SMU 45 54
2 North Texas 47 50
2 Wake Forest 48 37
2 BYU 54 51
3 VCU 56 67
3 Mississippi State 57 45
1 Dayton 58 57
3 Florida 59 56
4 Utah State 60 44
4 Washington State 61 55
3 Saint Louis 64 64
4 Vanderbilt 66 65
5 Santa Clara 67 68
8 Missouri State 68 63
4 Colorado 70 80
5 Belmont 71 82
7 TOWSON 72 77
8 TOLEDO 75 90
5 Oregon 76 79
6 Virginia 83 84
5 St. Bonaventure 85 88
6 IONA 90 94
6 PRINCETON 105 104
7 TEXAS STATE 129 137
7 LONG BEACH ST. 154 153
7 CLEVELAND ST. 182 203
8 NICHOLLS 192 207
8 ALCORN 265 268
Seeds 5-8 are implied, since the bracket only seeds 1-4 in each region. Teams in all caps were automatic bids into the field.

Hopefully you quickly noticed the same thing I did: That the NIT committee basically looked at the NET rankings and selected teams rated highly by that metric. Then they went one step further and also used it to seed the teams (with a couple of obvious regional adjustments), which is how the Cougs ended up with a 4 seed: They were among the top half of the field in NET.
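If the committee really did just run down the NET list, the procedure is simple enough to sketch. The snippet below is a hypothetical reconstruction, not the committee's actual process (which, as noted, also makes regional adjustments): sort the field by NET and hand out seed lines in blocks of four. The function name `seed_by_net` is my own invention.

```python
# Hypothetical sketch of seeding an NIT field straight off the NET
# rankings: every block of four consecutive teams shares a seed line
# (1 through 8). The real committee also shuffles teams between
# regions, so this is an approximation, not its actual procedure.
def seed_by_net(teams):
    """teams: list of (name, net_rank); returns {name: seed_line}."""
    ordered = sorted(teams, key=lambda t: t[1])  # best (lowest) NET first
    return {name: i // 4 + 1 for i, (name, _) in enumerate(ordered)}

# The top eight teams from the table above:
field = [("Oklahoma", 39), ("Xavier", 40), ("Texas A&M", 43), ("SMU", 45),
         ("North Texas", 47), ("Wake Forest", 48), ("BYU", 54), ("VCU", 56)]
seeds = seed_by_net(field)
# Pure NET seeding puts Oklahoma through SMU on the 1 line and North
# Texas through VCU on the 2 line -- close to the actual bracket, where
# only Xavier slid to a 2 seed (presumably a regional adjustment).
```

Run against the full 32-team table, blocks of four also put the Cougs, 13th in the field by NET, squarely on the 4 line.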

The lowest NET team to receive an at-large bid was St. Bonaventure (85); here’s a list of the teams ranked ahead of the Bonnies who were not selected for either the NCAA tournament or the NIT:

  • 69: St. John’s (17-15)
  • 73: Fresno State (19-13)
  • 74: Kansas State (14-17)
  • 78: West Virginia (16-17)
  • 80: Furman (22-12)
  • 82: Drake (24-10)
  • 84: Clemson (17-16)

(No. 51 Oklahoma State also was not selected, but they are serving a postseason ban.)

(Also, as Mark Sandritter said on our CougCenter Slack when I pointed all this out: “So basically Rutgers made the NCAA tournament because of stupid quad 1 wins and if they hadn’t they would have barely made the NIT.” It appears so!)

We can assume that Kansas State and West Virginia were never considered because of their under .500 records, and we probably can also assume that similar logic was applied to St. John’s and Clemson’s barely-above-.500 records. That left basically Fresno State, Furman, and Drake all in the ballpark for the final few spots that ostensibly went to Virginia, St. Bonaventure, and Oregon. I can’t speak to why the committee chose those teams; perhaps as they got to the bottom, they did decide to use quality wins as some sort of tiebreaker. I don’t really know.

I do, however, think that using NET in this way is pretty remarkable — and also an extremely welcome development.

Craig had a great explainer on NET earlier this year, and you can go back and read that if you want a deep dive. While the recipe for the secret sauce is a secret, we do know the ingredients: NET primarily relies on scoring margin (as kenpom famously does, because it has proven to be predictive), but it also adjusts for actual game results, giving credit for wins. Most teams end up more or less in the same ballpark in both NET and kenpom, since a team’s record tends to follow its scoring margin. But not always! A great example is Providence, which won an unusual number of close games this season. Kenpom (which is based solely on margin) ranks the Friars 49th, while NET, which includes bumps for all those victories, ranks them 32nd.
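Since NET's actual formula is unpublished, here's a purely hypothetical toy model of that blend: a rating built from average scoring margin plus a bonus tied to winning percentage. The function name `toy_rating` and the `win_bonus` weight are my inventions, but the sketch shows how a Providence-style team that wins its close games leapfrogs a bigger-margin team once wins are credited.

```python
# Toy model only: NET's real formula is secret. This hypothetical
# rating blends average scoring margin with a bonus for winning
# percentage, mimicking how NET credits wins on top of margin.
def toy_rating(margins, win_bonus=6.0):
    """margins: per-game scoring margins (positive = a win)."""
    avg_margin = sum(margins) / len(margins)
    win_pct = sum(1 for m in margins if m > 0) / len(margins)
    return avg_margin + win_bonus * win_pct

grinders = [2, 1, 3, -10]    # 3-1 record, average margin -1.0
blowouts = [15, -2, -3, -4]  # 1-3 record, average margin +1.5

# A margin-only metric (win_bonus=0) prefers the blowout team...
assert toy_rating(blowouts, win_bonus=0) > toy_rating(grinders, win_bonus=0)
# ...but crediting wins flips the order: 3.5 vs. 3.0.
assert toy_rating(grinders) > toy_rating(blowouts)
```

Turning `win_bonus` up or down is exactly the record-versus-margin dial the rest of this post argues about.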

That’s why it makes a whole heck of a lot of sense to use NET the way the NIT committee did: It’s a metric that measures both what you’ve done and how you’ve done it. What’s not to like about that, particularly when there’s no possible way for these tournament committees to watch enough of all of these teams to be truly informed? Of the top 85 teams in the NCAA’s NET metric, all but seven eligible teams either qualified for, or were selected to, the 100 spots available in the NCAA’s top two tournaments. That seems like a pretty great outcome.

Instead of using the NET in this fashion, though, the NCAA tournament committee chooses to continue to use it in its own convoluted way: As merely a “sorting tool” (their term) that gives teams credit for beating other teams rated highly by NET but does not give a team any credit for being rated highly by NET themselves. It’s a vestige of the (deeply flawed) logic they used to use with the (deeply flawed) RPI, and I can only hope that constructing the NIT in this way is a harbinger of things to come.

It probably isn’t, but I can dream!

Not everyone is thrilled with this development, though. John Templon — of NIT Bracketology — thought South Carolina should have been included in the field, using that as a jumping-off point for this:

So what held South Carolina out? It appears to be its low “Predictive” metrics. South Carolina had by far the lowest KenPom and Sagarin of any team being considered. Theoretically these metrics explain how well a team will do in the future. I’m fine with using these as a seeding tool, but using them for selection seems to set a dangerous precedent. It encourages blowouts and undervalues a team’s ultimate won/loss record. If the low predictive metrics are why South Carolina wasn’t selected despite otherwise strong credentials that would be an important piece of information to understand moving forward. (It’s worth noting though that selection and seeding seemed to follow general past principles except for this outlier case.)

From my perspective, the “encourages blowouts” argument rings pretty hollow because its potential effect is overstated: Teams are generally trying to score and prevent points throughout, and when an opponent is so clearly overmatched that a team actually does empty the bench, it’s usually only for the final couple of minutes. That logic would apply to only a small number of games.

Additionally, whether this “undervalues a team’s ultimate won/loss record” is in the eye of the beholder. Personally, I think that selections and seedings have traditionally overvalued won/loss record. Because we can’t just rank teams by wins as professional leagues do, we use a system that is trying to subjectively assess a team’s “resume,” which has always been some combination of record and scoring margin, even if only subconsciously. I understand that reasonable minds can differ on the importance of record, but I think a metric like NET trying to strike a balance between record and scoring margin is the way to go when trying to evaluate teams for postseason tournaments.

Besides: If you think NET undervalues a team’s record, it seems like weighting wins even a bit more heavily — or discounting for losses — would be a fairly simple tweak that would assuage those concerns.

What do you think? Are you down with the NCAA tournament moving to an approach like this?