
Big East NET Rankings

I don't object, or care, whether gamblers want to use these tools, and they do appear to have some value for that, but final scores in blowouts capture a lot of things beyond the relative quality of the teams.

Your example is a stupid strawman. How about this example: Butler beats South Carolina State 70-55 because Matta gives a starter the night off and pulls the rest of the starters while up by 30 with 12 minutes left, to protect them against injury and give his bench some game time. A week later, South Carolina beats South Carolina State by 60 because the SC coach leaves his starters on the court until a minute is left. By your logic, South Carolina should gain ground on Butler in the NET because South Carolina beat the same team by four times as much, even though neither game was ever in doubt. If it were up to me, just playing more than 2 non-conference teams outside the top 300 would count as a home loss to a mid-major for the purposes of tournament selection and seeding. I would probably add a separate limit for teams outside the Top 200. There is no competitive reason for more than 5 games against teams that bad to have any positive factor at all in tournament selection or seeding.

Throwing these absurd games into an efficiency stew and thinking there is any way to make a game against South Carolina State remotely comparable to even an A10 or MWC opponent is ridiculous.
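For what it's worth, the standard guard against exactly this in rating systems is to cap the margin any single game can contribute, so a 60-point rout counts the same as a comfortable 15-point win. A rough Python sketch; the cap value here is arbitrary, and the exact handling of margin inside the NET is not public:

# Sketch of margin capping, a common guard against blowout inflation.
# The cap value is arbitrary; the NET's exact treatment of margin is not public.
MARGIN_CAP = 10

def capped_margin(points_for: int, points_against: int) -> int:
    margin = points_for - points_against
    return max(-MARGIN_CAP, min(MARGIN_CAP, margin))

print(capped_margin(70, 55))   # 10 -- Butler's 15-point win hits the cap
print(capped_margin(115, 55))  # 10 -- a 60-point rout counts exactly the same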

Also, there was nothing stopping the committee from evaluating a 2-point win over CCSU negatively for seeding in the RPI era. They just had enough intellectual honesty back then not to pretend there was a way to elevate unholy blowouts against terrible, D1-in-name-only programs to approach wins against majors or mid-majors. RPI hammered teams for playing terrible schedules, and those teams deserved it.
Bro, what is your bottom line point? That the Big East is good this year?
 
On the other hand, no one would seriously propose that, in the standings, the Sox should get more wins for their sweep than the Yankees get from theirs. So when folks object to putting much weight on the KenPom/Net type numbers that's all they're saying. Because schedules in college sports are far less balanced than in a professional league, it's fine to reward and punish teams for their strengths of schedule when you compare their records, but just because margins are useful in making predictions doesn't mean they should be used in ranking teams for anything meaningful.

Do we know how much a big blowout win is even weighted?

As I understand it, for the most part the last possessions in a 50-point blowout against a 300-plus team have close to the same importance as the last possessions in a game against a top-ten team. The expectations aren’t the same, but the importance to the computer rankings is pretty close.
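To make that concrete, here's a rough Python sketch of the raw input these systems start from, with hypothetical numbers. Every possession carries the same weight in the per-game number; the opponent adjustment happens later, which is why garbage time still counts unless a model explicitly dampens it:

# Rough sketch of the raw efficiency input a KenPom-style model starts from.
# Numbers are hypothetical. Real systems estimate possessions from the box
# score (roughly FGA - OREB + TO + 0.475 * FTA) and then iteratively adjust
# each game for opponent strength and venue.

def raw_efficiency(points: float, possessions: float) -> float:
    """Points per 100 possessions -- the basic unit of efficiency models."""
    return 100.0 * points / possessions

# A 50-point blowout of a sub-300 team...
print(raw_efficiency(points=105, possessions=70))  # 150.0
# ...and a close win over a top-ten team.
print(raw_efficiency(points=72, possessions=65))   # ~110.8

# Both games feed in possession by possession; the last possessions of the
# blowout move the season average just as much as the last possessions of
# the close game.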
 
So take that one step further. If the Yankees sweep a 3-game set against the Nationals at home, winning each game by 1 run, and the Red Sox sweep the Nationals at home winning each game by 10, would the margin be relevant in trying to predict who would win a Yankees-Red Sox series the following week? Of course it would be relevant. Not determinative, but certainly relevant.
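To put rough numbers on why the margin matters for prediction, here's a quick Python sketch using Bill James's classic Pythagorean expectation (illustrative scores only; any serious projection would also account for pitching matchups, park, and so on):

# Pythagorean expectation: a simple, public formula that converts run
# margin into an expected winning percentage. Scores below are made up.
def pythag(runs_scored: float, runs_allowed: float) -> float:
    return runs_scored**2 / (runs_scored**2 + runs_allowed**2)

# Both teams swept the Nationals, but by very different margins:
print(pythag(12, 9))   # Yankees, three 1-run wins  -> ~0.640
print(pythag(33, 3))   # Red Sox, three 10-run wins -> ~0.992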

The way I see it, how each team's starting pitching lines up would be most relevant.
 
Bro, what is your bottom line point? That the Big East is good this year?

Actually, you and I agree about 95% on the issue of the value of efficiency rankings, so you are just trolling with posts like this.
 
.-.
So take that one step further. If the Yankees sweep a 3-game set against the Nationals at home, winning each game by 1 run, and the Red Sox sweep the Nationals at home winning each game by 10, would the margin be relevant in trying to predict who would win a Yankees-Red Sox series the following week? Of course it would be relevant. Not determinative, but certainly relevant.

On the other hand, no one would seriously propose that, in the standings, the Sox should get more wins for their sweep than the Yankees get from theirs. So when folks object to putting much weight on the KenPom/Net type numbers that's all they're saying. Because schedules in college sports are far less balanced than in a professional league, it's fine to reward and punish teams for their strengths of schedule when you compare their records, but just because margins are useful in making predictions doesn't mean they should be used in ranking teams for anything meaningful.
And why is predicting the outcome of the next game the decisive factor about which source to use for selecting and seeding the tournament field anyway? Isn't tournament seeding about who actually won more games against better competition?
This is why the committee puts more weight on resume metrics than on predictive metrics when selecting teams for the field. It's why a team's own NET ranking matters very little for selection. The wins and resume matter more than the margin models on the bubble because they want to reward wins, losses, and achievement (it's also why they made the NET a hybrid instead of straight efficiency and more predictive). At the top, the first few seedlines are also pretty heavily resume-shaded.

However, when seeding other parts of the field, they rely on predictive metrics so that the difficulty, integrity, and value of the seeds in the bracket are better upheld. You don't want to punish better seeds with tougher matchups just because a lower seed lost a couple of coinflip games and ended up with a worse resume. But they don't tend to use straight predictive metrics for the middle seeds. It's more of a "your resume places you here... but your predictives are a LOT better, so we'll bump you up a couple seedlines." See Gonzaga last season.

Makers of predictive metrics, like Ken Pomeroy, made this exact case to the NCAA when they were gathering feedback from analytics people.
These representatives of the seven metrics used by the NCAA tournament selection committee all agreed the NCAA improved the selection process by eliminating the Ratings Percentage Index (RPI), developing the NCAA Evaluation Tool (or NET) and embracing a variety of ratings systems, beginning with the 2018-19 season.

But they also agreed on this point: only some of the seven metrics should actually be used to pick the 68 teams that make the NCAA tournament. Pomeroy said his rankings shouldn’t be used. Torvik said his rankings shouldn’t be used. Nobody said the NET should be used. Not even Pattani, who helped create the NET through the NCAA’s corporate partnership with Google. “It’s a little weird I’m on the teamsheet,” Pomeroy admitted in an interview with USA TODAY Sports. “But I think everyone (on the selection committee) understands they’re not going through my rating system and picking the best teams. They understand my rating system is more predictive, and you’re not picking teams based on how good they are in a predictive sense. You’re picking them based on their accomplishments.”

But fans nonetheless read and hear about most NCAA Tournament hopefuls in terms of their NET ranking around Selection Sunday, with the nuance of each ratings system often lost in the emotions of March Madness and whether a team is perceived to be ranked too high or too low. The seven metrics on NCAA teamsheets are technically divided into two categories. The NET, KenPom ratings, ESPN’s Basketball Power Index (BPI) and Torvik ratings are considered predictive rankings, or how good a team is based on its offensive and defensive efficiency, adjusted for opponent strength and location. ESPN’s strength of record, the Kevin Pauga Index (KPI) and wins above bubble (or WAB) are results-based rankings that judge how hard it was for a team to attain its resume.

Torvik and WAB are making their debut on NCAA Tournament team sheets, with particular interest being paid to the WAB because creator Seth Burn believes that if selection committee members “just use that, they can simplify it quite a lot,” he told USA TODAY Sports, “and it will guide them in who they should select.” Though the general principles used to formulate these metrics are made public, the exact formulas used for them are not. It’s viewed as proprietary information, even though “most of them are pretty similar,” Morris told USA TODAY Sports. “They’re using a lot of the same input data. ... We’ve converged to some degree.”
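Since the exact formulas are proprietary, the sketch below only follows the publicly described idea behind WAB: a team's actual wins minus the wins a generic bubble-quality team would have been expected to pile up against the identical schedule. The win probabilities are hypothetical:

# Sketch of the publicly described WAB idea (the real formula is proprietary).
# p_bubble[i] = probability a generic bubble-quality team beats opponent i
# at the same venue. These numbers are made up for illustration.

def wins_above_bubble(results, p_bubble):
    """results: 1 for a win, 0 for a loss, one entry per game."""
    actual = sum(results)
    expected = sum(p_bubble)      # the bubble team's expected win total
    return actual - expected

results  = [1, 1, 0, 1, 0]                  # 3-2 against this slate
p_bubble = [0.95, 0.60, 0.45, 0.30, 0.25]   # buy game down to brutal road game

print(wins_above_bubble(results, p_bubble))  # 3 - 2.55 = +0.45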

Everybody in the NCAA-produced round table said results-based metrics are what should be used to choose teams for the NCAA tournament.
 
This is why the committee puts more weight on resume metrics than on predictive metrics when selecting teams for the field. It's why a team's own NET ranking matters very little for selection. The wins and resume matter more than the margin models on the bubble because they want to reward wins, losses, and achievement (it's also why they made the NET a hybrid instead of straight efficiency and more predictive). At the top, the first few seedlines are also pretty heavily resume-shaded.

However, when seeding other parts of the field, they rely on predictive metrics so that the difficulty, integrity, and value of the seeds in the bracket are better upheld. You don't want to punish better seeds with tougher matchups just because a lower seed lost a couple of coinflip games and ended up with a worse resume. But they don't tend to use straight predictive metrics for the middle seeds. It's more of a "your resume places you here... but your predictives are a LOT better, so we'll bump you up a couple seedlines." See Gonzaga last season.

Makers of predictive metrics, like Ken Pomeroy, made this exact case to the NCAA when they were gathering feedback from analytics people.
Outstanding post.
 
This is why the committee puts more weight on resume metrics than on predictive metrics when selecting teams for the field. It's why a team's own NET ranking matters very little for selection. The wins and resume matter more than the margin models on the bubble because they want to reward wins, losses, and achievement (it's also why they made the NET a hybrid instead of straight efficiency and more predictive). At the top, the first few seedlines are also pretty heavily resume-shaded.

However, when seeding other parts of the field, they rely on predictive metrics so that the difficulty, integrity, and value of the seeds in the bracket are better upheld. You don't want to punish better seeds with tougher matchups just because a lower seed lost a couple of coinflip games and ended up with a worse resume. But they don't tend to use straight predictive metrics for the middle seeds. It's more of a "your resume places you here... but your predictives are a LOT better, so we'll bump you up a couple seedlines." See Gonzaga last season.

Makers of predictive metrics, like Ken Pomeroy, made this exact case to the NCAA when they were gathering feedback from analytics people.

Good article. Several of the owners of these metrics are saying that these metrics should not be used for tournament selection and seeding, which is exactly what I am saying.

Mr. Pomeroy's comments notwithstanding, there is no evidence that the rest of your post reflects the reality of how the tournament is selected. The NCAA periodically publishes the NET ratings, and it is used to determine Quad 1 through 4 wins, which makes it critically important.
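For reference, while the NET formula itself is a black box, the quad cutoffs built on top of it are published: a game's quad depends on the opponent's NET rank and the venue. A short Python sketch of those boundaries:

# The published NCAA quadrant cutoffs: upper bound of opponent NET rank
# for each quad, by game location. Anything past Quad 3 is Quad 4.
QUAD_CUTOFFS = {
    1: {"home": 30,  "neutral": 50,  "away": 75},
    2: {"home": 75,  "neutral": 100, "away": 135},
    3: {"home": 160, "neutral": 200, "away": 240},
}

def quad(opponent_net_rank: int, location: str) -> int:
    """Return the quad (1-4) for a game, given opponent NET rank and venue."""
    for q in (1, 2, 3):
        if opponent_net_rank <= QUAD_CUTOFFS[q][location]:
            return q
    return 4

print(quad(40, "away"))   # 1 -- a road game vs. NET #40 is Quad 1
print(quad(40, "home"))   # 2 -- the same opponent at home is only Quad 2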

If we are being honest, the NCAA replaced a simple if imperfect metric, the RPI, which measured teams' records against the strength of their schedules, with a black box that clearly favors the power conferences over the mid-majors and, within the power conferences, clearly favors some conferences over others.
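And for contrast, the RPI the NET replaced was fully transparent. The published formula, sketched below (simplified: it omits the 0.6/1.4 home/road weighting of wins that was added later):

# The published RPI formula: 25% winning pct, 50% opponents' winning pct,
# 25% opponents' opponents' winning pct. (Simplified: omits the home/road
# weighting of wins added in 2004.)
def rpi(wp: float, owp: float, oowp: float) -> float:
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# A gaudy record against a soft schedule vs. a worse record against a
# brutal one -- RPI rewards the schedule:
print(rpi(0.800, 0.480, 0.500))  # 0.565
print(rpi(0.640, 0.580, 0.540))  # 0.585 -- worse record, higher RPI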
 
Good article. Several of the owners of these metrics are saying that these metrics should not be used for tournament selection and seeding, which is exactly what I am saying.

Mr. Pomeroy's comments notwithstanding, there is no evidence that the rest of your post reflects the reality of how the tournament is selected. The NCAA periodically publishes the NET ratings, and it is used to determine Quad 1 through 4 wins, which makes it critically important.
You are misreading it in order to fit your agenda. They are universally saying they would not use predictive metrics to select the field. Only the predictive metrics, and only for selection. None of them say don't use predictive metrics for seeding, and all of them say use resume metrics for selection.

There is plenty of evidence that the committee relies more heavily on resume metrics in general and especially for selection:

WAB (and resume metrics in general) was the metric most aligned with the field last year.

And from a few years ago (before WAB was even on the teamsheet), resume metrics were already favored for selecting teams on the bubble.
 
You are misreading it in order to fit your agenda. They are universally saying they would not use predictive metrics to select the field. Only the predictive metrics, and only for selection. None of them say don't use predictive metrics for seeding, and all of them say use resume metrics for selection.

There is plenty of evidence that the committee relies more heavily on resume metrics in general and especially for selection:

WAB (and resume metrics in general) was the metric most aligned with the field last year.

And from a few years ago (before WAB was even on the teamsheet), resume metrics were already favored for selecting teams on the bubble.

You are digging deep to look for things to disagree with me about.

In summary, you are defending these metric systems as perfect when they are deeply flawed, and even their owners say they have serious limitations. I work with a lot of models, and I always apply a common-sense test to see if the model makes sense. In this case, the models are identifying this year's SEC as one of the best conferences in history, yet that SEC has a losing record against the other majors, and no one can explain why that obvious contradiction exists.
 
You are digging deep to look for things to disagree with me about.

In summary, you are defending these metric systems as perfect when they are deeply flawed, and even their owners say they have serious limitations.
In summary, you misrepresent my argument, because that's how you argue. Show me where I say they are perfect. My stance is that certain metrics are the best tools we have for the job, as long as you are using the right tool for the specific job you have in mind.

I think the distinctions I raised from your previous post are important and it was worth pointing out so you don't continue spreading misconceptions. Predictive metrics are good at what they do and better than the alternatives, but the things they measure (what team is likely better) should not be what the committee values when selecting the teams for the tournament. Bids should be earned with achievement. Wins and losses are what matter for achievement, and resume metrics do a much better job of measuring achievement.
 
In summary, you misrepresent my argument, because that's how you argue. Show me where I say they are perfect. My stance is that certain metrics are the best tools we have for the job, as long as you are using the right tool for the specific job you have in mind.

I think the distinctions I raised from your previous post are important and it was worth pointing out so you don't continue spreading misconceptions. Predictive metrics are good at what they do and better than the alternatives, but the things they measure (what team is likely better) should not be what the committee values when selecting the teams for the tournament. Bids should be earned with achievement. Wins and losses are what matter for achievement, and resume metrics do a much better job of measuring achievement.
Look no further than quality wins (usually against other tourney teams) as the main criterion. This is a criterion the BE is sorely lacking in. Measuring league strength by bad teams beating each other up isn’t worth much, if you ask me.

Give me our top 3 league wins outside of UConn?
 
You are digging deep to look for things to disagree with me about.

In summary, you are defending these metric systems as perfect when they are deeply flawed, and even their owners say they have serious limitations. I work with a lot of models, and I always apply a common-sense test to see if the model makes sense. In this case, the models are identifying this year's SEC as one of the best conferences in history, yet that SEC has a losing record against the other majors, and no one can explain why that obvious contradiction exists.
[Judge Judy eye-roll GIF]
 
Look no further than quality wins (usually against other tourney teams) as the main criterion. This is a criterion the BE is sorely lacking in. Measuring league strength by bad teams beating each other up isn’t worth much, if you ask me.

Give me our top 3 league wins outside of UConn?
Nova over Wisconsin in Milwaukee, Butler over Virginia in West Virginia, St. John's over Baylor in Vegas, and Seton Hall over NC State in Maui.
 
Look no further than quality wins (usually against other tourney teams) as the main criterion. This is a criterion the BE is sorely lacking in. Measuring league strength by bad teams beating each other up isn’t worth much, if you ask me.

Give me our top 3 league wins outside of UConn?
Probably Butler over Virginia in what was essentially a home game for Virginia, Nova over Wisconsin in Milwaukee, and Seton Hall over NC State.

Curious if you don't include the best team in every other power basketball conference when you look at this stuff? Did you exclude Duke from the ACC last season and constantly post about how much the ACC sucks all of last season while not including Duke?
 
Nova over Wisconsin in Milwaukee, Butler over Virginia in West Virginia, St. John's over Baylor in Vegas, and Seton Hall over NC State in Maui.
Typical misinformation from you again! He said to name the top THREE wins!
 
Probably Butler over Virginia in what was essentially a home game for Virginia, Nova over Wisconsin in Milwaukee, and Seton Hall over NC State.

Curious if you don't include the best team in every other power basketball conference when you look at this stuff? Did you exclude Duke from the ACC last season and constantly post about how much the ACC sucks all of last season while not including Duke?
Fair enough, take out the top team from every conference, then compare.

I have a strong suspicion that you won't like the results, at least not when specific to this season.
 
In summary, you misrepresent my argument, because that's how you argue. Show me where I say they are perfect. My stance is that certain metrics are the best tools we have for the job, as long as you are using the right tool for the specific job you have in mind.

I think the distinctions I raised from your previous post are important and it was worth pointing out so you don't continue spreading misconceptions. Predictive metrics are good at what they do and better than the alternatives, but the things they measure (what team is likely better) should not be what the committee values when selecting the teams for the tournament. Bids should be earned with achievement. Wins and losses are what matter for achievement, and resume metrics do a much better job of measuring achievement.

We are on page 4 of this thread, plus several pages of another thread, of me saying almost everything you say in the second paragraph, and you keep directing a stream of insults and attacks at me.
 
Look no further than quality wins (usually against other tourney teams) as the main criterion. This is a criterion the BE is sorely lacking in. Measuring league strength by bad teams beating each other up isn’t worth much, if you ask me.

Give me our top 3 league wins outside of UConn?

Cool story, bro.
 
We are on page 4 of this thread, plus several pages of another thread, of me saying almost everything you say in the second paragraph, and you keep directing a stream of insults and attacks at me.
We agree on plenty of things on this topic, which is why I made that long post mostly in support of what you've said. But not everything. The things that make up the difference between "almost" and "everything" are what we've been arguing about. But I'm pedantic and you're insufferable, so we're bound to clash.
 
We agree on plenty of things on this topic, which is why I made that long post mostly in support of what you've said. But not everything. The things that make up the difference between "almost" and "everything" are what we've been arguing about. But I'm pedantic and you're insufferable, so we're bound to clash.

St. John’s had the #24 offense in KenPom. He should teach the model to watch St. John’s play.
 
Lookie loo, my Duke example came true already, albeit not that drastically. A close road win against FSU dropped their KenPom ranking. RPI would have given them a bonus for playing in front of friends and family in Tallahassee.
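That behavior falls straight out of how any margin-based rating updates: the model compares the actual margin to the margin it expected given both ratings and the venue, so winning by less than expected drags a rating down even in a road win. A minimal Python sketch; the home-edge and learning-rate numbers are hypothetical, and real systems refit the whole season rather than updating game by game:

# Why a close road win can lower a margin-based rating. HOME_EDGE and
# LEARN_RATE are hypothetical; real systems fit these and iterate over
# the full season instead of updating one game at a time.
HOME_EDGE = 3.5      # points of home-court advantage
LEARN_RATE = 0.1

def update(rating: float, opp_rating: float, margin: float, is_home: bool) -> float:
    """Nudge the rating toward what the actual margin implies."""
    expected = rating - opp_rating + (HOME_EDGE if is_home else -HOME_EDGE)
    return rating + LEARN_RATE * (margin - expected)

# A Duke-like team (+25) wins at an FSU-like team (+5) by only 3.
# Expected road margin: 25 - 5 - 3.5 = 16.5; actual: 3 -> rating drops.
print(update(25.0, 5.0, margin=3, is_home=False))   # 23.65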
 
