
Inter-conference records among the majors

This is what I said up-thread about the SEC:

The SEC has 12 teams in the top 52 now, and the preseason priors have mostly been phased out. It has 11 in the top 53 of the NET (Texas rates a lot worse in the NET). As discussed, KenPom rates conferences based on the expected strength of the hypothetical team that would go .500 in conference play over a round-robin schedule. The algorithm spits out roughly the 35th-ranked team for the SEC, low 40s for the Big 12/Big Ten, low 50s for the ACC, and mid-50s for the Big East.
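If it helps, here's a toy version of that "team needed to go .500" idea. This is my own sketch, not KenPom's actual method (he works off adjusted efficiencies and a Pythagorean win probability; the logistic scale below is just an assumption): solve for the rating at which a hypothetical team's average win probability across a round robin against the league is exactly .500.

```python
def win_prob(rating_a, rating_b, scale=11.0):
    """Logistic win probability from a rating gap (the scale is an assumption, not KenPom's)."""
    return 1.0 / (1.0 + 10 ** (-(rating_a - rating_b) / scale))

def conference_rating(team_ratings, lo=-40.0, hi=40.0, iters=50):
    """Bisect for the rating whose average round-robin win probability vs the league is .500."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        avg = sum(win_prob(mid, r) for r in team_ratings) / len(team_ratings)
        if avg < 0.5:
            lo = mid   # hypothetical team is too weak, raise it
        else:
            hi = mid   # too strong, lower it
    return (lo + hi) / 2.0

# Two toy leagues with the same average rating (13.0). The deeper league requires a
# stronger team to go .500 against it than the top-heavy one does, which is the kind
# of depth effect pulling a conference up in this method.
deep_league = [20, 18, 16, 14, 12, 10, 8, 6]
top_heavy   = [34, 30, 12, 8, 6, 6, 4, 4]
print(round(conference_rating(deep_league), 1), round(conference_rating(top_heavy), 1))
```

The point of the toy numbers: a straight average of ratings treats those two leagues the same, while the .500 method rewards the league without a weak tail.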

The Big 12 has 9 teams in the top 53, but 7 of its 16 schools are outside the top 65. The SEC has 3 outside the top 65, and its worst team (South Carolina at 90) is 38 spots better than the Big 12's worst (Utah at 128). If you did just a straight average of rankings (and not the win50 method), the Big 12 is 47.6 and the SEC is 39.7.

The SEC is #1 by a large margin in KenPom because the algorithm thinks they have a lot of really good teams and extremely good depth throughout the conference.

Torvik lists cumulative conference WAB on his site, so we can see what a good resume metric (something better than RPI) says about the conferences. He has the Big 12 #1 at +1.0 WAB, the SEC and Big Ten tied at +0.5, the ACC at basically net zero, the Big East at -0.27, and then a big gap to the rest of the conferences.
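Quick refresher on what WAB measures, since it comes up a lot here: a team's actual wins minus the wins a bubble-level team would be expected to get against that same schedule, summed across the league for the conference number. A made-up toy aggregation (not Torvik's data or code; the bubble rating, scale, and numbers are all assumptions):

```python
def expected_bubble_wins(opponent_ratings, bubble_rating=0.0, scale=11.0):
    """Expected wins for a hypothetical bubble team against a list of opponent ratings."""
    return sum(1.0 / (1.0 + 10 ** (-(bubble_rating - opp) / scale)) for opp in opponent_ratings)

def team_wab(actual_wins, opponent_ratings):
    """Wins above bubble: actual wins minus a bubble team's expected wins vs the same schedule."""
    return actual_wins - expected_bubble_wins(opponent_ratings)

# Made-up non-conference results for a three-team league (ratings are invented).
league = {
    "Team A": {"wins": 9,  "opponents": [5, -3, 12, 0, 7, -10, 2, 4, -6, 1, 3, -2]},
    "Team B": {"wins": 7,  "opponents": [-5, -8, 2, -1, 6, -12, 0, -4, 3, -7, 1, -9]},
    "Team C": {"wins": 10, "opponents": [8, 4, -2, 10, 1, -5, 6, 0, 3, -1, 7, 2]},
}

conference_wab = sum(team_wab(t["wins"], t["opponents"]) for t in league.values())
print(round(conference_wab, 2))
```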

So to answer your question: Why does KenPom have the SEC #1 when their record doesn't indicate they should be #1? Because KenPom isn't measuring resume quality; it's projecting current/future strength. The Big 12 has the best resume, and that lines up with your high-major records.

In other words, the SEC runs up the score on bad teams.
 
Using Torvik to filter performance, we can take a look.

This season, 5 of their teams have been better against good teams (Q1+Q2 games), 4 have been about the same against both types, and 7 have been significantly better against worse teams (Q3+Q4 games).


Kentucky, A&M, Missouri, and Ole Miss are driving most of the effect. Vanderbilt has been excellent in their toughest games.
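If anyone wants to replicate the split, it's just bucketing each game by opponent quadrant and comparing average efficiency margin in Q1+Q2 games versus Q3+Q4 games. Rough sketch with made-up game logs (not Torvik's code, and real quadrant assignments also depend on venue):

```python
# Each game: (opponent_quadrant, efficiency_margin per 100 possessions). Numbers are made up.
games = {
    "Team X": [(1, -4), (2, 3), (1, -7), (3, 22), (4, 30), (2, -1), (3, 25)],
    "Team Y": [(1, 6), (2, 8), (3, 10), (4, 12), (1, 2), (2, 5)],
}

def split_margin(game_log):
    """Average efficiency margin vs good opponents (Q1/Q2) and vs weak ones (Q3/Q4)."""
    good = [m for q, m in game_log if q in (1, 2)]
    weak = [m for q, m in game_log if q in (3, 4)]
    average = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return average(good), average(weak)

for team, game_log in games.items():
    vs_good, vs_weak = split_margin(game_log)
    print(f"{team}: {vs_good:+.1f} vs Q1/Q2, {vs_weak:+.1f} vs Q3/Q4, gap {vs_weak - vs_good:+.1f}")
```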
 
LSU is #42 in KenPom right now after losing to South Carolina at home, even though DePaul is its second-best win. South Carolina had no wins over teams in the top 188 coming into the night. South Carolina is #69.

There is an SEC bias in KenPom. I don't need to hear the other side of the argument anymore.
 
Texas has a nice win over KenPom #33 NC State. Its next best win is #257 Southern. Texas is #53 in KenPom. What does an SEC team have to do to get knocked out of the top 80?

According to KenPom, the 2025-2026 SEC is not just the best conference this year, it is one of the best conferences in the history of college basketball. That is utterly ridiculous.
 
Ironically, Texas' game against us (an 8-point loss on the road) is one of the data points holding them up the most. But you keep harping on wins and losses.

The SEC's KenPom rating isn't even that far out of line with past #1s, so I also don't know what you're looking at there.
 
Why shouldn't I harp about wins and losses? Is there some other objective of a basketball game? Do you think it is a fashion show or a spelling bee?

The SEC's current +18.94 is the 6th highest conference rating in the history of KenPom, going back to the 1996-97 season. Look at the teams' schedules. The league is not that good compared to other leagues, and the results prove it. RPI says it is the 4th best league (it pulled a little ahead of the Big East), and looking through the schedules, it feels like the 4th or 5th best league. So why do the efficiency ratings so disproportionately point in the other direction?
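And remember what RPI actually is: a pure results formula, 25% your winning percentage, 50% your opponents' winning percentage, and 25% your opponents' opponents' winning percentage. A bare-bones version (this sketch ignores the home/road weighting the NCAA added, and the inputs are made up):

```python
def rpi(wp, owp, oowp):
    """Basic RPI: 0.25*WP + 0.50*OWP + 0.25*OOWP (ignores the NCAA's home/road weighting)."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Made-up inputs: a gaudy record against weak schedules doesn't move RPI much,
# because 75% of the formula is schedule strength.
print(round(rpi(wp=0.85, owp=0.45, oowp=0.48), 4))  # 0.5575
print(round(rpi(wp=0.65, owp=0.60, oowp=0.55), 4))  # 0.6000
```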

 
You can harp on wins and losses, just don't do it in relation to a predictive metric that doesn't factor in wins and losses at all.

"This team's resume is bad, why doesn't this metric that doesn't care about wins and losses reflect that?"

We've explained it to you 30 times, but you're intentionally obtuse about it. Adjusted scoring margin is more predictive of future wins and losses than resume wins and losses are. That's a fact. It's very well established over decades and thousands of games. That includes the games against bad teams, because it's harder, and takes a better team, to win by 50 than by 40, and by 40 than by 30, and so on. Metrics add techniques and adjustments to refine the raw scoring margin and make it even more predictive. So when you say "the results prove it," you're suggesting that we trust less information that is less predictive, only because it is more relevant to you. But the sample sizes are way too small in those games that "prove it."
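To make "adjusted scoring margin" concrete, the core move is to solve for team ratings such that each game's margin is best explained by the rating gap plus home court, so blowing out a bad team only moves you to the extent the margin beats what the gap already predicted. A bare-bones least-squares sketch (my illustration, not KenPom's actual method, which works per-possession on offensive and defensive efficiency; the games are invented):

```python
import numpy as np

# Made-up games: (home_team, away_team, home_margin).
games = [("A", "B", 12), ("B", "C", 25), ("A", "C", 30), ("C", "A", -20), ("B", "A", -5)]
teams = sorted({t for g in games for t in g[:2]})
idx = {t: i for i, t in enumerate(teams)}

# Each row encodes: margin ~= rating[home] - rating[away] + home_court.
X = np.zeros((len(games), len(teams) + 1))
y = np.zeros(len(games))
for row, (home, away, margin) in enumerate(games):
    X[row, idx[home]] = 1.0
    X[row, idx[away]] = -1.0
    X[row, -1] = 1.0            # shared home-court advantage term
    y[row] = margin

# Ratings are only identified up to a constant, so take the minimum-norm least-squares
# solution and center the ratings for readability.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ratings = {t: round(float(r), 1) for t, r in zip(teams, coef[:-1] - coef[:-1].mean())}
print(ratings, "home court:", round(float(coef[-1]), 1))
```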

The win/loss performance has been that of the 4th best league, as pointed out by RPI and WAB and other resume metrics. The performance as a whole indicates it's better than that, and the models that are the best at predicting how good the teams actually are suggest that over a larger sample it will likely move up and improve in those resume models.

Your point as I see it is that the collective performance hasn't justified the KenPom ratings. Whereas what the best data we have is actually telling you is that the win/loss performance hasn't lived up to how good the teams actually are. Resume and predictive metrics haven't aligned because the sample sizes are too small.
 
As I have said more than 30 times, the only way scoring margin separates from wins and losses at a conference level is if teams are systematically running up the score on bad teams.

That does not reflect superior teams, it reflects coaching decisions to game a flawed metric.

And winning by 50 is not harder than winning by 30 when you are playing a grossly inferior team. There are teams on the UConn women's schedule that Geno could beat by 100 points but only beats by 50. Would UConn be a better team if it ran up the score?
 
Your first sentence is just wrong. Even at a conference level, the sample size for non-conference games alone is not enough for the resume and predictive metrics to converge, because for a resume metric each game is only one data point. Something like Florida losing 3 games against top-10 teams by a combined 11 points makes a huge impact even on conference-wide WAB, and it comes down to a handful of shots going the other way. If that happens to even a few teams in a conference, your results metrics look drastically different from your predictive ones. Resume metrics for a conference have something like 200 data points; predictive metrics have something like 15,000 (the data points are not valued 1:1, but the point stands).
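Here's a quick made-up simulation of that sample-size point: a team that is genuinely 4 points per game better than another still ends up with an equal-or-worse record surprisingly often over a 15-game slate, while its average margin flips much less often. The 11-point game-to-game spread and both "true" margins are assumptions; nothing here models a real team.

```python
import random

random.seed(1)
GAME_SD, N_GAMES, TRIALS = 11.0, 15, 5000
TRUE_A, TRUE_B = 6.0, 2.0   # Team A is genuinely 4 points per game better than Team B

def season(true_margin):
    """One simulated 15-game slate: returns (wins, average margin)."""
    margins = [random.gauss(true_margin, GAME_SD) for _ in range(N_GAMES)]
    return sum(m > 0 for m in margins), sum(margins) / N_GAMES

record_flips = margin_flips = 0
for _ in range(TRIALS):
    wins_a, avg_a = season(TRUE_A)
    wins_b, avg_b = season(TRUE_B)
    record_flips += wins_b >= wins_a   # the worse team's record looks equal or better
    margin_flips += avg_b >= avg_a     # the worse team's average margin looks equal or better

print("record has B >= A:", round(record_flips / TRIALS, 3))
print("margin has B >= A:", round(margin_flips / TRIALS, 3))
```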

Yes, it is harder to win by 50 than by 30, even against a grossly inferior team. The data backs this up. Your feeling that it isn't true doesn't make it false. Sorry. There will be outlier games where a team really steps off the gas, but including them is valuable on the whole, and they improve the predictiveness of the models. On the men's side, there are only a handful of games that fall into that super-extreme mismatch category, and the models have methods to account for them.
 
You are wrong about 50 vs 30.

Fun fact: SEC football went 2-8 in bowls against non-SEC schools, and one of those wins was over Tulane. Are you sticking with your position that there is no chance the SEC is overrated and simply gaming the ratings?
 
It's hilarious that you believe teams can basically just set their margin of victory at will against weaker teams.
 
