I understand the idea - but a team that made one semi-final, 5 elite eights, 2 second rounds, and missed the tournament twice would have 30 points, and I think they are not as good a team as one that made the Sweet Sixteen 10 times in ten years. One team was good half the time and bad the rest, while the other was one of the top 9-16 teams in the country for all 10 years. If you don't subtract points for bad years, some extremely inconsistent teams may rise toward the top of your rankings.
But I do "subtract" points for poor performance. If the Sweet Sixteen is your norming point, I subtract 3 points from that position for every team that doesn't make the tourney, 2 points for every team that exits in the first round, and 1 point for every team that exits in the second round. If the Sweet Sixteen is the borderline that separates elite status from the non-elite, then a team that exits in the first round every year for a decade ends up 20 points behind one that makes the Sweet Sixteen every year.
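To make the arithmetic above concrete, here's a quick sketch (the names are mine, not part of the original scoring system) of the deficit relative to a Sweet Sixteen baseline:

```python
# Sketch of the penalty-relative-to-Sweet-Sixteen idea.
# Games-played values: miss = 0, first-round exit = 1, second-round exit = 2,
# Sweet Sixteen exit = 3 (the norming point).
SWEET_SIXTEEN = 3

def deficit(games_played):
    """Points lost relative to a Sweet Sixteen finish in a single year."""
    return SWEET_SIXTEEN - games_played

# A miss, a first-round exit, and a second-round exit cost 3, 2, and 1 points.
print([deficit(g) for g in (0, 1, 2)])     # [3, 2, 1]

# Ten straight first-round exits vs. ten straight Sweet Sixteens:
print(sum(deficit(1) for _ in range(10)))  # 20
```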
You bring up an interesting point in your example, that of consistency. Both teams as you cite them played in 30 games. The "Games Played" measure does not assign any level of worth to a performance with greater variability versus one with no variability. If the tortoise and hare had finished in a dead heat, would it have made any difference that the hare had sprinted and rested, sprinted and rested, compared to the slow, steady tortoise? If you're a bookie, no. If you're a finance manager looking at profits, yes. A musician who plays the same note for three minutes probably won't find much of an audience.
Back to the case at hand. Both teams find equal value in my measure of games played. However, they can be differentiated based on the standard deviations of their respective performances. The first team has games played values of 6, 4, 4, 4, 4, 4, 2, 2, 0, 0 while the second team has values of 3, 3, 3, 3, 3, 3, 3, 3, 3, 3. The first team's performances have a standard deviation of 1.94 while the second team's is 0.00.
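The numbers above check out with a couple of lines of Python; note that 1.94 is the sample standard deviation (n-1 denominator), which is what `statistics.stdev` computes:

```python
import statistics

# Ten-year games-played values from the example above.
team_a = [6, 4, 4, 4, 4, 4, 2, 2, 0, 0]  # one semi-final, 5 elite eights, 2 second rounds, 2 misses
team_b = [3] * 10                         # ten straight Sweet Sixteens

print(sum(team_a), sum(team_b))            # 30 30 -- identical games-played totals
print(round(statistics.stdev(team_a), 2))  # 1.94
print(statistics.stdev(team_b))            # 0.0
```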
Personally, given the complexities of real world situations, I don't believe measuring consistency adds enough light to the situation to justify its inclusion (and therefore the added complexities). For example, in my list, Oklahoma and Baylor tied at 42 games played over the interval 2000-2013.
Oklahoma made the field every year, were eliminated in the first round twice, eliminated in the second round 3 times, eliminated in the Sweet Sixteen 6 times, eliminated in the Final Four twice and were national runner-up once.
Baylor missed the tournament entirely twice, were eliminated in the first round once, eliminated in the second round 3 times, eliminated in the Sweet Sixteen 4 times, eliminated in the Elite Eight once, eliminated in the Final Four once, and were national champs twice.
I don't know if you have a preference for which team performed better over the interval, but if you want to add in consistency, Oklahoma had a standard deviation of 1.47 while Baylor's was 2.18.
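The 1.47 and 2.18 figures can be reproduced if each season is assigned a games-played value per finish. The mapping below is my reconstruction (the post never spells it out): miss = 0, first-round exit = 1, second round = 2, Sweet Sixteen = 3, Elite Eight = 4, Final Four = 5, runner-up = 6, and champion = 7, with the champion's extra point being what makes both totals tie at 42:

```python
import statistics

# Assumed per-finish values (my reconstruction, not stated in the post):
# miss=0, R1 exit=1, R2 exit=2, Sweet 16=3, Elite 8=4, Final Four=5,
# runner-up=6, champion=7.
oklahoma = [1]*2 + [2]*3 + [3]*6 + [5]*2 + [6]*1          # 14 seasons, 2000-2013
baylor   = [0]*2 + [1]*1 + [2]*3 + [3]*4 + [4]*1 + [5]*1 + [7]*2

print(sum(oklahoma), sum(baylor))            # 42 42 -- the tie from the post
print(round(statistics.stdev(oklahoma), 2))  # 1.47
print(round(statistics.stdev(baylor), 2))    # 2.18
```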