Ok, so it’s no secret that we’ve been incredibly successful over the past 5, 10, 20 years. So much so that the term “Blue Blood” has been raised from time to time. I have no interest in that term as it is vague and the definition changes depending on who you ask. I therefore do not wish to debate whether we are in that group (or what we have to do to get there). That being said, I am interested in assessing our performance over the years and how it stacks up relative to the other elite programs.
This topic is certainly not new. Many others have raised the question and offered their own (sometimes insightful, other times not so much) thoughts about where we rank versus Dook, UNC, etc. It seems to me these conversations always get sidetracked for a few reasons. First, and most importantly, there is no “right” answer because the topic is subjective by nature. For example, it’s not clear which metrics should define success. How much weight should national championships carry? What about Final Fours, conference titles, wins, etc.? The timeframe is also a subjective input into any analysis like this. Should the recent past count more, or does historical success count just as much (the UCLA/Indiana issue)?
I don’t have definitive answers to any of the above (nobody does), but there is another common roadblock to productive discussions on this topic: different people work from different fact bases, which makes things confusing for everyone involved. Well, I had some downtime over the weekend, and I thought it would be interesting to build a tool to shed some light on this. The attached file (I hope you can attach Excel files here) lets you set your own preferences and draw your own conclusions about how UConn stacks up against the competition.
So here’s how it works. I chose a peer set of 12 programs to compare. Besides UConn, I included the traditional Blue Bloods (OK, last time I use that word): UNC, Dook, Kentucky, UCLA, Kansas, and Indiana. I also arbitrarily included five others from the next tier: Michigan State, Arizona, Syracuse, Texas, and Louisville. I may have missed someone, but that’s a good start. Note that I tried to include Florida but couldn’t find win totals prior to 1996.
As for metrics indicating success, I chose (again arbitrarily) eight of them. I don’t think they are all equally important (more on this later), but I wanted a flexible tool that accommodates differing opinions. The metrics are:
- # of national titles: the ultimate measure of success
- # of Final Fours: another commonly used indicator of elite seasons
- # of Sweet 16s: rewards teams that are consistently “solid,” if not spectacular, in the tournament
- # of NCAA appearances: I don’t like this one for elite programs; it’s more useful for lesser programs
- # of tourney wins: an aggregate rather than seasonal measure
- # of conference tourney titles: needed something to balance out the reliance on NCAA tournament success, though levels of competition vary by conference, so this may be less fair to Big East squads
- # of conference regular-season titles: see above
- Total # of wins: another aggregate measure
The model is very easy to use and works like this: pick a timeframe (you can go back 5 years, 10 years, whatever, though my data only goes back to 1985). Then choose the weighting you want to apply to each of the eight metrics above. The tool does the rest: it ranks each team on every metric (1, 2, 3, etc.) and creates a composite score, which is the weighted average of those ranks. Lowest composite score wins.
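For anyone curious, the rank-then-weighted-average step can be sketched in a few lines of Python. The team stats and weights below are made-up illustrative numbers, not the actual spreadsheet data, and ties are broken by input order, which is a simplification of whatever Excel does:

```python
def composite_scores(stats, weights):
    """Rank teams on each metric (1 = best; a higher raw value is better),
    then average those ranks using the given weights. Lowest score wins."""
    teams = list(stats)
    metrics = list(weights)
    total_w = sum(weights.values())
    ranks = {t: {} for t in teams}
    for m in metrics:
        # Sort descending so the team with the largest value gets rank 1.
        # Ties keep input order (a simplification of the real tool).
        ordered = sorted(teams, key=lambda t: stats[t][m], reverse=True)
        for rank, t in enumerate(ordered, start=1):
            ranks[t][m] = rank
    return {
        t: sum(weights[m] * ranks[t][m] for m in metrics) / total_w
        for t in teams
    }

# Illustrative (fake) numbers using just two of the eight metrics.
stats = {
    "UConn": {"titles": 4, "final_fours": 5},
    "Dook":  {"titles": 5, "final_fours": 12},
    "UNC":   {"titles": 5, "final_fours": 11},
}
weights = {"titles": 0.6, "final_fours": 0.4}

scores = composite_scores(stats, weights)
best = min(scores, key=scores.get)  # lowest composite score wins
```

Swapping in the real per-team tabs and all eight weights is just a matter of widening the `stats` and `weights` dictionaries.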
So what’s the final verdict? If you use my weightings (in the attached file), UConn ranks in the 2-5 range depending on the timeframe. Dook is consistently #1, which makes me sick, but facts are facts and the data speaks for itself. UNC is also up there. You can draw your own conclusions (I found lots of interesting things, but I’ve written enough for now), but UConn is clearly “elite” no matter what definition you use.
Final note: I’m sure I screwed some things up, and I make no claims about the accuracy of this data. I relied heavily on Wikipedia, so who knows. There is a tab for each team if you want to dive into the details.
What do you think?