Something was definitely wrong with 538's model, from a purely statistical standpoint. Baylor was only 11-3 against top 25 teams, using Sagarin computer rankings; that is 11 wins in 14 games, or about 78.6%. Thus Baylor should have had no better than roughly 78% odds of winning against a typical top 25 team, and MissState was better than average. So why 538 gave Baylor 90% odds yesterday is a mystery. As for UConn, if you think that Oregon is truly not a top 25 type team, then the odds are in the 99% range. But if you think that UO is in the 10-25 range and that this is a typical UConn team, then the odds should be about 95%; we only lose about 5% of the time to top 25 teams that are not in the top ten.
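To make the arithmetic explicit, here is a minimal sketch in Python. The function and variable names are mine, and the only inputs are the records cited above (taken at face value, not re-checked); the 538 figure is shown just for comparison:

```python
# Empirical win rates behind the percentages discussed above.

def win_pct(wins: int, losses: int) -> float:
    """Plain empirical win percentage from a won-lost record."""
    return wins / (wins + losses)

# Baylor against top-25 teams per Sagarin: 11-3
baylor_vs_top25 = win_pct(11, 3)      # ~0.786

# UConn against top-25 teams outside the top ten: assumed ~5% loss rate
uconn_vs_11_25 = 1 - 0.05             # 0.95

print(f"Baylor vs. a typical top-25 team: {baylor_vs_top25:.1%}")   # 78.6%
print(f"UConn vs. a 10-25 caliber team:   {uconn_vs_11_25:.1%}")    # 95.0%
print("538's posted number for Baylor:    90%")
```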
Interesting analysis: thanks!
Let's remember that in the one modelling game where people pour in unlimited resources and intelligence, and for which there are now nearly 200 years of extremely rich data (the stock market), the models are still not reliable predictors, certainly not in the short term. And in macroeconomics generally, the Fed's predictions of long-term economic trends are constantly shifting and remain at best an approximate game.
Compare all that to predicting basketball, with comparatively little data but nearly infinite variables. For example, the records against ranked teams you cite above don't factor in won-lost splits on home, away, and neutral courts. And what exactly counts as a neutral court? The available sample is also suspect because we don't know how to weight games from early in the season vs. late in the season, or games against teams that are improving or regressing, etc.
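Just to illustrate how arbitrary that weighting problem is, here is a toy Python sketch of one way you could discount early-season games and adjust for venue. To be clear, this is not what 538 or Sagarin actually does; every constant and the sample schedule below are made-up assumptions:

```python
from dataclasses import dataclass

@dataclass
class Game:
    won: bool
    week: int        # week of the season, 1 = earliest
    venue: str       # "home", "away", or "neutral"

# Arbitrary illustrative knobs; there is no agreed-upon way to set these.
RECENCY_DECAY = 0.9                                # older games count less per week back
VENUE_WEIGHT = {"home": 0.8, "away": 1.2, "neutral": 1.0}

def weighted_win_pct(games: list[Game], current_week: int) -> float:
    """Win percentage where each game's weight depends on recency and venue."""
    num = den = 0.0
    for g in games:
        w = (RECENCY_DECAY ** (current_week - g.week)) * VENUE_WEIGHT[g.venue]
        num += w * g.won
        den += w
    return num / den if den else 0.0

# Toy schedule: the same 3-1 record gives a different "weighted record"
# depending on when and where the games happened.
games = [
    Game(True, week=2, venue="home"),
    Game(True, week=6, venue="neutral"),
    Game(False, week=10, venue="away"),
    Game(True, week=12, venue="home"),
]
print(f"weighted win pct: {weighted_win_pct(games, current_week=13):.1%}")
```

Change the decay factor or the venue weights even slightly and the "record" a model sees shifts, which is exactly the problem with a sample this small.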
Modelling sports outcomes over a single season is like modelling the weather: we know with absolute certainty that somewhere in New England it's going to rain today...most likely...probably...possibly...but where and when it's going to rain exactly, well, everyone better bring an umbrella from CT to Maine.