
Massey Ratings

It is my understanding that Massey does not accurately reflect the current season until after 10 games have been played.
Yes. These rankings are based entirely on last year's results. You need 5 to 10 games to have enough data to do a decent ranking, because you need to compare your results to those of other teams who have played the same caliber of opponents. And it takes more games if your schedule has a lot of cupcakes.
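For anyone curious what's actually under the hood, the core of Massey's approach is a least-squares system: every game contributes one equation saying the rating gap between two teams should roughly match the scoring margin. The site's production model is more elaborate than this, so treat the following as a minimal textbook sketch, not the real formula:

```python
# Minimal least-squares rating sketch in the spirit of Massey's method.
# Each game (i, j, margin) contributes the equation
#     rating[i] - rating[j] ~ margin   (margin = score_i - score_j),
# and the whole season is solved at once via the normal equations.
import numpy as np

def massey_ratings(n_teams, games):
    M = np.zeros((n_teams, n_teams))
    p = np.zeros(n_teams)
    for i, j, margin in games:
        M[i, i] += 1          # games played by team i
        M[j, j] += 1
        M[i, j] -= 1          # head-to-head pairings
        M[j, i] -= 1
        p[i] += margin        # cumulative point differential
        p[j] -= margin
    # M is singular (ratings are only defined up to a constant),
    # so replace the last equation with "ratings sum to zero".
    M[-1, :] = 1.0
    p[-1] = 0.0
    return np.linalg.solve(M, p)

# Hypothetical early-season results among four teams:
games = [(0, 1, 12), (1, 2, 5), (2, 0, -3), (2, 3, 20)]
print(massey_ratings(4, games))
```

The need for common (or at least connected) opponents falls straight out of the math: until the schedule graph links two groups of teams, there's no equation tying their ratings together, which is why a handful of cupcake games tells the solver almost nothing.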
 
Pretty sure Massey ratings require fresh games to change, and the ratings in the first month of the season are still reflecting some of the previous year's data.
Early Massey ratings not only used data from the previous year, but also from at least one year before that.
 
It is my understanding that Massey does not accurately reflect the current season until after 10 games have been played.
Sorry, I was only trying to give other BYers a link to the ratings in case they wanted to bookmark it (like I did).
In past seasons, people (me included) would post links to the weekly AP Top 25 but I don't recall anyone posting the link to the Massey ratings.
And I am aware of how the ratings are generated, meaning the current preseason ratings are "useless", but I didn't think that I had to mention that to anyone that might want the link. I apologize.
 
DD, you just exposed one of the major flaws with metric-based rating systems. Such systems cannot tell, just from analyzing the numbers, that a team is about to go into the toilet, or ready to explode as UConn did in the latter half of the season. I think at best these ratings can give some insight into teams that you know very little about, not the teams you are intimately familiar with. For those teams, eyeballs work best.
I'd just like to add that UConn was like a muscle car sitting at a red light, revving its engine in a menacing way. You just knew what was going to happen when the light turned green. No way can you capture that with a metrics-based rating system, but you have a shot at capturing it with your eyeballs.
 
I'd just like to add that UConn was like a muscle car sitting at a red light, revving its engine in a menacing way. You just knew what was going to happen when the light turned green. No way can you capture that with a metrics-based rating system, but you have a shot at capturing it with your eyeballs.
To your point, I remain baffled that, after UConn absolutely dismantled SC in Columbia last season, many of the so-called pundits continued to push either SC or UCLA as the team to beat for the national championship. My eyeballs told me otherwise, to the point where I began to wonder if I needed new glasses…. 🥸
 
I'd just like to add that UConn was like a muscle car sitting at a red light, revving its engine in a menacing way. You just knew what was going to happen when the light turned green. No way can you capture that with a metrics-based rating system, but you have a shot at capturing it with your eyeballs.
Man, what a great metaphor!
 
It is my understanding that Massey does not accurately reflect the current season until after 10 games have been played.
This number 10 has been plucked out of the ether and propagated as an urban myth. There's no set number of games for the previous season's results to disappear.
 
This number 10 has been plucked out of the ether and propagated as an urban myth. There's no set number of games for the previous season's results to disappear.
To identify a trend, it's generally recommended to have at least six data points to reduce the risk of error, as fewer points can lead to misleading conclusions. However, for more reliable analysis, having around 25 data points is often suggested to establish a stable trend.
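That also fits with how the early-season blending behaves: rather than last season's games vanishing at some magic game count, a system like this more plausibly down-weights them gradually. Here's a purely illustrative toy (the half-life knob and the linear blend are my inventions, not anything Massey has published):

```python
# Toy illustration of phasing out last season's rating gradually.
# The half_life value is a made-up assumption, not Massey's actual scheme.
def blend_weight(games_played, half_life=6.0):
    """Weight still given to last season's rating after `games_played` new games."""
    return 0.5 ** (games_played / half_life)

def blended_rating(prior_rating, current_rating, games_played):
    w = blend_weight(games_played)
    return w * prior_rating + (1 - w) * current_rating

for g in (0, 3, 6, 10, 15):
    print(f"{g:2d} games -> prior-season weight {blend_weight(g):.3f}")
```

Under that kind of scheme the old data never hits zero at exactly game 10; it just stops mattering much, which squares with there being no set number.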
 
To your point, I remain baffled that, after UConn absolutely dismantled SC in Columbia last season, many of the so-called pundits continued to push either SC or UCLA as the team to beat for the national championship. My eyeballs told me otherwise, to the point where I began to wonder if I needed new glasses…. 🥸
As has been noted elsewhere, filtering data for relevant results is useful, and I'd like to see what this shows about lots of teams, including SC. The first blowout down in Columbia exposed Dawn's front court, and especially Chloe Kitts, who was asked to do more than her physical skills allowed. Don't get me wrong, I like Chloe's game, but she is not a physical post player and that's what she was being asked to be. Her +/- numbers were the best on the team, though her defensive +/- was weakest among the starters. The same story is visible in the win share numbers: overall highest, but on defense among the lowest. And this reality is even more visible in the competitive games than in the blowouts -- the Texas games, the UCLA game and the UConn games.

If I go by the eyeball test, what I see is that Dawn's front court reserves were weak, especially after the loss of Watkins and Walker. Dauda and Tac simply couldn't carry the burden of the position in competitive games. As a result, it was only Feagin, Edwards and Chloe, and no one would seriously think this group was comparable to the great SC teams of recent years. In effect, they had a front court that topped out at 6'3" and wasn't nearly as athletic as it had been in the past.

Interestingly, on Geno's side, after that first SC game, he worried that his team would not recognize the possibility that a rematch could go very differently. Then he walked into the locker room to see Paige telling them all to expect a rematch to be much more competitive. That shows the other side of the eyeball test: Paige led a very savvy team that wasn't blind to its own vulnerabilities. And when the unsung heroes, like Jana and Ice, stepped up in big games, she knew to celebrate them loud and long.
 
