Post by Ryan - Guardians on Sept 1, 2015 13:05:12 GMT -6
A place to discuss potential changes to the ratings system without blowing up our e-mail inbox.
Here are a few ideas I've been tossing around since beginning the process for WHL21 ratings. Some are no-brainers in my opinion; some may be more of a stretch, especially with the more veteran GMs of this league:
1) No matter what we do with the rating tables, even if they stay exactly the same, I propose that we move to a three-year weighted average. When calculating the ratings, we are supposed to look at past history anyway, yet it seems that in an effort to speed along a very lengthy process, we often do not consider past seasons and simply throw the most current season into the tables and pop out a new rating.
So here's how a weighted average would work (and I would compile the necessary statistics): each of the three previous seasons would count, with more weight on the most recent. For the WHL22 ratings, a player's three-season weighted average would include 15-16 three times, 14-15 twice and 13-14 once. Assuming Steven Stamkos scores 40 goals this coming season (either in 82 games, or prorated to 40), his weighted average would be as follows:
15-16: 40
15-16: 40
15-16: 40
14-15: 43
14-15: 43
13-14: 55
Total: 261
Average: 43.5 (rounds to 44)
Stamkos would receive a 9 rating for shooting based on the current tables.
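To make the 3-2-1 weighting concrete, here's a quick Python sketch. The function name and the round-to-nearest step are mine, purely for illustration:

```python
def weighted_goals(most_recent, prior, two_back):
    """Weight the last three seasons 3x/2x/1x and average over the 6 entries."""
    total = 3 * most_recent + 2 * prior + 1 * two_back
    return total, total / 6

# Stamkos example: 40 in 15-16, 43 in 14-15, 55 in 13-14.
total, avg = weighted_goals(40, 43, 55)
print(total, avg)  # 261 43.5 -- rounds to 44, a 9 for shooting on the current tables
```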
2) Creating more complex and comprehensive formulas for the rating tables. Similar to item 1, most of the rating categories in the current tables have a description that instructs the rating creator to consider more than one criterion. For the shooting example, it suggests that we not simply look at total goals scored, but also a player's tendency to shoot, yet the table only provides a guideline for total goals scored. So let's create a formula that combines goals scored with shots taken, OR agree that we will rate only on goals scored and exclude shots taken altogether; either would be a favorable outcome.
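For example, a goals-plus-shots formula could look something like this. Every number here (the shot weight, the table cutoffs) is a placeholder we would need to agree on, not anything from the current tables:

```python
# Placeholder cutoffs: (minimum blended score, rating), highest first.
SHOOTING_TABLE = [(44, 9), (38, 8), (32, 7), (26, 6), (20, 5), (14, 4), (0, 0)]

def shooting_rating(goals, shots, shot_weight=0.03):
    # Credit a small bonus for shot volume on top of goals scored.
    score = goals + shot_weight * shots
    for cutoff, rating in SHOOTING_TABLE:
        if score >= cutoff:
            return rating
    return 0

# 40 goals on 250 shots -> 9 under these placeholder cutoffs
print(shooting_rating(40, 250))
```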
Some ratings, such as checking, would benefit tremendously from this change. As it stands, checking is largely an opinion-based rating. I absolutely LOATHE the opinion-based ratings; I believe they are the biggest reason the ratings process becomes so dragged out every summer. A formula combining items such as SHTOI, blocked shots and +/- (or something like Corsi or Fenwick), as well as a "positional bonus", could be used to create a checking rating. The positional bonus would be something like: every D gets a bonus of 3 in checking, a C gets 2 and a W gets 1.
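One possible shape for that checking formula, with the positional bonus from above. The stat weights are invented for illustration; the league would have to vote on real ones:

```python
POSITION_BONUS = {"D": 3, "C": 2, "W": 1}

def checking_rating(position, sh_toi_per_game, blocked_shots, plus_minus):
    score = (
        sh_toi_per_game * 1.5    # shorthanded minutes per game
        + blocked_shots * 0.02   # season blocked-shot total
        + plus_minus * 0.03      # could swap in Corsi/Fenwick here instead
        + POSITION_BONUS[position]
    )
    return max(0, min(9, round(score)))

# A shutdown defenseman: 3:00 SH/game, 150 blocks, +10 -> capped at 9
print(checking_rating("D", 3.0, 150, 10))
```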
The goalie ratings need some serious help; there is not one single rating for goaltenders that is based on actual results on the ice. SV% and GAA should heavily factor into goalie ratings, yet they are totally absent. Creating formulas based around key goaltender stats would really help the process and the end results.
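A rough sketch of what a results-based goalie formula could look like. The anchor points (.900 SV%, 3.00 GAA) and the scaling factors are assumptions only:

```python
def goalie_rating(sv_pct, gaa):
    # Reward save percentage above .900 and goals-against below 3.00.
    sv_component = (sv_pct - 0.900) * 200   # e.g. .930 contributes 6 points
    gaa_component = (3.00 - gaa) * 3        # e.g. 2.00 contributes 3 points
    return max(0, min(9, round(sv_component + gaa_component)))

# A .925 / 2.30 season -> 7 under these placeholder anchors
print(goalie_rating(0.925, 2.30))
```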
3) Include statistics from other leagues as part of the three-year average. Statistics from all other leagues could be used in the rating process. Now, we know that 40 goals in the AHL does not equal 40 goals in the NHL, but we can develop a multiplier to reflect the difference. Each league would need to be ranked; as an example, if we all agreed that the AHL was worth 50% of the NHL, it would be easy to say that 40 AHL goals equal 20 NHL goals (40 x .5). Every league could be ranked this way: every Euro league, all minor and junior leagues, NCAA, USHL...
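The multiplier idea is just a lookup table; only the AHL's 50% comes from the example above, the other factors are made up:

```python
# Only the AHL 0.50 figure is from the example; the rest are placeholders.
LEAGUE_FACTOR = {"NHL": 1.00, "AHL": 0.50, "NCAA": 0.35, "USHL": 0.25}

def nhl_equivalent_goals(goals, league):
    return goals * LEAGUE_FACTOR[league]

print(nhl_equivalent_goals(40, "AHL"))  # 40 x .5 = 20.0
```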
4) Cap ratings based on other leagues. This is where we really start to deviate from the current system. Although we currently cap NHL first-rounders at a 70 OV rating, this idea is based on assigning a player to a league each year, rating him only on that league, and setting rating limits per league. An agreed-upon threshold would need to be met. Just throwing out a number: if a player plays 30 games in the NHL, then he gets a rating based only on his NHL statistics. If he plays fewer than 30, he is rated on his statistics from whichever league he appeared in for the most games. So a guy who played 26 NHL games and 40 AHL games would be rated only on his AHL statistics, and then capped at an agreed-upon number, say no rating greater than 7; a guy rated on his junior stats would be capped at 5. I don't really prefer this idea, but I feel it's still worth discussing and exploring during this process.
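The assignment-and-cap logic in code form, using the sample numbers above (30-game NHL threshold, caps of 7 for AHL and 5 for junior); all of them are up for debate:

```python
LEAGUE_CAP = {"NHL": 9, "AHL": 7, "JUNIOR": 5}
NHL_THRESHOLD = 30

def rating_league(games_by_league):
    """games_by_league: dict like {'NHL': 26, 'AHL': 40}."""
    if games_by_league.get("NHL", 0) >= NHL_THRESHOLD:
        return "NHL"
    # Otherwise rate him on the league where he played the most games.
    return max(games_by_league, key=games_by_league.get)

def capped_rating(raw_rating, league):
    return min(raw_rating, LEAGUE_CAP[league])

# 26 NHL games misses the threshold, so he's an AHL player; an 8 becomes a 7.
lg = rating_league({"NHL": 26, "AHL": 40})
print(lg, capped_rating(8, lg))  # AHL 7
```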
5) Assign ratings on a bell curve instead of the ratings table. In this scenario, ratings are not tied to a specific number achieved in a statistical category; rather, the "table" would float each season. This would really change the rating window to 3-9 instead of 0-9. A rating of 6 would be considered average, where most of the players would fall. So we prorate for 82 games (possibly with the three-season average and equalized statistics from other leagues), rank every player top to bottom, and then assign ratings on the following "bell curve" table...
Top 5%: 9
6%-15%: 8
16%-30%: 7
31%-70%: 6
71%-85%: 5
86%-95%: 4
Bottom 5%: 3
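The percentile table above can be applied mechanically: rank everyone, compute each player's percentile, look up the bucket. This sketch uses a midpoint percentile (my choice, to avoid edge cases in small groups):

```python
# (upper percentile bound, rating), matching the table above.
CURVE = [(0.05, 9), (0.15, 8), (0.30, 7), (0.70, 6), (0.85, 5), (0.95, 4), (1.00, 3)]

def curve_ratings(scores):
    """scores: stat values, higher = better. Returns ratings in input order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n = len(scores)
    ratings = [0] * n
    for rank, i in enumerate(order):
        pct = (rank + 0.5) / n  # midpoint percentile of this player's rank
        for cutoff, rating in CURVE:
            if pct <= cutoff:
                ratings[i] = rating
                break
    return ratings

# Ten players, best to worst; note the big middle bucket of 6s.
print(curve_ratings([55, 43, 40, 30, 22, 18, 12, 9, 5, 2]))
# [9, 8, 7, 6, 6, 6, 6, 5, 5, 4]
```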
My feelings here are similar to item #4: I personally don't see this as the best solution, but it does have some merit and could be discussed further.
I'm sure there are some other ideas that I haven't covered; if they come up they can definitely be considered. I feel like the first three items would really improve the integrity of the ratings and remove a lot of the debates that come up after the ratings are completed. Under these proposals, once the league GMs agree to a certain framework, it would remove the opinion-based ratings, moving us to a purely statistic-driven rating system, and eliminate room for debate, since we have all (or a great majority) agreed to the process and criteria and presumably accept the outcome as is.