Rankings, Explained
The rankings on this site attempt to take a variety of available inputs and information and translate them into a 1-to-134 list that allows for an effective comparison of any two teams in the FBS (Division 1-A). Even with all this information, however, determining which college football teams are better than others is still far more an art than a science. Teams improve over time, suffer injuries, and often win or lose by chance as much as by skill. Couple all of that with the fact that most teams play only 12 or 13 games (i.e., facing around 9% or less of the rest of the field, partly due to playing FCS teams), and the resulting win-loss records hardly tell the full story, even if winning the games on the schedule is paramount. As such, I've tried to incorporate as much data as I can to ensure that these rankings at least provide some guidance for making bowl game picks.
The rankings are based on the following factors, combined into an aggregate rating (AGR): the College Football Playoff rankings, the AP and Coaches polls, the other polls and computer rankings aggregated at masseyratings.com, SRS, and winning percentage.
The rankings are calculated thus: the poll data is weight-averaged, with slight emphasis placed on the playoff rankings at 18% (because they are the most-used rankings once they come out, and they determine many bowl game participants). The AP and Coaches polls are then averaged together, counting for 13.5%, and the other polls and rankings (gathered from masseyratings.com, linked above) make up 11.5%. In total, the polls make up 43% of AGR. Next comes SRS (the Simple Rating System), at 15%. Finally, winning percentage makes up 35% of AGR. As I said above, the record is the one thing truly in a team's control, so winning the games on the schedule is why this factor gets as much weight as it does.
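For the curious, here is a minimal sketch of that weighted average in Python. The weights come straight from the paragraph above; everything else is an assumption on my part: each ranking is first normalized to a 0-to-1 "higher is better" score, the result is scaled by the total weight, and the example team's numbers are made up.

```python
# Weights as described above (an assumption: each input is normalized
# to a common 0-to-1 scale before weighting).
WEIGHTS = {
    "cfp": 0.18,          # College Football Playoff rankings
    "ap_coaches": 0.135,  # average of the AP and Coaches polls
    "other_polls": 0.115, # other polls/rankings via masseyratings.com
    "srs": 0.15,          # Simple Rating System
    "win_pct": 0.35,      # winning percentage
}

def rank_to_score(rank: int, field_size: int = 134) -> float:
    """Convert a 1-to-134 ranking into a 0-to-1 score (1st place -> 1.0)."""
    return (field_size - rank) / (field_size - 1)

def aggregate_rating(scores: dict) -> float:
    """Weight-average the normalized component scores into AGR.

    Dividing by the total weight keeps AGR on the same 0-to-1 scale
    as the inputs.
    """
    weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return weighted / sum(WEIGHTS.values())

# Hypothetical team: ranked 4th by the CFP committee, 5th by the
# AP/Coaches average, 6th by the other polls, with an SRS already
# scaled to 0-1 and an 11-2 record.
team = {
    "cfp": rank_to_score(4),
    "ap_coaches": rank_to_score(5),
    "other_polls": rank_to_score(6),
    "srs": 0.85,
    "win_pct": 11 / 13,
}
print(f"AGR: {aggregate_rating(team):.3f}")
```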
If you have some time to kill and are interested in this kind of thing (like I am), check out those aggregated Massey rankings. The site links to each individual poll, and many of them are computer- or otherwise rules-based systems, each applying its own criteria.