Monday, July 04, 2005
2005 PREVIEW: METHOD TO THE MADNESS
- - - - - - - - - - - - - - - -
For the rest of the month of July, I'll be posting my preseason picks for the fall, beginning with a daily countdown of my Top 25 (starting with No. 25 and working my way to No. 1). Along the way, there will be my All-America team along with conference-by-conference, team-by-team analysis and predictions for all 119 teams. The site should be updated every day until the end of the preseason picks section - all of which will be archived to the right for easy reference - and then the real fun should be ready to begin.
A note on the picks: the goal is to rank not only how good a team is, but how successful its season will be; I want my preseason picks to mirror as closely as possible the actual results come January. In the past, I've tried to do this by picking a winner and a loser for every game. Bad idea, and here's why: beforehand, you look at a team and think "it's probably going to lose X number of games," and you fill out the schedule to fit that notion, sometimes picking games based solely on the need for one team to pick up another win or loss to fit your preconceived analysis. This doesn't give you a very good idea of how the season will play out, especially if you've picked a team to win all of three or four toss-up games when you know it's very likely to lose at least one - you just don't know which one. Upsets are inevitable, and a system needs to find a way to incorporate their potential while also recognizing the higher likelihood of the favored team winning.
The problem with picking based on a strictly black-and-white, definite win/definite loss basis is that it doesn't take into account probability, which is the essence of predicting anything. For example, tagging Stanford as a straight "win" on USC's schedule is one thing, because USC has an almost 100 percent probability of beating Stanford, but a game like Tennessee at Florida is essentially a toss-up; each team has about a 50 percent chance of winning, so putting it down as simply a "win" or "loss" for one team or the other isn't a fair representation of either team's real probability of winning or losing.
Accuracy dictated the creation of an algorithm that would take into account not only how good I judge a team to be, but also its schedule, in the form of strength (so as not to overly elevate a mediocre team with an easy slate) and likely success (so as to recognize the potential struggles of a highly rated team with a brutal road ahead). I needed the most solid numerical model that my limited time, patience and mathematical abilities would allow.
First, each team has to be evaluated. I assigned a number to all 119 teams based on the average of nine categories (each rated from one to ten based on projected statistics in each category):
Total Points Per Game
Rushing Yards Per Game
Passing Yards Per Game
Points Allowed Per Game
Rushing Yards Allowed Per Game
Passing Efficiency Defense
Special Teams (Judgment rating)
Coaching (Judgment rating)
2004 Wins
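As a sketch of that evaluation step (the Python here is my illustration, not part of the original system), each team's rating is simply the average of its nine 1-to-10 category scores. The categories are the ones listed above; the example scores are made up for a hypothetical team:

```python
# The nine categories come from the post; the example scores are invented.
CATEGORIES = [
    "Total Points Per Game",
    "Rushing Yards Per Game",
    "Passing Yards Per Game",
    "Points Allowed Per Game",
    "Rushing Yards Allowed Per Game",
    "Passing Efficiency Defense",
    "Special Teams",
    "Coaching",
    "2004 Wins",
]

def team_rating(scores):
    """Average nine 1-10 category scores into a single team rating."""
    assert len(scores) == len(CATEGORIES)
    return sum(scores) / len(scores)

# A hypothetical strong team:
print(round(team_rating([9, 8, 7, 9, 8, 7, 6, 8, 9]), 2))  # 7.89
```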
Now, statistics are of course unreliable because some teams rack up big numbers against weaker competition. To neutralize this effect, I divided the final scores of teams in the five major conferences (ACC, Big Ten, Big XII, PAC Ten, SEC) by two, scores of teams from the Big East by 2.1, scores of teams from the Mountain West and Conference USA by 2.15, scores of teams from the MAC and WAC by 2.16 and scores of teams from the Sun Belt by 2.25. So teams from the better conferences - which are presumably better teams - end up with higher adjusted scores that better reflect how good they actually are, whereas the unadjusted statistics would have given too much credit to teams from the smaller conferences.
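The adjustment can be sketched like this (the divisors are the ones given above; the raw score of 8.0 is a hypothetical example). A smaller divisor means a stronger conference, so the same raw number adjusts higher for an SEC team than for a Sun Belt team:

```python
# Conference divisors from the post; smaller divisor = stronger league.
DIVISORS = {
    "ACC": 2.0, "Big Ten": 2.0, "Big XII": 2.0, "PAC Ten": 2.0, "SEC": 2.0,
    "Big East": 2.1,
    "Mountain West": 2.15, "Conference USA": 2.15,
    "MAC": 2.16, "WAC": 2.16,
    "Sun Belt": 2.25,
}

def adjusted_rating(raw_score, conference):
    """Divide a team's raw aggregate score by its conference's divisor."""
    return raw_score / DIVISORS[conference]

# The same raw score of 8.0 adjusts differently by conference:
print(adjusted_rating(8.0, "SEC"))                 # 4.0
print(round(adjusted_rating(8.0, "Sun Belt"), 2))  # 3.56
```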
Now that each team had a serviceable rating, I was able to plug them into schedules. I then assigned five points to each game, to be divided based on each team's probability of winning. For example, there is no conceivable way Duke could be expected to beat Miami; therefore, Miami would get five points for the game, Duke would get zero. Less lopsided but still predictable games (say, Auburn over Ole Miss) would be split four-one (Auburn would get four points, Ole Miss one), and toss-ups or projected close calls were split three-two. There were no even splits - every game gave more points to one team.
Therefore, the more likely a team is to win a game, the more points it's awarded, while avoiding the problem of saying "Utah WILL win this tough game," or "Georgia WILL lose to Tennessee," when it's really a toss-up. You're still picking a "winner" for each game, but it's a better method of playing the odds.
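The 5-0, 4-1 and 3-2 splits are from the system above; the probability cutoffs in this sketch are my own guesses at where the lines fall, since the post assigns splits by judgment rather than exact thresholds:

```python
def split_points(favorite_chance):
    """Map a rough win probability for the favorite to a (favorite, underdog)
    split of the five points for a game. Cutoffs are assumed, not the post's."""
    if favorite_chance >= 0.9:    # near-certain win (e.g. Miami over Duke)
        return 5, 0
    elif favorite_chance >= 0.7:  # clear favorite (e.g. Auburn over Ole Miss)
        return 4, 1
    else:                         # toss-up or projected close call
        return 3, 2

print(split_points(0.95))  # (5, 0)
print(split_points(0.75))  # (4, 1)
print(split_points(0.50))  # (3, 2)
```

Note that there is no 2.5-2.5 branch, matching the rule that every game gives more points to one team.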
But while that system gave an idea of wins and losses, it still left the problem of schedule strength. To add that factor to the equation, I multiplied the number (one to five) assigned to each team for each game by its opponent's rating. For example, Miami received 7.5 points for beating Duke: five (win probability) times 1.5 (Duke's rating). Auburn got 9.6 against Ole Miss (win probability of 4 times 2.4 - Ole Miss' rating); for its 1-in-5 shot of beating Auburn, Ole Miss got Auburn's rating, 3.8 points. And so on. (A note here: I-AA opponents were automatically given a rating of one, so a win probability of five against, say, The Citadel, would net a team only five points.)
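The weighting step reduces to one multiplication; this sketch just reproduces the examples above (the team ratings 1.5, 2.4 and 3.8 are the ones given in the post):

```python
def game_points(win_points, opponent_rating):
    """Weight a game's 1-5 win-probability points by the opponent's rating.
    I-AA opponents are fixed at a rating of 1."""
    return win_points * opponent_rating

print(game_points(5, 1.5))  # Miami over Duke (rating 1.5) -> 7.5
print(game_points(4, 2.4))  # Auburn over Ole Miss (rating 2.4) -> 9.6
print(game_points(1, 3.8))  # Ole Miss's shot at Auburn (rating 3.8) -> 3.8
print(game_points(5, 1.0))  # any team over a I-AA opponent -> 5.0
```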
After every schedule had been filled, I averaged the score of each game to get a final score for each team, and teams were then ranked by that average. The final average also includes projected championship games and bowl games. It should also be noted that conference champions for those championships and bowls were selected based only on regular season conference games, so those championship games, bowl games and regular season non-conference games may have changed some of the ratings; just because I pick, say, Tennessee at the top of the SEC East DOES NOT necessarily mean I picked Tennessee to play in the SEC Championship Game - only that in January, after every game has been played, Tennessee will finish as the top-ranked team in the division.
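The final step can be sketched as an average and a sort; the per-game weighted scores below are hypothetical, not actual picks:

```python
def season_average(game_scores):
    """Average a team's weighted per-game points into its final rating."""
    return sum(game_scores) / len(game_scores)

# Hypothetical schedules of weighted game scores for three made-up teams:
teams = {
    "Team A": [7.5, 9.6, 12.0, 6.0],
    "Team B": [5.0, 8.0, 10.5, 9.0],
    "Team C": [3.8, 6.4, 7.2, 5.5],
}

# Rank teams by their season average, best first:
ranked = sorted(teams, key=lambda t: season_average(teams[t]), reverse=True)
print(ranked)  # ['Team A', 'Team B', 'Team C']
```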
Finally, know that my predictions are strictly based on the numbers resulting from my judgments; some of the results even surprised me. But I'm faithful to them with the conviction that, at the end of the year, the actual results on the field will be far more surprising.
It's not like anybody ever gets anything right in the preseason anyway (at the end of the year, I'll offer proof).
- - - - - - - - - - - - -
9:32 PM