The NFL’s passer rating stat is an odd duck. Every other official statistic recorded by the league is either a raw number (yardage, TDs, etc.) or a simple average (yards/attempt, completion percentage, etc.), and with good reason: they’re easy to tally, easy to calculate, and even easier for fans to understand. Passer rating, on the other hand, produces numbers that aren’t easy or intuitive to interpret, and the formula used to calculate them looks like high-octane nightmare fuel for the mathematically challenged. On the plus side, it’s also the key to one of the most consistently reliable statistical predictors of success in all of football, but more on that in a minute.
Much like the old joke about democracy, passer rating is the worst stat in the world for measuring passing efficiency, except for all the other ones. You see, back when the league started keeping official stats in 1932, its best passer was determined by total passing yardage alone, which wasn’t a terribly satisfying measuring stick. In 1938 the league switched to deciding the passing crown via completion percentage, which was replaced in turn in 1941 by a goofy inverse system that ranked QBs against each other in six different stats. Several more statistical measuring sticks followed over the next few decades, some more complex than others, but all of them had one thing in common: nobody liked them.
That revolving door crap ended in 1973 when the current passer rating formula was introduced. For the men who created it, chief among them Pro Football Hall of Fame executive Don Smith, the reasoning behind the formula was surprisingly simple. First, the stat couldn’t be dependent on how other QBs performed, so ranked variables were out. Second, it needed to judge a QB’s performance based on averages, not raw stats. And third, the result had to be normalized on a 100-point scale for ease of understanding, although results above 100 were possible for excellent performances — an A+ in passing, if you will.
As a baseline, the formula used passing stats from the 1970 season. A league average performance by 1970 standards would net you a 66.7 rating (in case you’re wondering, that number is based on nothing but the personal preference of the formula’s inventors). A performance in the 100s (the “perfect” score of 158.3 is just a byproduct of the average being 66.7) required a QB to post numbers that were, at least for the time, close to record levels. And if a QB really cratered, they would bottom out at zero.
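If you want to see how those design goals shake out in practice, here’s a minimal sketch of the standard NFL formula in Python: four per-attempt components (completions, yards, TDs, INTs), each clamped between 0 and 2.375, averaged, and scaled to that 100-ish range. The stat lines in the examples are invented for illustration.

```python
def passer_rating(comp, att, yards, td, ints):
    """NFL passer rating: four per-attempt components, each clamped
    to [0, 2.375], then averaged and scaled so that a 1970-average
    line lands at 66.7 and a maxed-out line at 158.3."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    a = clamp((comp / att - 0.3) * 5)       # completion percentage
    b = clamp((yards / att - 3) * 0.25)     # yards per attempt
    c = clamp((td / att) * 20)              # touchdown rate
    d = clamp(2.375 - (ints / att) * 25)    # interception rate
    return (a + b + c + d) / 6 * 100

# A 1970-average line (50% completions, 7 ypa, 5% TDs, 5.5% INTs)
# works out to the 66.7 baseline:
print(round(passer_rating(100, 200, 1400, 10, 11), 1))   # 66.7
# Maxing out every component hits the "perfect" 158.3:
print(round(passer_rating(16, 20, 250, 3, 0), 1))        # 158.3
```

Note how the clamping is what creates both the zero floor and the 158.3 ceiling: once a component maxes out at 2.375 (or bottoms out at 0), piling on more production in that category doesn’t move the rating.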
So yeah, it’s not perfect: rule changes and offensive innovations have led to a steady creep in ratings over the years, and there’s most certainly a dash of pure arbitrariness in there. But that arbitrary touch isn’t excessive, and once you’ve had it explained to you, the logic behind it is reasonable enough. And hey, at least you don’t have to work out the stats for every goddamn passer in the entire league anymore just to figure out one QB’s rating.
Besides, passer rating is only meant to measure one thing, and that’s overall passing efficiency. Among other things, it doesn’t factor in a QB’s leadership, scrambling ability, or ball handling skills in the backfield (think option reads, draws, and fakes), nor does it weight their performance in clutch situations. And if you prefer, you can always look at the rating as one that applies to the offense as a whole rather than just the QB — after all, the performance of the o-line, run game, and receiving corps has a little bit to do with the success of a team’s passing attack, too. And if you turn the stat around, you can look at it as a rating of how well the opposing defense was able to impede the passing efficiency of the offense they faced. Hooray for multi-purpose stats!
And now that I’ve given you all far, far too much background on all this, I’ll finally get to the point. What Smith et al couldn’t have possibly anticipated about their passer rating formula was how impossibly awesome it gets when you combine a team’s two passer ratings. Subtract the defensive passer rating from the offensive passer rating — for the math averse, you’re just comparing how efficiently your offense passed the ball to how well your pass defense held up — and out the other end pops a kickass little stat called the passer rating differential.
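In code, the differential really is just that subtraction. Here’s a sketch that reuses the standard rating formula and applies it to two stat lines, one for your offense and one for what your pass defense allowed; the season totals below are made up for illustration.

```python
def passer_rating(comp, att, yards, td, ints):
    """Standard NFL passer rating from a raw stat line."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    return sum(map(clamp, [(comp / att - 0.3) * 5,
                           (yards / att - 3) * 0.25,
                           (td / att) * 20,
                           2.375 - (ints / att) * 25])) / 6 * 100

def rating_differential(off_stats, def_stats):
    """Offensive passer rating minus the rating allowed by the defense.
    Each argument is a (comp, att, yards, td, ints) stat line."""
    return passer_rating(*off_stats) - passer_rating(*def_stats)

# Hypothetical season totals: what your QB did vs. what your
# defense gave up to opposing QBs.
offense = (320, 480, 3900, 28, 9)
defense_allowed = (300, 540, 3300, 15, 20)
print(round(rating_differential(offense, defense_allowed), 2))
```

A positive number means your passing game out-executed the passing games you faced; a negative number means the opposite, and the great teams tend to run it up well into the double digits.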
Simply put, passer rating differential is, in the words of Kerry Byrne of Cold Hard Football Facts, “the mother of all stats.” From an individual game perspective, winning the passing efficiency battle has a higher correlation to victory (~80%) than any other stat you care to put up against it, passing or otherwise. More importantly, that correlation has been remarkably consistent over time.
The predictive ability of passer rating differential gets even better when you look at it in terms of winning championships. Since 1940 (the beginning of the T-formation era), 74 NFL teams have won a championship or Super Bowl¹, and 70 (94.59%!!!) of those teams ranked in the top ten in passer rating differential for that season.
The 2013 Seahawks, by the way, were number one in passer rating differential with a crazy good 38.97. For comparison, the 1985 Bears’ differential was 26.10, the 2000 Ravens had a 10.21, and the 2002 Buccaneers had a 37.97 (which raises the question, why the hell aren’t the ’02 Bucs brought up more often when the talking heads get into discussions of the all-time great defenses?).
However, that’s where the feel-good train ends its run, because as you might have guessed the 2014 Hawks’ passer rating differential is not up to the standard they set last season. Take a look at how they currently stack up against the rest of the league.
So yeah, not great, but judging by the proximity of Carolina and Arizona, Seattle isn’t the only great defense from last year that’s having trouble repeating their dominance. Still, hope is not lost here — here’s how those season stats break down for the Seahawks by game:
Yes, the Seahawks’ defense is underperforming, but aside from the Cowboys game the offense appears to be doing a pretty decent job passing the ball. And in case you’re wondering, prior to playing Dallas the Hawks ranked up in the top ten with a differential of 15.53, which tells us something else: the season is still young enough that a single game has enough weight to cause big swings in a team’s cumulative stat line. So relax, take a deep breath, and don’t throw yourself into traffic or anything — there’s plenty of time to turn this thing around.
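That small-sample effect is easy to demonstrate. Here’s a sketch, again reusing the standard rating formula with invented stat lines, that shows how folding one lopsided loss into a three-game sample yanks the cumulative differential around. Note that the cumulative ratings are computed from summed raw stats, not by averaging per-game ratings, which is why one ugly game carries so much weight early on.

```python
def passer_rating(comp, att, yards, td, ints):
    """Standard NFL passer rating from a raw stat line."""
    clamp = lambda x: max(0.0, min(x, 2.375))
    return sum(map(clamp, [(comp / att - 0.3) * 5,
                           (yards / att - 3) * 0.25,
                           (td / att) * 20,
                           2.375 - (ints / att) * 25])) / 6 * 100

def cumulative_differential(games):
    """Each game is a pair: (offense stat line, stat line allowed by
    the defense). Sum the raw stats across games, then rate."""
    off = [sum(vals) for vals in zip(*(g[0] for g in games))]
    dfn = [sum(vals) for vals in zip(*(g[1] for g in games))]
    return passer_rating(*off) - passer_rating(*dfn)

# Three solid (hypothetical) games...
games = [((20, 30, 250, 2, 0), (18, 34, 190, 1, 1)),
         ((22, 32, 270, 2, 1), (20, 36, 210, 1, 2)),
         ((19, 28, 230, 1, 0), (17, 30, 180, 1, 1))]
before = cumulative_differential(games)
# ...then one lopsided loss drags the whole season line down.
games.append(((15, 35, 140, 0, 3), (25, 30, 320, 4, 0)))
after = cumulative_differential(games)
print(round(before, 2), round(after, 2))
```

With these made-up numbers the differential swings from a dominant positive figure to slightly negative on the strength of a single blowout; stretch the same stats over a sixteen-game sample and one bad afternoon barely dents it.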
Unless of course you’re the guy who cut me off yesterday on my way to the post office, in which case you should definitely consider doing the traffic thing.
¹ The 1968 Jets and 1969 Chiefs have been excluded here, as they played their entire seasons up to the Super Bowl versus other AFL teams and I haven’t had time to work out all the passer rating differential stats for that league. There are only so many hours in the day, you know?