
Saturday, March 20, 2010

RBIs and Touchdowns

Joe Posnanski recently posted a nice article about the relative lack of value of RBIs, something that virtually any baseball fan with more than rudimentary knowledge of the game understands. These two paragraphs, in particular, helped solidify in my mind a similar idea I'd had for a while about football:

But it really isn’t so. Take this situation: One out, Rick Manning cracks a line drive single. Duane Kuiper hits a high chopper in front of the plate, he’s out, but Manning takes second. Jim Norris, with first base open and two outs, works for a walk. Manning and Norris move up on a wild pitch. Pitcher works around Andre Thornton, and he walks. Then, with a 3-1 count and the bases loaded, the pitcher has to throw a fastball that catches too much of the plate, and Rico Carty rolls a single between short and third, scoring two runs.

That’s a fairly typical sequence, I would guess. In our mind and in our statbook, Carty is the hero — two RBIs. He is, in fan and media shorthand, RESPONSIBLE for those runs. But he isn’t. Carty’s single didn’t make those two runs happen. Those two runs scored because of a series of events, and Carty’s single was just the last of those events.

I've emphasized that last sentence to drive home the notion that I have the same feeling regarding touchdowns. Last season, Adrian Peterson had 1,383 yards, a 4.3 average, and 18 touchdowns. In 2008, he had 1,760 yards, a 4.8 average, and 10 touchdowns. And I'd wager that at least a third of football fans would point to his 18 TDs in 2009 as a positive sign, despite the lower yardage and yards per carry.

I don't. I think they're meaningless, except to fantasy football players -- kinda like the RBI is to fantasy baseballers.

We've all seen drives where the quarterback passes and the featured back runs the ball down to the 1-yard-line. Then, in comes Mike Alstott (or Jerome Bettis or Craig Heyward) to plunge it in from the one. Alstott is the Rico Carty of this scenario. To paraphrase JoePo: Alstott's run didn’t make that touchdown happen. That touchdown was scored because of a series of events and Alstott's run was just the last of those events.

To be sure, there are times when the player scoring the touchdown is the "hero" of the drive and fully deserving of the stat bump and the accolades that come with scoring the TD. But take another look at Peterson's 18 TDs in 2009: nine of them came from one yard out, and only four came from farther than five yards out. Peterson's good, no question, but a lot of backs could have scored from those distances, just as a lot of players could hit a single -- like Rico Carty did -- and drive in two runs in JoePo's scenario. All of which isn't to say AP's not a great player. He is, but it's not because he scored 18 TDs last year.

This is also why I've been slow to adopt the notion, now professed by the guys at Pro-Football-Reference, that a TD should be worth 20 adjusted yards (instead of 10). To me, scoring a touchdown doesn't require much more skill than any other run and shouldn't get an outsized reward in the stats. Yes, it is more difficult to gain a yard from the one-yard-line than from the 50, and I'm willing to give the 10-yard bump for that, but 20 just seems like too much to me.

Finally, JoePo goes on in his article to name a few cases where teams that added players who put up big RBI numbers despite poor batting averages actually scored fewer runs the next season. I thought I'd see if there was any similar correlation in football. I searched for players who scored more than 15 rushing TDs ("high RBI totals") but averaged fewer than 4.0 yards per carry ("low batting average/OBP") and got this list of nine players. (The Redskins apparently love these guys!) Did they improve their team's scoring in the year they scored so many TDs? Let's see:

John Riggins: 24 TDs in 1983
1983 Redskins: 33.8 points per game
1982 Redskins: 21.1 ppg

Terry Allen: 21 TDs in 1996
1996 Redskins: 22.8 ppg
1995 Redskins: 20.4 ppg

George Rogers: 18 TDs in 1986
1986 Redskins: 23.0 ppg
1985 Redskins: 18.6 ppg

LaDainian Tomlinson: 17 TDs in 2004
2004 Chargers: 27.9 ppg
2003 Chargers: 19.6 ppg

Shaun Alexander: 16 TDs in 2002
2002 Seahawks: 22.2 ppg
2001 Seahawks: 18.8 ppg

Pete Banaszak: 16 TDs in 1975
1975 Raiders: 26.8 ppg
1974 Raiders: 25.4 ppg

Lenny Moore: 16 TDs in 1964
1964 Colts: 30.6 ppg
1963 Colts: 22.6 ppg

Karim Abdul-Jabbar: 16 TDs in 1997
1997 Dolphins: 21.2 ppg
1996 Dolphins: 21.2 ppg

LenDale White: 16 TDs in 2008
2008 Titans: 23.4 ppg
2007 Titans: 18.8 ppg

Well, that's not quite what I was expecting. In every case except one (the Dolphins scored exactly 339 points in both 1996 and 1997), the team scored more points in the high-TD year than in the previous year -- and it usually wasn't even close.

My only redeeming thought is that, unlike an "RBI machine," a high-TD featured runner can score around a quarter to a third of his team's points, whereas even the best hitters account for only about one-sixth to one-seventh of their team's RBI total. Thus, with an outlier high-TD season, a back can have a bigger impact on his team's overall scoring than the RBI machine can. I might also point out that five of these nine players were just barely under the 4.0 yards per carry mark (3.87 or better), so it's not like they were truly awful, and I haven't looked up any other team improvements that might have accounted for the increases in scoring. If I found a way to incorporate Adrian Peterson's 2008-09 seasons into this mix, I'd see that the Vikings scored 470 points in 2009 (when Peterson scored 18 TDs) and 379 in 2008 (when Peterson scored 10 TDs). But I think we all know who was responsible for that.

Maybe a wider search using this list (more than 12 rushing TDs and fewer than 3.75 yards per carry) would shed more light on the subject, but that's for another day. I'll still draft AP #1 overall in my fantasy football league, but I'd prefer he have a season like his 2008 rather than his 2009.

Tuesday, August 4, 2009

Grinding it out: Good or bad?


I watch a lot of football with a Steelers fan. Invariably, it comes up while watching the Steelers -- either from him or by the announcers -- that, with a good running game and sturdy defense, the Steelers often attempt to "shorten the game" by "keeping the other team's offense off the field." The Steelers aren't the only team that does this, but it seems to come up a lot with them or when watching any team that's playing against Peyton Manning or Drew Brees or another elite quarterback (or, less frequently, against an elite running back or wide receiver).

This had always struck me as odd, and I think the feeling goes back to the first Super Bowl I really watched, Super Bowl XXV between the Giants and Bills. You might recall that the Giants controlled the ball for 40 of the 60 minutes of that game, including two long drives at the end of the first half and beginning of the second that, the analysts noted, kept the Bills' offense on the sideline for over an hour of real time. With Jim Kelly and Andre Reed and Thurman Thomas lined up on the other sideline, it seemed like a good strategy, right?

But then I got to thinking...don't football teams have, roughly, the same number of possessions each? Barring some shenanigans around the end of the first half or at the start of overtime, when you're done with your drive, the other team gets the ball. So what's the point of having a drive that's three minutes long versus one that's 10 minutes long? If, over a 60-minute game, each drive takes three minutes, that's 20 drives -- 10 for each team. If each drive takes five minutes, that's 12 drives -- 6 for each team. How does that actually help anyone? Yes, the Giants kept the Bills offense on the sideline in Super Bowl XXV, but, by the same token, they reduced the number of opportunities their offense had to score. What's the strategy there?

(I'll note here that I've never bought too heavily into the notion of "tiring out the defense" on long drives. Unless I'm mistaken, offensive players are pushing, shoving, and sweating on a drive, too. A 10-minute drive should have about the same effect on offensive players that it does on defensive players, shouldn't it? This article focuses solely on the strategy aspect of long drives.)

Then it hit me. To win, a team must score more on its possessions than the other team does on its (discounting things like return TDs). The "ground it out" (GIO) team generally has a worse offense than its opponent, usually because the opponent has a superstar QB and the other team doesn't. Take an individual drive by each offense, and you'd expect the QB-driven (QBD) team to do "better" (higher chance of TD or FG). Over a season's worth of drives, the QBD team will likely score more points than the GIO team, and the GIO's coach knows it.

But the GIO's coach realizes that, in a smaller sample size, his team can outperform the QBD team!

Here's a simple example. For those who've never played Dungeons & Dragons (yes, I am a complete geek), it uses dice of all kinds of shapes and ranges of values, including the eight-sided die (d8), which has values of, surprisingly, one through eight on its sides. Suppose I take a regular six-sided die (d6) and you take a d8. In any "who can roll higher" competition between us, you clearly have the advantage. Even if you graciously allow me to win all ties, my chance of "winning" any individual roll is only 43.75%.
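That 43.75% comes from simply counting outcomes: of the 48 equally likely (d6, d8) pairs, the d6 side wins or ties 21 of them. A quick sketch in Python (my own illustration, not from the original post):

```python
from fractions import Fraction

# Count the (d6, d8) outcomes where the d6 roll wins, with ties going to the d6
wins = sum(1 for d6 in range(1, 7) for d8 in range(1, 9) if d6 >= d8)
p_single_roll = Fraction(wins, 6 * 8)

print(wins, p_single_roll, float(p_single_roll))  # 21 7/16 0.4375
```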

Now, suppose we're going to put money on this. Clearly, this means I'm not too bright, but then suppose a third party gives us two odd numbers. These are the number of rolls we are to each make, and the person with the most "wins" gets $1,000. The catch is: I get to choose how many rolls we'll make, based on the numbers that third person gave us.

And I should always choose the lower number of rolls, because with fewer rolls, I have a better chance of getting lucky and beating you out. With just one roll, as mentioned, I win 43.75% of the time. In a best 2-out-of-3 contest, I win about 40.7% of the time. 3-of-5 is 38.4%. 4-of-7 is 36.5%. And so on.

Now, back to football. If GIO's coach believes his offense is like a d6 and his opponent's is like a d8, the best strategy for him is to try and minimize the number of drives each team gets. If he could cut the game to one drive per team, he should. That's clearly impossible (and I'll admit I'm not taking defenses into account at all), but, failing that, he should try to limit the total number of drives in the game, by grinding out clock time with running plays and short passes.

Maybe.

I read somewhere that the typical NFL game has about 10 drives per team. I'm too lazy to do any real research on that, but it seems about right. Going back to my dice contest:

In a best 6-of-11 contest, the d6 beats the d8 33.5% of the time.
In a best 5-of-9 contest, the d6 beats the d8 34.9% of the time.
In a best 4-of-7 contest, the d6 beats the d8 36.5% of the time.

If we assume that a typical game has around 9-11 drives and that a GIO coach can eliminate two drives with his clock-eating strategy, then he would appear to increase his winning percentage by only a point or two. This makes sense; as the number of trials increases, the effect of adding or subtracting a couple of trials goes down. There's not much difference between a best 50-of-99 contest and a best 51-of-101 contest.
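These series percentages fall out of a straightforward binomial sum: with independent rolls, the d6 must win a majority of an odd number of rolls. A short Python sketch (again my own illustration, not from the post):

```python
from math import comb

P_ROLL = 21 / 48  # d6 wins a single roll 43.75% of the time (ties go to the d6)

def p_win_series(n_rolls):
    """Probability the d6 wins a majority of n_rolls independent rolls (n_rolls odd)."""
    need = n_rolls // 2 + 1
    return sum(comb(n_rolls, k) * P_ROLL**k * (1 - P_ROLL)**(n_rolls - k)
               for k in range(need, n_rolls + 1))

for n in (1, 3, 5, 7, 9, 11):
    print(f"best {n // 2 + 1}-of-{n}: {p_win_series(n):.1%}")
```

The weaker die's chance of winning shrinks monotonically as the series gets longer, which is the whole argument for the grind-it-out coach shortening the game.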

Now, a d8's average roll is 4.5; a d6's is 3.5. 4.5/3.5 = 1.29, so the d8 is about 29% "better" than the d6. Thus, if QBD's offense is 29% better than GIO's, we might expect the percentages quoted above to hold. Considering that GIO typically has a better defense than QBD, which reduces QBD's overall effectiveness on offense (sorry, they don't make seven-sided dice), you might even say the differences are smaller still. As one example, Pittsburgh, the quintessential GIO team, scored 347 points in 2008. San Diego, which led the AFC in scoring, had 439. 439/347 = 1.27, which is close to the 29% difference between a d8 and a d6.

So, assuming that my relatively simplistic assumptions are true, the GIO team does increase its chances of winning using its GIO strategy, but only by a small amount. It would seem to me, then, that teams should cater more to their strengths than to a formula. In the case of the Steelers, with a questionable running game heading into next year, a strong-armed QB (Ben Roethlisberger), and three legitimate receiving threats (Hines Ward, Santonio Holmes, and Heath Miller), maybe they should air it out a bit more. That is, unless Jerome Bettis (or maybe Franco Harris) can come back from retirement.

Tuesday, July 14, 2009

Revisiting receivers

I knew something seemed a little amiss with yesterday's post, and it bugged me all afternoon. (I still need a life.) If I was trying to "prove" that the quality of a #2 receiver has no effect on a #1's yardage total, then I should have started with the #2s and looked at what kind of seasons their complementary #1s had.

So I made two changes to my initial analysis. First, I removed all instances of a #2 having a better season than a #1 on his same team, leaving me with 83 pairings. That way, I'll only be looking at #2s who were inferior (at least from a yardage standpoint) to their #1s. Second, I reversed the direction of my study by grouping the #2s together and seeing how their corresponding #1s performed.

As I did with #1s yesterday, I split the #2s into three groups, of 28, 27, and 28. The top 28 had the most yardage, the middle 27 the second most and the bottom 28 the least. If yesterday's "Situation A" is correct -- that, if you have a poor #2, the #1 will rack up great stats as the only viable receiver -- we'd expect that the low group should have the highest corresponding yardage for its #1s. If "Situation B" is correct -- that having multiple good receivers means that defenses can't concentrate on shutting down one or the other -- we'd expect the highest yardage to belong to the top group. Here's what we get:

Group     Avg. #2 Yds.    Avg. #1 Yds.
Top 28    1,185           1,362
Mid 27      873           1,358
Bot 28      584           1,381

These results seem to verify yesterday's results, namely that the quality of the #2 receiver has little to no effect on how many yards the #1 will rack up. With a very good #2, averaging nearly the same 1,200 yards I set as a minimum to qualify as a #1, #1s managed only 19 fewer yards on average than they did with a poor #2. The correlation between the two groups (which was changed only due to my removing the "#2 > #1" pairings and not by my sorting things differently) is 0.047, still small enough to be insignificant.

I read somewhere today that, with Terrell Owens gone, Jason Witten could have a huge season. Don't you believe it. Witten might very well have a great season, but it won't have the slightest thing to do with Terrell Owens, just as Lee Evans' 2009 won't have anything to do with Owens going to the Bills.

And besides, we all know the real reason the Cowboys passing game will improve this year...

Monday, July 13, 2009

Does a great fantasy receiver need a #2?

Similar to the running back conundrum I questioned last year, another fantasy football paradox popped into my head over the weekend as I was wondering what effect the departure of T.J. Houshmandzadeh might have on Chad Johnson in 2009. (I really need a life.) Viking fans will also recall that, for all the hubbub over what effect losing Cris Carter would have on Randy Moss, Moss went on to have his best season, yardage-wise, in 2003. I've often heard two lines of argument about drafting wide receivers:

Situation A: A #1-caliber receiver has no good #2 on his team
"He's all they've got! They have to throw to him! He'll have a great year!"

Situation B: A #1-caliber receiver has a good #2 on his team
"Opposing defenses have to cover #2, as well! #1 will have a great year!"

Yeah, I'm gonna have to look into that.

So I spent an inordinate amount of time on pro-football-reference.com looking at all receivers from the last 10 years with at least 1,200 yards receiving and at their "#2" receivers in those seasons, to see if there was any correlation between great receiving seasons and particularly good (or bad) seasons by complementary receivers. There were 95 seasons that met these criteria -- 93 by wide receivers and two by Tony Gonzalez. The reason I limited it to 10 years is that, frankly, I had to look up each player's "#2" on his team's page for that season, which took long enough as it was. 95 seasons is probably enough to give us a reasonable sample size, and by limiting the study to the last 10 years, I keep it firmly rooted in the "modern" NFL with its oft-explosive passing game.

I use "#2" in quotation marks because, several times, a team had more than one player with 1,200 yards receiving, meaning that one player's "#2" receiver actually racked up more yards than him. Specifically, there were 24 such pairings (twice counting 12 different sets of players) from 1999 to 2008, from Torry Holt (1,635 yards) and Isaac Bruce (1,471 yards) in 2000 to Jimmy Smith (1,213 yards) and Keenan McCardell (1,207 yards), also in 2000.

I took the 95 pairs and sorted them by the total yards for the #1 receiver. I then split the #1 receivers' seasons into three parts, of 32, 31, and 32 players. The top 32 had the best seasons, the middle 31 had the second-best, and the bottom 32 had the third best. If there is a correlation between great seasons by #1 and great or not-great seasons by #2, we should see some sort of significant difference in their corresponding #2's yardage totals. Here's what I got:

Group     Avg. #1 Yds.    Avg. #2 Yds.
Top 32    1,495             958
Mid 31    1,336             932
Bot 32    1,241             950


Doesn't seem like much of a difference between the three categories. The correlation between the two sets of numbers is -0.06, which also indicates that there's virtually no connection between yardage totals for #1 and yardage totals for #2.
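For anyone who wants to reproduce the correlation check, a plain Pearson coefficient over the (#1 yards, #2 yards) pairs is all it takes. A minimal Python sketch (the four sample pairs below are taken from examples mentioned in this post; the full 95-pair dataset isn't reproduced here):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, no external libraries needed."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A few (#1 yds, #2 yds) pairs from the post, for illustration only
pairs = [(1563, 441), (1329, 1325), (1635, 1471), (1213, 1207)]
r = pearson_r([x for x, _ in pairs], [y for _, y in pairs])
print(round(r, 3))
```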

A few other interesting stats...

Greatest difference between #1 and #2: 1,122 yards (Steve Smith/Ricky Proehl, 2005, 1,563 to 441)

Smallest difference between #1 and #2: 4 yards (Hines Ward/Plaxico Burress, 2002, 1,329 to 1,325)

Most frequent #1-#2 pairing: Torry Holt/Isaac Bruce (2000, 2001, 2002, 2004)

Most 1,200 yards seasons, 1999-2008: Randy Moss and Marvin Harrison (6 each)

And Terrell Owens, in his five 1,200 yard seasons, has had a different #2 in each one: Jerry Rice, J.J. Stokes, Tai Streets, Brian Westbrook, and Jason Witten.

Now, this study isn't perfect. Sometimes, a #2 puts up poor numbers not because of a lack of talent, but due to some other reason, such as injury. For 2008, Anquan Boldin is a perfect example. Had he not missed four games, he almost certainly would have cracked 1,200 yards in his own right (he had 1,038), thus altering not only Larry Fitzgerald's numbers in my data but adding a new point of his own. Calvin Johnson losing Roy Williams after just five games also might have had some impact on his numbers. (Shaun McDonald's 332 yards receiving for the 2008 Lions -- fittingly -- makes him the worst "#2" in my data.) Still, it could also be argued that Fitz and CJ put up their good numbers without a solid #2 for part of the season (and no, I don't count Steve Breaston), even if you could theoretically add together the yardage numbers for several receivers and paint a more accurate picture of their "#2 receiver."

Another thought is that a #1 receiver, especially one who's having a great season, is going to get a lot of balls thrown his way (similar to the argument in situation A) and the #2, by default, isn't going to get as many passes thrown his way and, therefore, have worse numbers. There might be something to that, but I think the effect is minimal.

Still, when it comes to trying to pick a wide receiver, at least for fantasy football, I believe it comes down to not thinking too hard: Pick the best guy, period. You can take QB and best offensive philosophy (a la the current Patriots or Cardinals or the early-2000s Rams) into account, but don't overly concern yourself with his other receiving teammates, either for the positive or the negative.

Wednesday, July 1, 2009

A non-statistical opinion on the great debate

Thinking about the rushing/passing correlation, I wondered why it might be the way it is. After all, eight in the box should be a better defense against the run than seven in the box, right? That's why teams do it, right?

What if it isn't better but is actually just a different step in the risk/reward ratio versus running plays?

Consider the goal-line defense. Except for maybe a couple of corners and (maybe) a safety, everyone's stacked up at the line, 8, 9, or maybe even 10 "in the box." The idea is to stop a very small gain by the offense, which is generally all you have to stop when you're backed up against your own goal line on defense.

But how many times have we seen a short-yardage defense in a non-goal-line situation where the running back bursts through, there's nobody left to tackle him, and he goes for a bunch of yards? It's anecdotal, but we've all seen it at least a few times.

If you think about it, that's probably how an eight-in-the-box defense should work. That extra defender is there to stop the back for a short gain if he gets through the first seven defenders, but if the back makes him miss, there's not much left to stop him from making a huge gain. It's just like blitzing against the pass -- you increase your chance of a big negative play (sack) for the offense but increase your risk of giving up a big offensive play.

If you played against a defense like that a lot, you might expect your rushing carries to look something like:

1, -4, 1, 4, 3, 0, 3, 3, 67, -1, -1, 3, 7, 2, 6, 5, -2, 0, 7, 1, -2

or

-1, -1, 2, 2, 6, 2, 40, 6, 0, 6, 1, 3, 0, 2, 0, 3, 0, 5, 5, 2

Hey, I think we've seen those stat lines before! (And thanks to Pacifist Viking for writing them out and making for an easy cut-n-paste.)

It's not pure statistical proof, and I'm certainly not an NFL coach, but from a layman's point of view, it might be true that eight-in-the-box is good at stopping short gains but vulnerable to the long one. It's sort of a less aggressive form of run blitzing. It may not have any effect on average rushing yards or total rushing yards, but it would be more susceptible to the wild fluctuations and inconsistency we see in Adrian Peterson's numbers.

I'm not the first person to come up with this idea. I can remember, in my old Strat-o-Matic Football, that there were "zones" you would line up your defensive players in. There was one just behind the interior defensive line where the middle linebacker(s) typically lined up. You could "move up" those linebackers to linemen's zones to pass or run blitz and that could help you stuff the play. You could then move the free safety up to occupy the linebackers' original zone -- thus putting "eight in the box." However, there was a result on the cards for running plays that said if the linebackers' original zone had only one guy in it, you would add 10 yards to the result of the run. If there was nobody there, you'd add 20 yards. Interesting, that.

So maybe there is some credibility to the notion that a strong passing game would help Adrian Peterson become more consistent on a carry-by-carry basis. And maybe eight-in-the-box isn't actually a "better" defense than a seven-in-the-box strategy, no more than blitzing is a "better" pass defense than dropping into coverage. It probably just fits somewhere between full-on run blitzing and 7 ITB in terms of risk vs. reward.

Tuesday, June 30, 2009

You can't un-learn things

Now that I've done a fair job of establishing that there's no link (or maybe a slight negative link) between a team's passing proficiency and their running game's yards per carry, I can't help but notice claims to the exact opposite all over the place. And by "all over the place," I mean two places where I generally go for better-than-average football analysis and commentary. I still like these sources and I don't really blame them for holding to a thought process that I would also have believed just a few months ago, but it's difficult for me to pass them by without dying a little on the inside.

Yesterday, Daily Norseman posited that:

I think it's fair to say Peterson would see a bump in his yards per carry average with Favre as the team's starter


While the free pdf download of FootballGuys' fantasy football magazine (a great deal, and only 21 MB!), when discussing Matt Forte and the effect Jay Cutler will have on his numbers, asks, on page 111:

1. Will the running game improve with Jay Cutler under center?


And responds with only the following information:

Yards per carry average for all seven Denver RBs last year = 5.17
Yards per carry for Matt Forte last year = 3.9


This, to me, is an egregious oversight by a group of people who should know better. Putting aside the question of whether it's an erroneous assumption, it's a classic case of small sample size. If the Vikings traded for Drew Brees, I could just as easily ask:

Will the Vikings' running game decline with Drew Brees under center?


And respond:

Yards per carry average for all New Orleans RBs last year = 4.15
Yards per carry for Adrian Peterson last year = 4.85


So, clearly, adding Drew Brees will make Adrian Peterson's YPC worse, just as Cutler will make Forte's better. It has nothing to do with any of the involved teams' offensive line, quality of their backs, play calling, run-blocking scheme -- which, it should be noted, Denver has been superb at for years, long before Jay Cutler took over at quarterback -- nope, it's entirely because Drew Brees/Jay Cutler was at quarterback. End of discussion.

(Note that both of these analyses use straight yards per carry as the measuring stick, not the consistency of the back from carry to carry, which is still possibly related to quarterbacking.)

And while I haven't looked over every player's description, the entry for Ryan Grant (page 113) leaves me scratching my head in a number of ways:

There is always a chance that the Grant we saw in 2007 was the anomaly. Without a Hall of Fame quarterback in the backfield, defenses were able to concentrate more on Grant and lessened his impact.


So not only do we have the "quarterback affects running back's performance" myth to deal with, but there's also the "Brett Favre makes everything better" myth. Aaron Rodgers threw for 4,000 yards and 28 TDs last year. Maybe for the first part of the year, teams concentrated on shutting down Grant because they didn't know what they'd be getting from Rodgers. By about midseason, though, if you weren't paying attention to Rodgers, he was going to kill you. Grant's disappointing season might also have had something to do with the Packers' defense being bad enough to force the team to abandon the running game earlier than it would have liked.

There's probably more like this in the FG analysis of running backs, and maybe for other positions. It's still a great resource that I heartily recommend, but don't buy into the notion that Matt Forte, or any other back, will have a great season because of improved quarterbacking.

Thursday, June 18, 2009

Can Brett help Adrian?

One of the persistent reasons I've heard for bringing Brett Favre -- or any quality quarterback -- to the Vikings is that the threat of an improved passing game will open up more holes for Adrian Peterson (and Chester Taylor) to run through, thus improving the running game as well as the passing game. With teams forced to respect the pass more, Peterson will face fewer eight-man fronts and be poised for a spectacular year.

Putting aside the question of whether Brett Favre will improve the Vikings' passing game, I've always found this type of reasoning questionable. Certainly, on the surface, it makes sense, and you hear this logic frequently espoused by commentators and even coaches. But does a better passing game actually improve the running game? And here's another thought: Adrian Peterson has averaged 101.3 yards per game and 5.2 yards per carry during his professional career. Is it really realistic to expect that any improvement to the team will push him much over a 1,600-yard season on a consistent basis?

When you think about it, in terms of raw numbers, an improved passing game should probably decrease overall rushing numbers. After all, if your passing game is good, you should be using it a fair amount, and that's going to take carries away from your running back. The 2008 Vikings threw the ball 452 times (with 43 sacks) and an unknown number of QB scrambles, for about 500 total pass dropbacks. If they'd dropped back to pass 600 times, that would have been 100 or so fewer possibilities for Peterson and Taylor to carry the ball. That'll subtract from your rushing yards, no doubt.

I took a few stabs at this concept last year, and my admittedly amateurish results showed that a good passing game does not help the running game (or vice versa). Brian Burke over at Advanced NFL Stats has talked about a similar topic a few times, and uses what I like to call "The Princess Bride Paradox":

* Your team is very good at rushing. Thus, it should rush the ball.
* The defense knows you're good at rushing. Thus, they'll play a stout run defense.
* You know that the defense knows you're good at rushing. Thus, you'll surprise them by passing!
* The defense knows that you know that they know you're good at rushing. Thus, they'll outsmart you by playing the pass!
* You know....

Y'know?

Brian's follow-up post used the 2007 Vikings as a specific example by asking what their optimal run-pass mix was, given their talent. His conclusion was that talent on one side of the offense (running or passing) shouldn't affect play-calling on the other side, though he did acknowledge that the passing game improved slightly when Peterson was added in 2007.

But all this still doesn't answer the basic question: Would an improved passing game help Adrian Peterson? Since he ran for 1,760 yards last year at a 4.8 yards-per-carry clip, there doesn't seem to be much that could help him. So I got to wondering, in the best seasons ever by running backs, did those backs have a strong passing game to "open up lanes" for them?

The answer: not so much. I took the top 27 single-season rushing performances -- that's every season of 1,700 or more yards -- and checked out their passing games. Here are the results:

Year  Running Back         Rush Yds  Pass Att  Pass Yds  Rating
1984  Eric Dickerson          2,105       358     2,382    65.9
2003  Jamal Lewis             2,066       415     2,517    64.7
1997  Barry Sanders           2,053       540     3,605    75.4
1998  Terrell Davis           2,008       491     3,808    93.5
1973  OJ Simpson              2,003       213     1,236    42.7
1980  Earl Campbell           1,934       463     3,271    70.4
1994  Barry Sanders           1,883       459     3,085    80.2
2003  Ahman Green             1,883       473     3,377    90.5
2005  Shaun Alexander         1,880       474     3,632    96.8
1963  Jim Brown               1,863       322     2,449    78.3
2005  Tiki Barber             1,860       558     3,762    75.7
2002  Ricky Williams          1,853       455     3,069    79.3
1977  Walter Payton           1,852       305     2,070    61.8
1998  Jamal Anderson          1,846       424     3,744    92.7
1986  Eric Dickerson          1,821       403     2,380    63.7
1975  OJ Simpson              1,817       354     2,661    80.2
2006  LaDainian Tomlinson     1,815       466     3,412    93.0
1983  Eric Dickerson          1,808       489     3,411    76.0
2006  Larry Johnson           1,789       450     3,243    84.7
1995  Emmitt Smith            1,773       494     3,741    91.7
2008  Adrian Peterson         1,760       452     3,217    81.5
1985  Marcus Allen            1,759       506     3,481    68.5
1997  Terrell Davis           1,750       513     3,704    87.4
2005  Larry Johnson           1,750       507     4,014    90.1
1985  Gerald Riggs            1,719       462     3,025    66.5
1992  Emmitt Smith            1,713       491     3,597    88.8
2000  Edgerrin James          1,709       571     4,413    94.7

1993  Average                 1,855       448     3,197    80.5

The five running backs with 2,000-yard seasons had Jeff Kemp, Kyle Boller, Scott Mitchell, John Elway, and Joe Ferguson as their primary quarterbacks. Apart from Elway (who missed four games with injury and was replaced by Bubby Brister), that's a pretty sorry group.

Only eight of the 27 seasons featured quarterbacks with passer ratings of 90 or higher, and the overall weighted average is 80.5. The better passing entries also seem to be clumped near the bottom, alongside the lower rushing totals. That's not really impressive, and only four of these 27 seasons occurred before the 1979 rule changes that opened up the passing game.
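For the curious, that 80.5 figure is an attempt-weighted average: each season's passer rating is weighted by the team's pass attempts. A minimal Python sketch using three rows from the table (looping over all 27 rows the same way is presumably how the 80.5 was computed):

```python
# Attempt-weighted average passer rating, using three rows from
# the table above; the full 27-row list works identically.
seasons = [
    ("1984 Eric Dickerson", 358, 65.9),
    ("1998 Terrell Davis", 491, 93.5),
    ("1973 OJ Simpson", 213, 42.7),
]

total_att = sum(att for _, att, _ in seasons)
weighted = sum(att * rating for _, att, rating in seasons) / total_att
print(round(weighted, 1))  # prints 74.0 for this three-row subset
```

Weighting by attempts keeps a 213-attempt season like Simpson's 1973 from counting as much as a 500-attempt season.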

Of course, what I should really be looking at is overall team rushing totals (minus quarterback rushing numbers), but that would require more effort than I'm willing to put in :P Still, consider two additional seasons:

| Year | Running Back | Rush Yds | Pass Att | Pass Yds | Rating |
|------|--------------|----------|----------|----------|--------|
| 2007 | C. P. | 1,756 | 432 | 2,938 | 74.2 |
| 2008 | C. P. | 1,727 | 452 | 3,217 | 81.5 |
Those are two more excellent 1,700+ yard seasons that would be added to the list above if "C. P." were a real person. He's actually the combined rushing line of Adrian Peterson and Chester Taylor ("Chester Peterson"), multiplied by 80% to give a reasonable number of carries (316 and 372 for 2007 and 2008, respectively) and to reflect that C. P. probably wouldn't actually be able to carry the ball 859 times over consecutive seasons. Again, the quarterbacking on C. P.'s team has been mediocre to poor for two seasons, yet he's put up spectacular numbers. How would adding a better quarterback improve them?
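Here's the "C. P." construction as a Python sketch. The 80% scale factor comes from the method described above; the per-back carry and yardage inputs below are round placeholders, not the actual Peterson and Taylor stat lines:

```python
# Combine two backs' seasons into one composite back, then scale by
# 80% to keep the composite workload realistic. The input numbers
# are illustrative placeholders, not real 2007 stat lines.
def composite_back(back_a, back_b, scale=0.8):
    carries = round((back_a["carries"] + back_b["carries"]) * scale)
    yards = round((back_a["yards"] + back_b["yards"]) * scale)
    return {"carries": carries, "yards": yards,
            "ypc": round(yards / carries, 2)}

peterson_2007 = {"carries": 240, "yards": 1340}  # placeholder numbers
taylor_2007 = {"carries": 155, "yards": 850}     # placeholder numbers
cp = composite_back(peterson_2007, taylor_2007)
print(cp)  # prints {'carries': 316, 'yards': 1752, 'ypc': 5.54}
```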

Without question, better quarterback play would improve the Minnesota Vikings as a team, though whether it would improve the running game is, I think, questionable. Even if we could somehow acquire the in-their-prime versions of Joe Montana and Jerry Rice and plant them on the '09 Vikings, Adrian Peterson would be hard-pressed to improve on his stats from the previous two years. There are only so many plays to go around, and if the passing game improves that much, we should be using it more.

Pacifist Viking does raise a good point, though. It might be that an improved passing game will make Peterson more consistent on a per-play basis. I brought up Barry Sanders in the discussion, as a perfect example of a player who very rarely had a good passing game to support him and could be maddeningly inconsistent on a carry-by-carry basis. Without access to play-by-play data, it'll be tough to prove/disprove this, but it's a decent enough assumption. Personally, I think consistency is overrated, but that's a topic for another post.

Based on the evidence from NFL history, though, it's hard for me to believe that an outstanding rushing season is the result of good passing from the same team. Some of this might be because, with limited resources (i.e., cap room), teams can't invest heavily in everything. So you have a great running back, a so-so QB and receivers (and a line that's better at run blocking than pass protection); all of that might account for some of the disparity we see here. But when it comes to improving your running game, the most important factor, I feel, is the talent of your running back and his blockers -- not your quarterback and receivers.

Thursday, June 11, 2009

The dubious importance of timing

(Don't worry, there's football contained herein.)

So I've been following a discussion on my favorite comedy/baseball site about the importance of RBIs. My views on the validity of RBIs as a stat pretty much boil down to a comment by Hossrex, in response to the original article's writer, Patrick:

Patrick: "You seem to think that hits with men on base is luck, like a roll of the dice."

Hossrex: "No. Getting a hit is skill. However, whether or not there are runners on base when he gets the hit is luck... like a roll of the dice."

That's about as good as I can put it. I've held the belief for a long time that when a player gets a hit -- or scores a touchdown or a goal or whatever -- is less important and far more random than his ability to get hits. A player who gets 200 hits in a season or who hits 40 HR in a season is going to do some of that with runners on base and some of it with the bases empty. Whether he does so with men on or not is almost completely beyond his control. His RBI total is a function of his overall hitting ability and the skills of the batters hitting in front of him -- not of his ability to deliver "clutch" performances.

A few other people in the comment chain bring up Albert Belle, possibly the best RBI man of the 1990s. A look at his stats shows the following splits (listed as AVG/OBP/SLG/OPS):

Bases Empty: .296/.363/.571/.934
Men On Base: .293/.376/.557/.932

Take away Belle's intentional walks with men on base (reducing his OBP to .359 and his OPS to .916) and the fact that OPS rises when men are on base as a rule, and you can argue that Belle was worse with men on base! His OPS with runners in scoring position is a healthy .991 (dropping to .965 when you take out intentional walks), but those encompass less than 60% of his plate appearances with men on.
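Backing intentional walks out of OBP is simple: drop them from both the times-on-base numerator and the plate-appearance denominator. A sketch with illustrative round numbers (not Belle's actual totals):

```python
# On-base percentage, with and without intentional walks (IBB).
# The counting stats below are made-up round numbers for
# illustration, not Albert Belle's actual splits.
def obp(h, bb, hbp, ab, sf, ibb=0):
    # Removing IBB drops those walks from both the times-on-base
    # numerator and the plate-appearance denominator.
    return (h + bb - ibb + hbp) / (ab + bb - ibb + hbp + sf)

h, bb, hbp, ab, sf, ibb = 150, 60, 5, 520, 5, 15
print(round(obp(h, bb, hbp, ab, sf), 3))       # prints 0.364
print(round(obp(h, bb, hbp, ab, sf, ibb), 3))  # prints 0.348
```

Since pitchers hand out intentional walks almost exclusively with men on base, stripping them out is the fair way to compare a hitter's men-on and bases-empty lines.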

Two other noted "run producers" of the 90s fare similarly (removing intentional walks):

Juan Gonzalez
Bases Empty: .290/.332/.552/.884
Men On Base: .301/.340/.569/.909

Joe Carter
Bases Empty: .255/.294/.467/.761
Men On Base: .264/.306/.461/.767

So why did these men accumulate so many RBIs and gain a reputation as "RBI men"? Part of it is that they were, in actuality, good hitters. Guys who hit 30-40 HR a year are going to drive in runs, just by happenstance. Another factor is the guys hitting in front of them who get on base a lot and generally run well. Albert Belle had Kenny Lofton leading off for him. Gonzalez had Will Clark, Rusty Greer, and a few others.

But nobody had their table set better than Joe Carter, who almost certainly has the worst non-RBI numbers of anyone with his hitting line. How can you not drive in runs when you have, at various times in your career, Roberto Alomar, Paul Molitor, and even, for a few months, Rickey Henderson, batting ahead of you? Carter ranks 57th all time in RBI but 261st in slugging percentage and 625th in OPS, and, as demonstrated above, was no better at driving runners in than he was at hitting with nobody on base.

But this isn't supposed to be a "bash Joe Carter" post. The point, instead, is to illustrate that just because a player does what he's supposed to do -- such as get hits or score TDs -- he is probably not any better (or worse) at doing it at specific times than his overall skill level would indicate.

And that brings us to situational stats in football and this question: If you give the ball to Adrian Peterson (or Emmitt Smith, or Walter Payton, or Jim Brown) on 3rd-and-2 at your opponent's 20, down 7-3 with three minutes left in the first quarter, does he have any better chance of gaining that first down than he would on 3rd-and-2 at the opponent's 20, down 7-3 with three minutes left in the fourth quarter?

I say "no."

Putting aside unquantifiable things, like player fatigue level, general play calling during the day, and so on -- and I arranged the example to make a run or pass pretty much equally likely in either scenario -- I don't believe a player performs any better or worse in the same scenario at different points in the game. His accumulation of "good stats" (whether they be RBIs or first downs or whatever) is due more to his being in a position to succeed more often than his peers while performing at a rate roughly identical to his established skill level.

In my baseball examples, a player is going to approach each at-bat pretty much the same way, whether there are runners on or not. In my football example, the running back is going to try to gain that first down (or more) with everything he's got, whether it's the first quarter or the fourth (and, frankly, if he puts forth "extra effort" in the fourth, why can't he do that throughout the game?). Obviously, some game circumstances can change things, but, for the most part, baseball players are trying to hit the ball as hard/well as possible and football players are trying to gain as many yards as possible every chance they get.

You can approach this any number of ways. Does Peyton Manning pass well on third downs because he's good on third downs or just because he's good, period? Does Sidney Crosby have a lot of game-winning overtime goals because he "turns it up a notch" in overtime or because he's a great player overall? Alternatively, does Adam Dunn strike out a lot with the bases loaded because he's not "clutch" or because he just strikes out a lot?

And then there's my favorite "circumstantial" football stat, the fourth-quarter comeback. I've heard more than a few times the last few months that Brett Favre has 42 fourth quarter comebacks. Putting aside the notion that, in order to have to make a fourth-quarter comeback, you have to be behind (*coughinterceptionscough*), can anyone tell me how many times Brett Favre has taken his team into the fourth quarter while behind and not come back? Anyone?

Didn't think so.

Favre has 100 losses as a starter. His team trailed at some point in the fourth quarter in all those losses, so that makes Favre's "4th quarter comeback rate" 42/142, or about 30%. Seems rather "meh," I think. And yes, several of them might have been the other team scoring with little to no time left and Favre not having a realistic chance to come back, but I'd also wager some of the victories involved the Packers taking the lead (maybe on a run or a defensive score) with 14:50 to go in the fourth quarter and not relinquishing it. Hardly dramatic, but it still counts as one of the 42.
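The arithmetic behind that 30% figure, spelled out (using the assumption above that Favre trailed in the fourth quarter of all 100 losses):

```python
# 4th-quarter comeback rate: comebacks / (comebacks + failed chances).
# Assumes, as argued above, that every one of the 100 losses involved
# a fourth-quarter deficit and therefore a failed comeback chance.
comebacks = 42
losses_with_4th_quarter_deficit = 100
rate = comebacks / (comebacks + losses_with_4th_quarter_deficit)
print(f"{rate:.0%}")  # prints 30%
```

If anything, this overstates the rate, since some of the 42 wins involved taking the lead early in the fourth quarter rather than a true comeback.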

Then there's the question of whether Favre (or any other QB famous for his late-game heroics) is actually even better in the fourth quarter than he is in the other three. (Looking at split stats is deceiving, since offenses and defenses do radically change their approaches depending on the time of the game and the score. Still, his 79.6 career passer rating in the 4th isn't anything to get excited about.) If so, why doesn't he play that well throughout the game? Such monikers are usually the result of an overall high quality of play and a few legendary performances in big situations (Joe Montana, John Elway, and Reggie Jackson are all good examples) etching the player's name into posterity. Remember, just a few years ago, Peyton Manning wasn't "clutch." Then he won a Super Bowl.

But this isn't even a "bashing Brett Favre" post. (I've done plenty of that lately.) If you've gotten this far, I just hope you can open your mind to the thought that opportunity and overall skill level matter as much when evaluating a player as his counting stats and that when a player does something is less important, in the long run, than doing it consistently and getting plenty of opportunities for "big moments."

Sunday, May 10, 2009

The case for Tarvaris Jackson


Brett Favre. Sage Rosenfels. Tarvaris Jackson.

No, that's not just a naked attempt to SEO-itize this post (though if you want to click on it a few thousand times, I won't stop you). Those are the three men who could start at quarterback for the Minnesota Vikings in their first regular-season game of 2009, September 13 against Cleveland.

I have no idea which one will be the starter. Nobody does. There are plenty of opinions out there, though, about who should be the starter. A lot of Vikings fans want Brett Favre. Most of those who don't, or who don't think the Vikings will sign Favre, favor Sage Rosenfels.

Tarvaris Jackson, meanwhile, has been left out in the cold. That might be just, but then again, it might not. At the very least, Tarvaris Jackson should still be an unanswered question for the Vikings, not a foregone conclusion.

For most purposes of this discussion, I'm not going to include Brett Favre. First of all, he's not on the team. Also, people who want Favre want him, and people who don't want him don't, and no argument is going to change either side's opinion. It's no secret where I stand on the matter, but that's more due to the current state of Favre's play (and health) than an intrinsic hatred of #4. If we could get the 1999 Brett Favre instead of the 2009 Brett Favre, I'd fly down to Kiln, kidnap Deanna Favre, and hold her ransom until Brett joined the team.

As a result, most of my discussion will compare Tarvaris Jackson to Sage Rosenfels, who are currently the two best options the Vikings have at quarterback. (Jay Cutler ain't walking through that door.) I have no particular dislike of Sage. My issue is that too many people feel he represents an automatic upgrade over Tarvaris Jackson -- not so much because Rosenfels is that good (even hardcore Sage-backers agree that he's not) but because Jackson is that bad, that he's utterly useless to an NFL franchise and that Rosenfels is clearly the better option. People are more "anti-Jackson" than they are "pro-Rosenfels."

Why is that? Here are the main arguments against Jackson, as I see them:

1) Rosenfels has a stronger arm and is more accurate than Jackson. I haven't watched enough of Rosenfels to really be able to judge his arm strength, though Jackson looks pretty good here. In terms of completion percentage, Rosenfels does have an edge on T-Jack (62.5 to 58.4).

But look at the career splits for Rosenfels: a 49.5% completion percentage as a Miami Dolphin, compared to 65.6% as a Texan. The sample size is small, admittedly (only 109 passes with the Dolphins), but he almost certainly got a boost moving from the moribund Dolphins offenses of 2002-2005 (which Gus Frerotte also had a hand in) to the Houston Texans, with Andre Johnson and Owen Daniels to throw to. Yes, I said Owen Daniels. Few things have the potential to drive up a quarterback's completion percentage like having a tight end who's caught 133 balls over the past two seasons. And Steve Slaton sucking up 50 balls in his 2008 rookie season didn't hurt, either.

Compare that to the receivers Jackson has had during his starting career. For reference, look at that video again. And clearly, Jackson has had some very inaccurate days. But are those days in the past? Well...

2) Sure, Jackson looked good in December of 2008, but he did it against some poor defenses. True. For the record, T-Jack was 57 of 89 (64.0%) for 740 yards, 8 TDs, and 1 interception in effectively 3 1/2 games, for a passer rating of 115.4.

Obviously, that's really good. But his best games came against Detroit (in one half), Arizona, Atlanta, and the Giants. In terms of opposing passer rating in 2008, those teams were #32, #30, #18, and #7, respectively. And the Giants played their backups for most of the second half.

Regardless, this is a huge step up from Jackson's previous performances, whether against good or bad pass defenses. Given a full season against a wide variety of defenses, I wouldn't expect him to maintain a 115.4 rating, but, given the rest of the Vikings' strengths, 30 points lower than that would be acceptable. (Jay Cutler, FYI, had an 86.0 passer rating in 2008.) But you can't completely discount his strong finish to the season just because it came against soft defenses. Good quarterbacks should carve those kinds of teams up.

3) Jackson was awful against the Eagles in that playoff game. Get rid of him. Yes, he was awful. So were a lot of quarterbacks against the #4 defense, by opposing passer rating, in 2008. Eli Manning actually had a worse game against the Eagles the next week, but I haven't heard any calls for his ouster.

A related note is that Jackson clearly doesn't have "it," where "it" is defined as what it takes for a quarterback to "win the big one," "take his team to the next level," and so on, as evidenced by his poor play in that playoff game. At one point in their careers, Steve Young, Peyton Manning, Jim Kelly, and Warren Moon were all also given such labels. Jackson is almost certainly not as good as those quarterbacks, but history should have shown by now that applying such an all-encompassing label to a quarterback -- especially after just one career playoff game -- is ludicrous. Granted, Kelly and Moon never won a Super Bowl, but I wouldn't mind having either on my team.

(And it's not like Brett Favre has never had a bad playoff game.)

4) Rosenfels just looks better than Jackson as a quarterback. I've been over this before. I don't care if you're 6'4" tall with a rock-solid jaw, dashing good looks, and a physique like a god or if you're short, squat, have a deformed head, and only one leg. I care if you're a better player. That's all. Anyone who's read the first chapter of Moneyball should be familiar with that concept.

(OK, so maybe a one-legged QB would be ineffective. But he'd still be more mobile than Kelly Holcomb.)

Rosenfels looks more poised, looks more effective, looks more like a quarterback is supposed to look (and I guess he's not unhandsome, in a "good ol' boy" kind of way), but is he actually a better quarterback than Tarvaris Jackson? That's the only point that should matter. Gus Frerotte looked good, too, until he kept throwing one interception after another. (Speaking of which, if you want to insist that Frerotte should have regained his starting job because he was 8-3 as a starter, I remind you that Jackson was 8-4 as a starter in 2007.) Rosenfels has a ghastly 5.2% career interception percentage, compared to Jackson's 3.4% and even Favre's 3.3%, and that doesn't count the infamous Rosencopter. I hate to use the terms, but the Vikings need a "game manager" more than they need a "gunslinger."

But I think the bigger point is the debate of "scrambling QB" vs. "pocket QB." And, you know what? I think scrambling QBs are overrated and generally less effective than pocket QBs. But that doesn't mean every scrambling QB is worse than every pocket QB. The point is that a scrambler also has to be a good passer to be a good QB. I think Michael Vick, Vince Young, and (maybe) JaMarcus Russell have clearly shown that you can't just run around in the NFL and be effective; you also need to be a good passer.

Recent "scrambling QB" failures like that are why we're predisposed to think less of scrambling QBs nowadays. We've forgotten how good players like Donovan McNabb, Steve Young, Randall Cunningham, Steve McNair, and even (for some seasons, at least) Daunte Culpepper were, and how they generally had their best seasons, passing-wise, when they cut back a little on their running and learned how to pass.

5) Jackson will never learn to be a good QB. Here's the crazy idea: Maybe he already is.

Maybe sitting on the bench for two months, observing, learning, studying was good for him. Maybe he took everything he learned and applied it to have the best month of his career, even if he did stink it up against the Eagles. Frankly, I'll take a quarterback who's good for 4 out of 5 games.

Remember the Brian Billick quote that sparked this article? Billick said that he can determine whether a quarterback will be successful "between the 24th and 30th game" and that Jackson was right about in that vicinity.

Maybe he's right. Maybe what we saw out of T-Jack in December is the "real" T-Jack. Maybe that's the quarterback he can be, even if his performance will be mitigated against stronger opponents.

It's anecdotal, I know, but compare Drew Brees' first two seasons as a starter to the rest of his career. He was so bad those first two years that the Chargers drafted Philip Rivers to replace him. Absolutely nobody could have predicted that Brees would explode and become one of the NFL's best quarterbacks.

For an example a little closer to home, remember loathing Visanthe Shiancoe? Around week 3, I think every Vikings fan was ready to trade him for the proverbial warm six-pack and a bag of used jock straps. He's maybe not a Pro Bowl-level talent now, but in the span of about two months, tight end went from a "need" for the Vikings to a "strength." And it wasn't because of Jimmy Kleinsasser.

Jackson's not likely to match Brees, but Brees is proof that it can happen -- that a mediocre player can suddenly and dramatically improve his game after significant time off. In Brees' case, it was between the 2003 and 2004 seasons; in Jackson's, it might have been his two-month hiatus from the starting job.

(It was also said that Brad Childress didn't talk to Jackson at all during his benching. Knowing Childress as we do, maybe that was the best thing that could have happened to Jackson...)

Hoping for such a transformation is often just that -- hope. After all, we'd all like for our mediocre three-year veteran to suddenly become a Pro Bowl-caliber player. Most of the time, though, he doesn't. But I'd hate to see Jackson cast aside after the best stretch of his career, only to (almost predictably for Vikings fans) go somewhere else and do really well.

There are a whole lot of "maybes" in this article. Maybe Rosenfels is good because of who he threw to. Maybe Jackson found what he needed to become a good quarterback. Maybe Jackson only did well because he was playing against poor defenses. They could all be wrong. Tarvaris Jackson might still be the same scatterbrained, low-accuracy, disaster of a QB I thought he was entering the 2008 season. If that's so, then he should be replaced.

My only point is that, ever since that Eagles game ended the season, the sentiment among Vikings fans (not to mention the media) has generally been that Jackson must be replaced and that the quarterback position is the only thing holding the Vikings back from a run at the championship. I want to cast doubt on that certainty. I want you to examine exactly why you don't think Tarvaris Jackson should be the Vikings' QB going forward and solidify your position using analysis and facts, not emotions and feelings. Maybe you'll be right anyway. I'm willing to accept that I might be wrong about T-Jack.

Are you?

Tuesday, February 24, 2009

Do you have to draft a QB high?

One of the topics (or, more accurately, one of the many tangents) of the most recent Pro-Football-Reference.com podcast was about the subject of "star quarterbacks" predominantly coming from very high (i.e., primarily first-round) draft picks and how teams can best find their quarterbacks. Says JKL around the six-minute mark:

The best option, I think, is to go for the elite talent at the top of the draft...Yeah, there are busts, but the upside there is just too great.


Recent busts -- Ryan Leaf, Tim Couch, Cade McNown, et al -- are well known, as are the success stories, like Peyton Manning, Donovan McNabb, and Ben Roethlisberger. But are they absolutely necessary? Do you have to pick a QB at the top of the draft to succeed? After all, if you don't pick a QB with your top pick, you're picking another (probably very good) player. And even the most jaded QB-loving fan would probably admit that quarterbacks tend to be a touch overvalued and definitely overdrafted.

So, where do starting quarterbacks come from? I compiled a list of starting quarterbacks* in 2008, what round they were drafted in, and whether they were with their original teams -- in other words, a first-round pick playing for a team that he wasn't drafted by didn't help his original team in 2008, so, in a sense, that team's first pick was a "bust."

* Here's the rub, though...rather than try to pass off guys like Ryan Fitzpatrick and Ken Dorsey as "starting quarterbacks," I defined each team's "starting quarterback" by the following two rules. He is:

A) The guy the team would have started if there had been a week 18; and
B) The guy the team would have started if healthy.

Point A lets me not worry about subsequent free-agent moves, trades, retirements, and so on. Point B lets me take the guy who "should" be the starter for the team (like Tom Brady over Matt Cassel) rather than a guy forced into the role. Here's the list:

| Team | Quarterback | Round | Orig. Team? |
|------|-------------|-------|-------------|
| Baltimore Ravens | Joe Flacco | 1 | Y |
| Oakland Raiders | JaMarcus Russell | 1 | Y |
| Philadelphia Eagles | Donovan McNabb | 1 | Y |
| Atlanta Falcons | Matt Ryan | 1 | Y |
| Pittsburgh Steelers | Ben Roethlisberger | 1 | Y |
| New York Giants | Eli Manning | 1 | Y |
| Denver Broncos | Jay Cutler | 1 | Y |
| Washington Redskins | Jason Campbell | 1 | Y |
| Cleveland Browns | Brady Quinn | 1 | Y |
| San Diego Chargers | Philip Rivers | 1 | Y |
| Cincinnati Bengals | Carson Palmer | 1 | Y |
| Green Bay Packers | Aaron Rodgers | 1 | Y |
| Indianapolis Colts | Peyton Manning | 1 | Y |
| Detroit Lions | Daunte Culpepper | 1 | N |
| Miami Dolphins | Chad Pennington | 1 | N |
| Tennessee Titans | Kerry Collins | 1 | N |
| Minnesota Vikings | Tarvaris Jackson | 2 | Y |
| New York Jets | Brett Favre | 2 | N |
| Houston Texans | Matt Schaub | 2 | N |
| New Orleans Saints | Drew Brees | 2 | N |
| Buffalo Bills | Trent Edwards | 3 | Y |
| Chicago Bears | Kyle Orton | 4 | Y |
| Jacksonville Jaguars | David Garrard | 4 | Y |
| New England Patriots | Tom Brady | 6 | Y |
| St. Louis Rams | Marc Bulger | 6 | N |
| Seattle Seahawks | Matt Hasselbeck | 6 | N |
| Kansas City Chiefs | Tyler Thigpen | 7 | N |
| Dallas Cowboys | Tony Romo | U | Y |
| San Francisco 49ers | Shaun Hill | U | N |
| Carolina Panthers | Jake Delhomme | U | N |
| Arizona Cardinals | Kurt Warner | U | N |
| Tampa Bay Buccaneers | Jeff Garcia | U | N |

Of the starting quarterbacks for the 32 NFL teams:

16 were first-round draft picks
19 are with their original teams
13 are "1Y" players -- first-round picks with their original teams

So, that means that 13 of 32 teams in 2008, or about 41%, found their "starting quarterback" by drafting him in the first round. That's a solid percentage, but maybe not enough to be considered as the "only" way to do it.

What about the quality of these quarterbacks, at least as compared to the later-drafted quarterbacks? The only 1Y on the list who might face competition next year is JaMarcus Russell. Matt Ryan and Joe Flacco could be one-year wonders, granted, but everyone else is pretty firmly entrenched as his team's starter. The list of non-first-rounders includes a Hall-of-Famer (Brett Favre), a potential Hall-of-Famer (Tom Brady), two of the best quarterbacks of 2008 (Drew Brees and Kurt Warner), and a slew of former or current Pro Bowlers and overall above-average QBs (Matt Hasselbeck, Jake Delhomme, Jeff Garcia, Marc Bulger, Tony Romo). Overall, if I had to choose which group has the better QBs -- the first-rounders or the non-first-rounders -- I'd probably give the first-rounders the edge, but only barely.

This ignores the fact that there are two more notable 1Y players (Matt Leinart and Vince Young) lurking around who could be their team's primary starters very soon, depending on how the former first-rounder (Kerry Collins) and undrafted free agent (Kurt Warner) ahead of them play out. I also haven't taken draft position into account -- there might be a difference between being the #1 overall pick and the #23 overall pick (Brady Quinn). And, admittedly, this is a one-year sample size, though I have conducted a similar exercise, just for fun, the last few years. The list of starting quarterbacks hasn't changed too much, so it's always been around 1/2 first-rounders. Maybe I'll glance back ten years or so in a future post.

In any case, my conclusion is that, while it's not a bad idea to take a Matthew Stafford or Mark Sanchez early in the draft if your team needs a franchise QB, I don't think it's absolutely vital either. As with any position, good -- even great -- players can be found later in the draft, and quarterbacks probably aren't an exception to that rule.

Wednesday, August 20, 2008

Dome, Sweet Dome

Since the only other Vikings stories right now seem to be Tarvaris Jackson's knee and Bernard Berrian's toe, let me try introducing a more juicy topic, sure to inspire some well-thought, rational debate:

The Vikings are better off playing their home games in the Metrodome than they would be playing them outdoors.

Yeah, nobody will argue with that, will they?

Let's start with a few simple facts, namely the pre-Dome and post-Dome Vikings' regular season winning percentages:

| Era | Overall Pct. | Home Pct. | Road Pct. |
|-----|--------------|-----------|-----------|
| Outdoor Vikings (1961-1981) | .568 | .616 | .520 |
| Indoor Vikings (1982-2007) | .534 | .644 | .424 |


The fact that the "Indoor" Vikings have a better home winning percentage than the "Outdoor" Vikings is not, in itself, an indicator that the Metrodome is a better home field than an outdoor stadium would be. If Team A goes 12-4 with a 6-2 home record and Team B goes 8-8 with a 5-3 home record, it doesn't necessarily mean Team A has a better home-field advantage; they're probably a better team, period, and play better than Team B at home, on the road, on Mars, or wherever.

On the other hand, what if Team B had the extreme case of going 8-0 at home and 0-8 on the road? Does that mean they're an awesome home team, even better than Team A? Or does it mean they're an awful road team? It's difficult to say, though it's probably a little of both.

This, in a way, is what we face when comparing the two eras of Vikings play. The Outdoor Vikings were better overall (.568 to .534 winning percentage). Despite that, however, they had a worse home record than the Indoor Vikings (.616 to .644). What can we interpret from that?

Suppose we know, beyond a shadow of a doubt, that Team A is better than Team B. Now, suppose Team B has a better home record than Team A. This automatically means Team A must have a better road record than Team B (since A is better overall). From that, I can infer one, or possibly both of these things:

1) Team B has a bigger home field advantage than Team A; and/or
2) Team A is better at playing on the road than Team B

If the inverse of point #1 were true -- that Team A is a better home team than Team B -- then we would expect Team A to have a better home record than Team B, since they're better overall and better at home. Point #2 is probably correct, because A > B overall and A > B on the road.

Now, replace "Team A" with "Outdoor Vikings" and "Team B" with "Indoor Vikings." If playing outdoors was such an advantage for the clearly better team, why do they have a worse home record than the inferior team? If the Outdoor Vikings were a bad team, I could see them having a worse home record and still being better at home than the Indoor Vikings. But that's not the case here.

If you've read this far, you probably disagree with me, or at least did at the start of the post. Football is played by men! Indoor football in Minnesota is a travesty! The Metrodome makes Bud Grant weep! And of course it would be a huge advantage for the Vikings to play outdoors instead of cowering under a roof like sissies!

Not so fast.

Consider this: The Metrodome is an "active" home field advantage eight times a year. Last I heard, it was pretty loud in there and is definitely a hostile place to play, for all opponents. How many times a year would an outdoor stadium be a home field advantage for the Vikings? I'd say that maybe the last six games of the year -- from late November to (possibly) early January -- are the ones that are most likely to feature inclement winter weather. Half of those games, on average, will be at home, so that gives us three potential advantages. Toss in that the weather might not be bad or that we might be playing a team that's not "afraid" of the cold, like Green Bay or Chicago, and you get, I'd say, maybe 2.5 home games per year where we would have a definite advantage due to weather.

2.5 < 8

What about the playoffs, which are always in January, when it's cold? It's a small sample size, and you again have the "better team" question, but the Outdoor Vikings were 7-3 in home playoff games, and the Indoor Vikings are 5-3. Not exactly indicative, one way or the other.

And how great, really, is the home field advantage for "cold" teams? I don't have any numbers, but if a quarterback from Mississippi can be considered the best cold weather quarterback in history, how much of an adjustment can it really be for, say, a team from San Diego to play a team in Buffalo when there's snow on the ground? Players come from all around the nation and it's just as cold for the visitors as it is for the home team. Sure, LaDainian Tomlinson's from Texas, but Trent Edwards probably didn't face too many snowstorms growing up in the Bay Area of California, either. See also: Green Bay's home playoff performances this decade.

Now, what about the road issues? "Better team" arguments aside, it's hard to argue with a .520 vs. .424 winning percentage, and I'll say, with some confidence, that the Outdoor Vikings were a better road team than the Indoor Vikings. Maybe the Indoor Vikings' road struggles are the result of their cushy dome, artificial turf, and lack of wind and rain. Perhaps, overall, the Vikings would be a better team if they ditched the dome. But the notion that the Minnesota cold is an insurmountable obstacle for visiting opponents is probably just a piece of Vikings nostalgia that bears only passing resemblance to reality.

Tuesday, May 6, 2008

The third-year wideout "myth"

Wide receivers need at least three years to be good. Running backs can excel right out of the gate. Quarterbacks need at least a year or two -- and often more -- to excel.

These are all known "facts," especially to fantasy football fanatics looking for that great draft pick in August. But how much of it is true? The "third-year wide receiver" theory has been rebutted by many in recent years, yet it still persists. Why is that? Could there be a kernel of truth in the oft-stated belief that the third year is when wide receivers "put it all together"? Just how good are first-year backs? And when does a quarterback start to show real, or at least fantasy-caliber, skill?

To answer this, I've gone back to the ever-popular Historical Data Dominator. I then searched for rookie wide receivers from 1978 (when the NFL adopted a 16-game schedule) to 2007 with 1,000 yards receiving, second-year WRs with 1,000 yards, third-year WRs with 1,000 yards, etc. I did the same for running backs, using rushing yards. For quarterbacks, I elected to go with 2,400 yards and 16 TDs, an average of 150 yards and 1 TD per game.

Before you think 3,000 passing yards would be a better total, note that even in the pass-happy 2007, the average team threw for 3,652 yards and ran for 1,775, a ratio of just over two to one. If anything, the yardage threshold should be lower, or the rush/receive threshold should be 1,200 yards, so as to be more in line with the 2,400 passing yards. I'll get to that later.
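For what it's worth, the "just over two to one" ratio quoted above checks out, using the 2007 league averages from the paragraph:

```python
# 2007 per-team league averages, as quoted above.
pass_yards = 3652
rush_yards = 1775

ratio = pass_yards / rush_yards
print(f"{ratio:.2f} to 1")  # 2.06 to 1 -- "just over two to one"
```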

It's not a perfect comparison, but 1,000 yards rushing/receiving is generally considered to make a player "good," while a young QB who throws for 2,400 yards and 16 TDs is also considered relatively competent (provided he keeps his interceptions down). Here are the numbers of players, from 1978 to 2007, who met these thresholds in each year of their careers:

Year   RB   WR   QB
 1     46   10    4
 2     52   35   35
 3     58   52   40
 4     57   57   45
 5     55   57   44
 6     43   50   42
 7     29   45   39
 8     27   36   37
 9     15   32   29
10      8   24   31


Without a doubt, third-year wide receivers, as a whole, significantly outperform their second-year counterparts. However, it should be noted that the difference between third-year WRs and second-year WRs (17) is less than the difference between second-year WRs and rookie WRs (25). So, yes, those third-year guys might excel, but don't be afraid to take a flier on a good-looking second-year wideout in your draft.
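Pulling the WR column straight out of the table above, the year-over-year jumps are easy to verify:

```python
# 1,000-yard receiving seasons by year in the league (from the table above).
wr_1000 = {1: 10, 2: 35, 3: 52, 4: 57, 5: 57}

jump_1_to_2 = wr_1000[2] - wr_1000[1]
jump_2_to_3 = wr_1000[3] - wr_1000[2]
print(jump_1_to_2, jump_2_to_3)  # 25 17
```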

As for running backs, the common wisdom -- that first-year running backs are perfectly valid draft picks -- also seems to hold true, though, as with wide receivers, their peak years seem to be seasons three through five. There's a significant drop-off after year six, though, and another precipitous drop after year eight. Again, no surprise there; running backs don't have the greatest shelf life. (LaDainian Tomlinson, it should be noted, will be entering his eighth season in 2008.)

You probably don't need to know that drafting rookie quarterbacks is a risky proposition, and this chart supports that. After that rookie year, though, you get nearly the same number of 2,400-yard, 16-TD quarterbacks every season through year 10. That's probably due in large part to teams not being willing to give rookie QBs playing time (even though it might not be a bad idea, long term) and throwing them into the fire their second or third year.

That said, 1,000 yards isn't that great; it's only 62.5 yards per game. These days, 1,200 yards is probably a better indicator of stardom, or at least of a good fantasy player. Here are the numbers of RBs and WRs who managed 1,200 yards, again by year in the league, from 1978 to 2007:

Year   RB   WR
 1     19    2
 2     28   17
 3     29   20
 4     34   25
 5     38   24
 6     26   24
 7     15   22
 8     12   10
 9     10    7
10      6    8


Those are some notably different results. For this level of production, first-year running backs are a relatively poor choice. There was only a difference of 6 (52 to 46) between first- and second-year RBs getting 1,000 yards, but the difference for 1,200 yards is 9 (28 to 19). And wide receivers? Only Randy Moss and Anquan Boldin have managed 1,200 yards as a rookie since 1978. (Bill Groman also did it in 1960.) But there's very little difference from season two on (until season seven, at least). I ran a similar study with QBs, using 3,000 yards and 24 TDs as a benchmark, but it didn't yield any notable results.
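The widening first-to-second-year gap for running backs, taken from the two tables above, looks like this:

```python
# RB seasons clearing each threshold, years one and two (from the tables above).
rb_1000 = {1: 46, 2: 52}
rb_1200 = {1: 19, 2: 28}

gap_1000 = rb_1000[2] - rb_1000[1]
gap_1200 = rb_1200[2] - rb_1200[1]
print(gap_1000, gap_1200)  # 6 9
```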

So what does it all mean? Yes, third-year wide receivers are actually a pretty good choice, but don't overlook promising second-year players, especially if you're looking for big (1,200+ yard) production. While you can get decent production from a first-year running back, you're not all that likely to find a #1-caliber guy among rookie runners (Adrian Peterson notwithstanding). And don't even think about drafting Matt Ryan, Joe Flacco, Brian Brohm, or John David Booty. Not this year, at least.

Thursday, May 1, 2008

Start the rookie QB!

With John David Booty under the Vikings' control (and, also of interest to Vikings fans, with Brian Brohm wearing a Packer uniform in 2008) the call will likely go out at some point during the season that the team put in the rookie QB to inject some life into the offense. Just as quickly, sports pundits will decry the team's use of the rookie QB, saying instead that rookie QBs should be brought along slowly, perhaps not even starting a game until their second season, at least.

It seems to me that the whole idea of not starting a rookie QB right out of the gate began when Steve McNair was drafted #3 overall by the then-Houston Oilers in 1995. Despite his high draft status, he played sparingly his first two seasons -- a seeming aberration at the time -- sitting behind Chris Chandler on the Oilers' depth chart. McNair experienced great success once he finally got the chance to start, and he is always cited as Exhibit A for why rookie quarterbacks shouldn't see playing time their first year. The late starts to the careers of Tom Brady and Brett Favre, and the disastrous careers of Ryan Leaf and Tim Couch, each of whom started several games as a rookie, are Exhibit A1.

But what about Peyton Manning, who's started every game of his NFL career? And where does someone like Drew Bledsoe fit in? Is giving a quarterback lots of playing time as a rookie just setting him up to fail later in his career? Or is that just a bunch of hogwash perpetuated by sporadic evidence?

To answer this, I went to the Historical Data Dominator on footballguys.com (which, for whatever reason, seems to be working for free now; I thought it required a subscription fee). I searched for all QBs from 1978 to 2002 who had at least 240 passes in their rookie season. I chose 1978 as my starting point because that was the year the NFL went to a 16-game schedule, allowing me to use 15 passes a game (times 16 games = 240) as my definition of a QB who saw "significant" action. Cutting the search off at 2002 gives me a nice, tidy 25 years of data while also letting me properly evaluate players who were drafted more than five years ago, providing a reasonable snapshot of their careers.

That search, shown here and sorted by yards, yields 30 quarterbacks. Three of them threw for more than 3,000 yards, and all three (Manning, Warren Moon, and Jim Kelly) are Hall-of-Fame talents. The bottom of the list is occupied by Ryan Leaf, Steve Fuller, and Chad Hutchinson, and in between are quarterbacks good, bad, and awful. I've divided the quarterbacks -- somewhat arbitrarily, but hey, it's my blog -- into three categories.

Category A includes the great quarterbacks, the Hall-of-Fame-caliber players or very nearly.
Category B includes players who fall short of greatness, but still had (or are having) solid careers.
Category C includes everyone else, the abject failures.

Here's how they fall out:

Category A: Manning, Kelly, Moon, Marino, Aikman, P. Simms(?), Elway
Category B: Collins, Garcia, Bledsoe, Plummer, Batch(?), George, O'Donnell, Kosar, DeBerg
Category C: Weinke, Mirer, Brock(?), Carr(?), Banks, Couch, Harrington(?), Komlo, Trudeau, Shuler, Hutchinson, Fuller, Leaf

Phil Simms is the only member of the A group not in (or destined for) the Hall of Fame, but I thought his two Super Bowl rings should count for something. Charlie Batch wavers between B and C. He was lousy with the Lions but has carved out a second career as a solid backup with the Steelers, and if anything happens to Ben Roethlisberger, the team would probably be in good hands. Similarly, David Carr and Joey Harrington each seemed destined for C-land but are young enough that they might turn their careers around. As for Dieter Brock, he posted decent numbers as a rookie for the Rams in 1985 (2,600 yards, 16 TD, 13 INT). I don't know why he never threw another pass in the NFL -- maybe a Rams fan can enlighten me?

(Of course, as with Ichiro Suzuki, "rookie" can be a relative term. Brock, Moon, and Jeff Garcia all played in the CFL before coming to the NFL, and Jim Kelly starred in the USFL. And Chris Weinke was 29 his rookie season.)

In any case, the breakdown is 7 "A" quarterbacks, 9 "B" quarterbacks, and 14 "C" quarterbacks. That qualifies better than 50% (16 of 30) of rookie quarterbacks who threw 240 or more passes their first season as at least "decent" throughout their careers. At a minimum, the careers of those 16 weren't "ruined" by getting starts as rookies. In fact, 7 of the 30 went on to exceptional careers. Of course, six of those seven (all save Moon) were first-round draft picks, and three (Elway, Manning, and Aikman) were first overall. This means that, in all likelihood, they were a) good and b) going to get the chance to start as rookies. I'm not going to grade out every quarterback on the above list by draft round; suffice it to say there are high (Leaf, Carr, Collins) and mid-to-low (Batch, O'Donnell, DeBerg) picks sprinkled throughout the B and C levels.
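The A/B/C tally above (my own, admittedly arbitrary, grades) works out like so:

```python
# Career outcomes for the 30 rookie QBs with 240+ attempts, graded A/B/C above.
grades = {"A": 7, "B": 9, "C": 14}

total = sum(grades.values())
decent = grades["A"] + grades["B"]  # "A" and "B" careers count as at least decent
print(f"{decent}/{total} = {decent / total:.0%}")  # 16/30 = 53%
```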

At the very least, when the inevitable talk of, say, putting Booty, Brohm, or especially first-round picks Matt Ryan and Joe Flacco in the starting lineups for their respective teams this year pops up, don't jump immediately onto the "Starting a rookie QB is bad for him" bandwagon. About half the time, that may be true (and the C-level quarterbacks might have been bad no matter when they first saw significant action); but you have just as good a chance of getting a solid player, or even a star, for years to come.