On Reinhart and Rogoff
As Nob deftly noted, there was big news on Tuesday as an influential 2010 study by professors Carmen Reinhart and Kenneth Rogoff was found, to put it lightly, to be deeply flawed. The paper’s conclusions were well received by the austerity camp — Paul Ryan, David Brooks, Joe Scarborough, Erskine Bowles, Alan Simpson — for finding that a high debt-to-GDP ratio was associated with (not the cause of, its authors inconsistently maintained) low growth rates. The magic number was 90 percent; pass that, the paper implied, and your economy was toast.
The Roosevelt Institute’s Mike Konczal has the definitive post on the issue, but this Jeff Spross roundup at ThinkProgress is great if you just want the 101:
First, Reinhart and Rogoff excluded the post-war years for certain countries that enjoyed robust economic growth despite debt levels well over 90 percent. They also chose a skewed method of weighting the data: for example, New Zealand’s single year of terrible growth while over the 90 percent threshold wound up counting just as much as Britain’s 19 years of healthy growth. And they even incorrectly input at least one Excel spreadsheet formula, wrongly excluding several countries from their calculations.
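To make the stakes concrete, here’s a toy illustration (the numbers and country labels are invented, not anything from the actual paper) of how quietly dropping a few rows from a spreadsheet-style average can flip the result:

```python
# Invented numbers, NOT the Reinhart-Rogoff dataset: growth rates (%) for
# country-years above the 90 percent debt threshold.
growth = {
    "Country A": [2.4, 2.6, 2.1],  # suppose these rows fall outside the formula's range
    "Country B": [-7.6],
    "Country C": [1.9, 2.2],
}

# Average over every observation vs. the same average with Country A silently dropped.
all_years = [g for rates in growth.values() for g in rates]
excluding_a = [g for name, rates in growth.items() if name != "Country A" for g in rates]

print(sum(all_years) / len(all_years))      # full sample: positive, about 0.6%
print(sum(excluding_a) / len(excluding_a))  # with A dropped: negative, about -1.2%
```

Same kind of arithmetic slip, but it turns mediocre growth into an apparent contraction.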
I couldn’t help but notice that all of these wrong signs were pointing in the same direction (to-the-right, to-the-right), so I asked Spross what he made of Reinhart and Rogoff and how conscious they might be of the way their paper’s been turned into a political shibboleth. But in way fewer words because, y’know, Twitter. His response:
I guess I’m cynical because it’s hard for me to see the authors as such passive bystanders in this sudden farce. Again, if their mistakes were more varied, if some pointed toward Keynes and others toward Hayek, it’d be easier to imagine they were too intoxicated by the attention and praise to counsel restraint.
As it looks to me now, the two of them made some very questionable decisions; and then they allowed themselves to be made the fig leaves for an austerity movement whose fundamental goals — cutting social services (and, in Europe, raising taxes) and breaking unions — were determined long, long before either professor made their first Excel fuck-up.
One of the key points from Konczal’s post (http://www.nextnewdeal.net/rortybomb/researchers-finally-replicated-reinhart-rogoff-and-there-are-serious-problems#.UW147o4A23o.twitter):
“They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don’t get their controversial result. Let’s investigate further:”
(bolding is mine)
So it looks like they selectively trimmed the data.
Second, their weighting was extreme. There’s a question of how to weight country A for 19 years vs. country B for 1 year. A 19:1 weight really assumes independence between successive years for country A, but treating 19 years the same as 1 year assumes that there is no year-to-year variation. So on the range of non-crazy weightings (anywhere from 1:1 to 19:1), they chose an extreme, and the extreme that favored the same hypothesis.
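A back-of-the-envelope sketch (made-up numbers, nothing from the actual dataset) of how far apart those two “non-crazy” extremes can land:

```python
# Hypothetical figures: Country A has 19 high-debt years of healthy growth,
# Country B a single high-debt year of terrible growth.
country_a = [2.5] * 19   # 19 country-years at 2.5% growth
country_b = [-7.6]       # 1 country-year at -7.6% growth

# 19:1 extreme: pool all country-years, each year counts once.
pooled = country_a + country_b
year_weighted = sum(pooled) / len(pooled)

# 1:1 extreme: average within each country first, then average the countries,
# so the single bad year counts as much as the 19 good ones.
country_means = [sum(country_a) / len(country_a), sum(country_b) / len(country_b)]
country_weighted = sum(country_means) / len(country_means)

print(f"each country-year counts once: {year_weighted:.2f}%")   # positive, around 2%
print(f"each country counts once:      {country_weighted:.2f}%") # negative, around -2.5%
```

Same data, opposite sign, purely from the choice of denominator.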
At this point their work is really suspect, and not just for innocent Excel errors.
I think it’s important to keep in mind the Statistical Winner’s Curse, or regression-toward-the-mean phenomenon, that often happens in academic research. Imagine there’s some well-known law that many research groups are testing, with hundreds of independent experiments all coming up negative and therefore not being published (we already knew that!). One group, purely by chance, gets a result that has a less than 5% chance of having occurred under the law – statistically significant – and they go ahead and publish this result “refuting” the law. In reality, their result should be adjusted for the other hundreds of attempts that have been made and failed, but because those negative results are unpublished, the group has done absolutely nothing wrong in interpreting their findings.
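A quick simulation of that dynamic (purely illustrative; the 200 groups, sample size, and normal noise are my own made-up setup) shows how reliably a “significant” result turns up even when there is nothing to find:

```python
# Many groups test a true null hypothesis at the 5% level; a few "succeed"
# by chance, and in this story only those results would get written up.
import random

random.seed(0)
n_groups, n_per_group = 200, 50

significant = 0
for _ in range(n_groups):
    # each group draws a sample where the true effect is exactly zero
    sample = [random.gauss(0, 1) for _ in range(n_per_group)]
    mean = sum(sample) / n_per_group
    se = (sum((x - mean) ** 2 for x in sample) / (n_per_group - 1)) ** 0.5 / n_per_group ** 0.5
    if abs(mean / se) > 1.96:  # roughly the 5% two-sided threshold
        significant += 1

print(f"{significant} of {n_groups} groups found a 'significant' effect of nothing")
```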
When you’re thinking about whether Reinhart and Rogoff made these mistakes by accident (or by chance) it’s not enough to say “What’s the chance that a researcher would make these three mistakes” but rather “What’s the chance that, out of many researchers studying this, one would make these three mistakes”. As Barry points out up-thread, accidental coding errors happen, but it’s the selective exclusion of data that should have us concerned.
It’s the multiple ‘errors’, all in the same direction.
It’s the multiple ‘errors’, all in the same direction.
But that’s the only way it could go. Think of it as a kind of evolution: hundreds of groups are examining the data and making small errors, with most of them leading to a negative result and no publications; only those groups that accumulate small errors leading to a new finding will publish. So it will invariably look like the published results are biased towards controversy when in reality we just never see the stuff that’s equally biased away from controversy to even it out.
By the way, can we finally bury the idea that Paul Ryan is some kind of numbers wonk when he says things like this about an association:
Paul Ryan was only a numbers guy in the sense that he could spout off numbers. Everything he ever said was shredded the minute anybody got out their calculator.
Journalists are easy to BS with numbers, and even easier to BS when the elites want them to be.
I’m not an economist. Not even close. But for whatever my own peculiar OCD reasons, I’m a stickler for both facts and, especially, unbiased analysis, inasmuch as either is doable, and equally inasmuch as I’m capable of discernment. Muddy waters, that.
Anyhoo, I’ve long waded through all manner of internet detritus to find a few expert folks, of varying expertise, who seem to share my own OCD compulsions. Which more or less explains why I read the good folks over at The Monkey Cage, even though, as a rule, I avoid their kind like they were the plague because this stuff is so painfully foreign to me.
Gelman offers, imo, the sagest advice possible to R & R in this circumstance: admit you f**ked up, move on. Because really, if R & R don’t come clean, what chance do they have of holding on to any legitimacy?
Then again, if R & R are ultimately influenced by an ideological agenda that leaves them unconcerned with their standing vis-à-vis legitimacy, especially among their peers, then I suppose we’ll come to learn that soon enough.
Yeah. And add to that “admit you fucked up about ‘holding the course’”. We can all be bullheaded (well, maybe me more than most). But you don’t hold onto credibility like that.
It seems to me that cred is the motivating factor. I mean, you and I don’t care if our opinions carry any weight, but surely R & R care. I mean, it’s how they earn a living. No cred? No living.
Then again, there’s always Fox News. They’ve a growing list of discredited dunderheads who appear as “experts”.
As you’ve pointed out, right-wingers with opinions favorable to the elites will be taken care of. Also, this is macroeconomics – has any right-wing error *ever* been punished?
http://delong.typepad.com/sdj/
Yegads.