Pathological [Updated]
While the dust in the case of the fabricated data published in Science last December remains very much unsettled, the deadline by which the primary author, Michael LaCour, claimed he would respond to the allegations has passed [Correction: his self-imposed deadline has not yet passed; it is tomorrow, May 29. Note to self: try looking at a calendar sometime.], seemingly without a response, while more accusations of dishonesty on LaCour’s part have surfaced.
First, another fraud accusation has surfaced, this time concerning unpublished results (the results have, however, been presented at at least one conference) in a paper titled “The Echo Chambers are Empty: Direct Evidence of Balanced, Not Biased Exposure to Mass Media”. Economist Tim Groseclose looked into LaCour’s methods and writes:
The paper examines the news “diet” of voters. It concludes that the news diet of Republicans hardly differs from that of Democrats. In contrast to conventional wisdom, voters do not, primarily, get their news from “echo chambers.”
I could find no problem with the main results of the paper. However, to derive those results, LaCour writes a section describing a way to measure media bias, which he uses to classify a media outlet as “conservative,” “liberal,” or “centrist.” I found many problems with this section, and I am highly confident that LaCour faked the results for this section.
This is just an accusation at this point, and Groseclose has not yet seen the raw data or the code LaCour used to derive it, but he presents some strong statistical evidence for the accusation. However, we can be fairly certain about another example of LaCour’s dishonesty. Virginia Hughes of Buzzfeed discovered lies on his CV concerning his research funding, which she details in perhaps the best article I have read on the case:
In the study’s acknowledgements, LaCour states that he received funding from three organizations — the Ford Foundation, Williams Institute at UCLA, and the Evelyn and Walter Haas, Jr., Fund. But when contacted by BuzzFeed News, all three funders denied having any involvement with LaCour and his work. (In 2012, the Haas, Jr. Fund gave a grant to the Los Angeles LGBT Center related to their canvassing work, but the Center said that LaCour’s involvement did not begin until 2013.)
There are at least two CVs that were reportedly published on LaCour’s website but have since been taken down. Both list hundreds of thousands of dollars in grants for his work. One of these listings, a $160,000 grant in 2014 from the Jay and Rose Phillips Family Foundation of Minnesota, was made up, according to reporting by Jesse Singal at The Science of Us.
If you were thinking that things couldn’t get worse than made-up data in one, possibly two, papers and made-up funding sources, you were wrong. It turns out LaCour may also have lied about receiving a teaching award, as Jesse Singal of Science of Us discovered:
In that section [of LaCour’s CV], he lists as one of his awards: “Emerging Instructor Award, UCLA Office of Instructional Development, 2013-2014. One of three UCLA graduate student instructors selected for excellence in their first year of teaching” (formatting his). But a staffer in the office of instructional development told Science of Us that it does not give out an award of that name. “I don’t know if he either misnamed our department or if it’s from another department,” said the staffer, who only agreed to be quoted if I didn’t use her name. “I’m not clear on what happened.”
To make himself seem even sleazier, when Singal contacted him about the award, LaCour first asked Singal not to publish anything on the award until he released his official response, then took his CV off his webpage, then put a new version of the CV that does not list the award back up, after which he emailed Singal saying,
“I’m not sure which CV you are referring to, but the CV posted on my website has not had that information or the grants listed for at least a year.”
In my original post on the case, I speculated about how LaCour may have come to the decision to commit fraud of this magnitude, suggesting that he may have found himself between a professional rock and a hard place and chosen to lie his way out of it. As more and more examples of possible dishonesty have surfaced, I have started to believe that the real explanation may simply be that LaCour isn’t a very honest person.
UPDATE: It is official! Science has retracted the paper. The statement just posted gives two confirmed (via LaCour’s attorney) cases of false information in the paper, the statistical irregularities first noted by Broockman and Kalla, and LaCour’s failure to produce his data as the reasons for the retraction. The statement ends, “Michael J. LaCour does not agree to this Retraction.”
It seems the rot in this bad apple goes all the way to LaCour.Report
I… just… Can we get the pun police in here?Report
Sounds like it was an in cider job to me.Report
Ummm cider.Report
I see we’re gonna need the patrol wagon.Report
In this case, Chris may endorse a “rough ride”.Report
Oh good lord.Report
Agreed Dave. Puns everywhere, disgraceful. -5 points to Ravenclaw, Slytherin and Hufflepuff.Report
I should warn you that these Harry Potter references just make me want to dumble down on the puns.Report
Aaaaaaaaaah!
Is capital punishment on the table with puns? Because if so, I hope the prosecutor requests death by lethal interjection.Report
Oi Gevalt!Report
Puns are considered a form of torture by some people. This explains why my friend the troll is fiendishly good at them.
I do quite like puns, so long as we aren’t talking Xanth.Report
Damn!Report
-10 Points!!!Report
Fishing betasReport
With fish, betas are alphas.Report
Relevant.Report
I want to lodge a complaint with whoever’s in charge at OT. I put off reading Chris’s post and the thread until I had enough free time to savor the incisive discussion of the issue. Instead, the first 17 or so comments were about puns. 🙁
Please correct the error.Report
I am afraid we are going to pun-t the issue.Report
Arrgh!Report
I didn’t know you were a pirate.Report
More seriously (maybe because I can’t think of any puns), how common is this sort of stuff in science research?Report
@saul-degraw – not sure if you saw that io9 link I put on Chris’ earlier post this AM on how easy it is to perpetrate and disseminate bunk; but even if outright fraud is rare, I suspect that there is a lot – a LOT – of bad information being passed through.
Sometimes because the researchers are being careless (or dishonest); sometimes because the people reporting on it do not understand what the research really means.Report
A bad actor in a society that has a great deal of trust can get away with… well, maybe not “murder”, but a hell of a lot.
And since it’s not a toggle but a continuum, it’s easy to slide down the slippery slope of good action to bad action via lazy actions or greedy actions or what have you.
And since the only people who can police you are people who likely won’t ever see an upside to doing so, it’s a recipe for only the most egregious examples of bad action getting noticed by the rubes.Report
Oh, there’s TONS of upside in finding that everyone’s wrong! That’ll get you a big paper.
Where there’s no upside is in documenting the non-consensual research that so much of the “peer-reviewed” and IRBed research rests on.Report
Glyph,
I continue to be less concerned about bad information than about bad facts that people draw conclusions from.
http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/
http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble
Out of 53 landmark cancer studies, they could replicate six of them. THIS is your life we’re talking about.
The only well-replicated therapy for strokes is giving people aspirin early. All other therapies haven’t been well verified.Report
57.2% of all studies use completely invented data.Report
I don’t think anyone really has any idea. I mean, we can look at how often fraud is discovered — maybe one high profile case a year, or thereabouts, and a few other smaller ones, plus likely more that never see the light of day because a grad student or post doc or whoever never publishes the fabricated data — but there are undoubtedly cases that go undiscovered, some of which will never be discovered because the findings simply aren’t important enough or the literature has already moved on, so no one’s going to go back and look at the numbers really closely. And then there is the almost certainly more common case of simply incorrect results due to lack of proper statistical controls (see the comments earlier today on the original post), which can be dishonest (researchers know perfectly well what they’re doing when they run a bunch of studies and then only publish the one that “works”), but much of which is just the result of a lack of attention or statistical/methodological knowledge.
I don’t think I’ve ever seen a case quite like this. I mean, Hauser’s fraud may have been pretty widespread (and very nearly went undiscovered, and if it hadn’t been for grad students, might have been undiscoverable), and his research was widely reported in the popular press, so it was a high profile case, but his lying didn’t extend to every aspect of his CV, as this dude’s may have.Report
He does seem to be the Stephen Glass of ScienceReport
At some point in the last few years, Bayer announced that they were only able to reproduce the results in about one-third of the papers published in technical journals about potential new drug molecules. They believe that the practice of fudging the statistics by repeating trials until a positive result appears, then reporting that trial, has become very widespread.Report
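To put a rough number on how much damage that practice does, here is a minimal simulation sketch (purely hypothetical sample sizes and retry counts, not Bayer’s or anyone’s actual protocol): every “lab” studies an effect that is truly zero, but each one gets to rerun its trial up to five times and report only the trial that came out significant.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_trial(n=30):
    """Two groups drawn from the SAME distribution, so any 'effect' is pure noise."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

def reported_p(max_tries=5, alpha=0.05):
    """Repeat the trial until it 'works', then report that trial's p-value."""
    best = 1.0
    for _ in range(max_tries):
        p = one_trial()
        best = min(best, p)
        if p < alpha:
            break
    return best

n_labs = 10_000
honest = sum(one_trial() < 0.05 for _ in range(n_labs)) / n_labs
fudged = sum(reported_p() < 0.05 for _ in range(n_labs)) / n_labs

print(f"false-positive rate, one trial per lab:   {honest:.1%}")   # about 5%
print(f"false-positive rate, best of five trials: {fudged:.1%}")   # roughly 20-25%
```

Nothing in any individual trial is faked; the dishonesty lives entirely in which trial gets reported, which is part of why this is so much harder to catch than outright fabrication.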
You know the phenomenon where something that is used as a signalling device is discovered to be used as a signalling device and, thereafter, becomes useless for signalling?
I think I’m still surprised when it manifests anew.
You’d think I’d have learned by now.Report
Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.”Report
In further news, minimum wage laws uniformly lead to massive job losses.Report
LaCour must have a very good memory.Report
Oh, great essay.
Have I ever told you about my obsession with his essays? It is not a mild one.Report
Of all the things I was forced to read in high school, this is the only thing I do remember, which is of some wonder to me, since I know head injury has led to lost memories and memories connected in all sorts of odd ways not necessarily reflective of actual events.
I don’t know that I’ve read any others, and perhaps I shall.
This whole thing reminds me of those folk who get fancy jobs only to have it revealed they made much of their credentials up, and weren’t as accomplished as they seem on paper. My favorite was local, a guy named Warren Cook who ran Jackson Lab in Bar Harbor, ME — the place that supplies genetically-tailored mice for research. At the time I had him in my rolodex, he was considered a hot shot and major success by a whole lot of folk around the state and in the bio-tech industries which consume so many of those poor little mice. And boom. He’d made the stuff up on his resume years before and never purged it, and out the door he went.Report
That is, essentially, what LaCour did, and it got him a great job. I imagine he won’t be able to actually start that great job, though, as I assume the offer will be rescinded. Unless, perhaps, he reveals that all of this was his actual research, and we’re all now data points in his dissertation.
On Montaigne: when I was a sophomore in college, a bright-eyed and bushy-tailed philosophy major, a professor gave me a copy of the essays with a bookmark on the one titled “That to Study Philosophy Is to Learn to Die.” I fell in love immediately.Report
I’d have loved to see the IRB committee discussions that led to approving that.Report
Gotta admit, if he pulled this maneuver it’d be pretty baller.Report
People HAVE gotten good journals to publish research as “stings.”
It IS a good way to evaluate how well the peer-review process works.
Other people have gotten journal articles out of trolling American Conservatives (they got a shiny ddos attack, which they were able to fend off, which was the point of the article).Report
There was a big scandal a few years ago when it was revealed that the Director of Admissions at MIT made up all of her credentials. She wasn’t even a college graduate.Report
There’s a wonderful book by Geoffrey Wolff (Tobias’s brother) about their father, who was exactly that kind of con man.Report
[A]fter a tongue has once got the knack of lying, ’tis not to be imagined how impossible it is to reclaim it whence it comes to pass that we see some, who are otherwise very honest men, so subject and enslaved to this vice.Report
That’s not true.Report
I’ve added it in an update, but I’ll stick it here too, as it’s a rather big deal: Science has now officially retracted the paper. Statement here. Best part is the final sentence:
It’s like he’s a shovel and can’t help but view everything as an opportunity to dig his hole deeper.Report
Oh dear Lord, the results in the second paper were almost certainly fabricated. My favorite part:
In other words, he’s not even a very good fabricator.
I don’t know how this could possibly get worse for the dude, but given its brief history it probably will, and dramatically so, within the next week.Report
If we are lucky, this will result in attempts to replicate results from the other things that Science has published over the last 3-5 years.
If they are lucky, the results will be replicated.Report
This is what has been on my mind regarding this.Report
I’d rather replications happen organically, as they will tend to for anything big enough to be published in Science. Even when attempts to replicate fail, they will tend to involve requesting the data from the original research, which is usually enough to catch fabrication, though not enough to solve the “run it a bunch of times until it comes out the way we want it” issue.Report
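For what it’s worth, the kind of check that getting the raw data makes possible is conceptually simple. Here is a toy sketch (made-up numbers and a generic 0-100 item, not a reconstruction of Broockman and Kalla’s actual analysis): two genuinely independent samples from the same population should still differ by ordinary sampling noise, so a dataset whose empirical distribution matches a reference survey essentially exactly is itself a red flag.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical reference survey: a 0-100 "feeling thermometer" item.
reference = rng.integers(0, 101, size=5000).astype(float)

# An honest, independent sample from the same population still wobbles a bit.
independent = rng.integers(0, 101, size=5000).astype(float)

# A "fabricated" sample built by simply copying the reference data.
copied = reference.copy()

for label, sample in [("independent", independent), ("copied", copied)]:
    ks = stats.ks_2samp(reference, sample)
    print(f"{label:12s} KS distance = {ks.statistic:.4f}")

# Independent samples of this size typically land somewhere around 0.01-0.03;
# a distance of zero means the "new" sample is literally the same data.
```

In practice people look at much more than one statistic, but even this crude comparison is impossible to run if the raw data has been “deleted.”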
First Rule of Holes : when you find yourself in one, stop digging.
Second Rule of Holes : trying to dig out of the hole with heavy machinery & high explosives is ill advised.Report
Wow, this just keeps getting uglier. Thanks for keeping on top of it Chris.Report
LaCour issued his response last night.
Cynics called this a “Friday Night News Dump” but it was pointed out to me that, on Friday nights, Social Scientists were likely to be home and unoccupied.Report
Linky goodnessReport
http://www.documentcloud.org/documents/2090102-lacour-response-05-29-2015.html
I just finished reading it. It makes some accusations of its own, uses some vague claims with links that don’t support them, and concludes with a statistical argument that I can’t verify, but don’t find convincing. All in all, it seems like an attempt to undermine the accusers.
I’ll probably say more later.Report
Briefly: he admits lying about the funding and the compensation. He says he really used a raffle for Apple products, and links to receipts with some timing issues (some of the receipts come after data collection). He says he deleted the raw data per UCLA policy, but the policy, which he quotes, only requires deleting P.I.I.
He doesn’t challenge the claim that the survey company never ran, nor could it have run, the surveys. Instead, he provides details and evidence of unrelated conversations with Qualtrics.
He says another researcher has replicated his finding, but we do not yet know whether this is true.
And finally, the statistical argument (I should add he accuses Broockman et al. of mistakes, a lack of ethics, and dishonesty throughout, culminating here in the data): he says they “manipulated” the other study’s data to make it look like his, used the wrong field anyway, and screwed up the reliability data.
The “manipulation” was simply recoding non-responses to 50. The wrong variable was the recoded version, rather than the raw one LaCour says they should have used. If you ask me, if merely recoding non-responses yields data identical to yours, your data is a copy of the other data. He is making a play for the public here, not other researchers, hoping that “manipulated” convinces the same folks who thought “hide the decline” indicated conspiracy.
The reliability stats are a bit trickier, and will require verification with the data, but to me, by that point the fraud is demonstrated, so the reliability argument he makes is superfluous.
Added: I forgot to include this: the evidence he provides appears to indicate that he either never got IRB approval or got it after data was collected, which by itself likely would have cost him his dissertation and the Science publication. However, that is an even bigger failure, as some, perhaps many in his department should have been aware of this.Report
I read the thing too, and only – barely! – understood the parts written in English. But two things jumped out at me. He admits to lying about the funding; and he admits he destroyed all the raw data while strenuously claiming that doing so was not only the norm in his field, but required by the institution he works for. The first is just an admission of a straight up lie. The second, tho, seems like a more nuanced fabrication, based on interpreting an institutional policy self-servingly. He’s basically saying: “I have my conclusions, you guys have yours, and the only thing that could settle the matter has been destroyed as per institutional requirements.”
I have a hard time believing that institutional norms would require destroying data prior to a vetting of conclusions derived from that very data. But that’s not my world. Seems very counterintuitive to my understanding of scientific practices.Report
They don’t. He’s being purposefully deceptive: the policy, which he quotes, only requires that he delete the personally identifiable information: names, addresses, that sort of thing.Report
OMG OMG OMG! I just realized the most hilarious aspect of his reply, and a sure sign that he doesn’t know what he’s doing.
As I said in the previous comment, he accuses Broockman et al. of using the wrong field in the other survey to compare to his. The most damning claim Broockman made was that LaCour’s data was indistinguishable from that of the other survey, so this is supposed to defeat that claim.
LaCour said they compared a field with recoded responses, when they should have compared the one with raw responses. However, it’s important to note that the field with unrecoded responses isn’t really raw. It recodes all non-responses to 101 or a value over 100, on a measure from 0 to 100. This allows anyone analyzing the data to quickly and easily deal with the non-responses (missing values and non-responses can be handled a variety of ways).
The big difference that LaCour points out between his data and the raw field in the other survey? The modal response in the other one is 100. However, this is because it’s including the non-responses, coded 101, as 100 in the histogram. In other words, he’s just shown that his data includes recoded non-responses, and has inadvertently demonstrated that Broockman used the correct variable and he did just copy the other survey’s data.Report
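To make that concrete, here is a rough sketch of the comparison as I understand it (the exact coding scheme and the toy values are my guesses for illustration, not the real CCAP codebook): non-responses stored as values above 100 pile up at the top of the scale in a naive histogram, and recoding them to the 50 midpoint is a one-line cleanup step, not a way to manufacture a matching dataset.

```python
import numpy as np

# Hypothetical 0-100 thermometer responses; values > 100 mark non-response.
# (This coding is an assumption for illustration, not the actual CCAP codebook.)
raw = np.array([73, 101, 12, 50, 101, 88, 101, 34, 95, 101], dtype=float)

# If a histogram or a naive clip folds the >100 codes into the top of the scale,
# the non-responses show up as a spurious mode at 100 (LaCour's "big difference").
folded = np.minimum(raw, 100)
values, counts = np.unique(folded, return_counts=True)
print("modal value with non-responses folded in:", values[np.argmax(counts)])  # 100.0

# The "manipulation" he complains about: recode non-responses to the midpoint, 50.
recoded = np.where(raw > 100, 50.0, raw)

# If a supposedly independent survey is identical to the reference after that
# one-liner, the simpler explanation is that it *is* the reference data.
suspect = recoded.copy()  # stand-in for the dataset under scrutiny
print("identical after recode:", np.array_equal(suspect, recoded))  # True
```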
Going through that, it seems like he’s running with the whole “deny everything” thing but I will need a translator anyway.
Edit: thanks, Chris.Report
Chris,
You’ve done a really good job of explaining this, both in your posts and in your comments. This fiasco is something I would have had only a very dim knowledge of, and it’s interesting to see what’s at play. (It’s also interesting to see how claims of fraud in the sciences compare to those that go down in history, my discipline.)Report
Thanks Gabriel, I appreciate that.Report