Dirty Sexy Science
Whew! We are not yet three weeks in, and the LaCour case has all but run its course. As new information, at times shocking, at others infuriating, at still others utterly fascinating, has surfaced almost daily, the complete picture of the scope of LaCour’s fraud has become clearer and clearer, and we now have what we need to conclude with a great degree of certainty that LaCour committed research fraud, lying about methods, fabricating results, and maintaining the charade for years. Only the conclusions of the internal investigations by the political science departments at UCLA and Princeton are left to officially seal his fate. Once that is done, this will surely go down as one of the strangest and most impressive academic cons in history.
With the verdict all but certain, all that is left for us to do now is to draw our conclusions. Of course, this is no small task, as much is at stake for the many different people and institutions implicated in all of this: LaCour’s co-author and his academic adviser, his department, the hiring process in Princeton’s political science department, political science as a discipline, the peer review process, replication standards, science journalism, and social science in general. There will surely be many reckonings, and since most of them will be within academia, filled as it is with opinionated people, there will be no shortage of conclusions drawn in the coming months. Here is mine.
First, for those of you who have not been keeping track, the weekend saw two new developments in the case. Late Friday night LaCour released his 23-page, rambling response to the charges against the research presented in his Science paper.[1] Recall that the major charges were these: (1) he made up his funding sources, (2) the survey company he claimed to use had no record of ever working with him, (3) the baseline data in his paper (that is, the initial opinions of his survey respondents) was actually taken from another survey, and (4) the follow-up survey data was entirely fabricated. In his response he admits to (1) and presents three arguments against (3) and (4): first, he provides screenshots purportedly showing when he loaded his survey into Qualtrics, a commonly used survey software package; second, he argues that the comparison of his results with the previous survey was flawed; and third, he claims that his results have been replicated by another researcher. He does not address (2), though in a New York Times interview he claims that he actually used a different, unnamed survey company. The rest of his 23 pages is devoted to impugning the honor and ethics of his critics.
By now it is all but universally agreed that his response missed its mark entirely. His evidence that the survey was real was most likely fabricated, and the statistical arguments he uses to show that the comparison of his baseline data with that of another survey was flawed may actually provide further evidence that the two datasets are identical. What’s more, the researcher whom LaCour claims replicated his results has said that he merely ran an extension of that work that did not involve a replication. Unless LaCour miraculously produces clear evidence that he actually ran his survey and that his data is original, his goose is cooked.
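For the curious, here is what one version of that kind of forensic argument can look like in practice. To be clear, this is a minimal, hypothetical sketch in Python with simulated data, not the actual analysis run by LaCour’s critics; it simply illustrates how a dataset generated from an older one, rather than collected independently, can give itself away:

```python
# Hypothetical illustration: detecting a dataset that is an older dataset
# plus noise, rather than an independent sample. All data here is simulated.
import numpy as np

rng = np.random.default_rng(42)
n = 5_000  # illustrative sample size

# Pretend this is an older, publicly available survey
# (say, 0-100 feeling-thermometer ratings).
older_survey = rng.normal(60, 20, n).clip(0, 100)

# A genuinely independent survey of the same population: same distribution,
# but different respondents.
independent = rng.normal(60, 20, n).clip(0, 100)

# A "copied" dataset: the older survey's responses with a bit of noise added.
copied = (older_survey + rng.normal(0, 2, n)).clip(0, 100)

for name, data in [("independent sample", independent), ("copied sample", copied)]:
    # Means and variances look similar either way, but record-by-record
    # correlation with the older survey should be near zero for independent
    # respondents and near one for derived data.
    r = np.corrcoef(older_survey, data)[0, 1]
    print(f"{name}: mean = {data.mean():.1f}, "
          f"correlation with older survey = {r:.3f}")
```

Independent samples from the same population will match in their summary statistics, but only a dataset derived from the original tracks it record by record.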
The second development is contained in a series of tweets from a University of Michigan graduate student who collaborated with LaCour on a pilot version of the project that ultimately resulted in the Science paper:
So, about the Umich Qualtrics link in the LaCour report. That’s to my account. I’ve taken down the survey. ML and I were collaborating in April/May 2013. We really sent out mail offering an iPad. UCLA IRB approved it. We got some real data! 38 whole responses! So clearly that wasn’t going to work. ML came up with the idea for the uSamp panel. But once he allegedly got it running for wave 1 he stopped returning my calls and emails and kicked me off the project. Now we know why! His timeline misrepresents the pilot.
In other words, as soon as he realized he wasn’t going to be able to get the number of responses he needed, he dropped his collaborator so that he could make up the responses himself without anyone looking.
So, now that we can be fairly certain where this is going to end up, let us consider where it began. Up until a few weeks ago, Michael LaCour was considered a rising star in political science. His name was well-known among academics, field workers, and activists, and even among the general public, all because he published a paper on gay marriage, one of the hottest topics in American and, as the recent referendum in Ireland shows, global politics. However, this wasn’t his only high-profile research project. He was working on, and giving academic talks about, new research he had supposedly conducted on abortion and persuasion, building on his work on gay marriage, and in the other paper in which he is now accused of fabricating data he presents studies on media bias, another frequently discussed topic in American politics. It is abundantly clear, then, that his research choices were guided at least as much by how hot the topics were outside of academic political science as by theoretical and empirical questions within the discipline.
And how well this strategy was working for him! Until his con was revealed, he was a few weeks from completing a PhD at a prestigious university while working under a well-known political scientist and consultant, he had published with one of the most well-respected senior political scientists in the world, and he had a tenure-track position at Princeton waiting for him later this summer. At a time when most social science PhDs struggle to find work in the field, LaCour’s career was being fast-tracked. The sky was the limit.
What makes his level of career achievement even more impressive is that, aside from the now-retracted Science paper, LaCour’s curriculum vitae was pretty unimpressive, at least in terms of published research. The other projects on which he was the primary researcher — the aforementioned abortion and media bias studies — were either under review or had not yet been submitted for publication. His only other publication was on research methods, and did not present original empirical research. It is not clear that he was working on any other significant research projects (of the two papers he listed as under review, one was also on methods, not original research).
In disciplines built around empirical research, as political science is, departments looking to hire junior faculty are primarily concerned with whether graduate-student or postdoctoral candidates can demonstrate the existence of a productive research program. That is, candidates must be able to show that their existing research projects are capable of producing a large amount of original research for at least the next several years, if not for an entire career. Precisely what this means will differ from discipline to discipline, but it will usually mean a certain number of publications (in cognitive psychology the magic number is about five in the pipeline, that is, either published or in press). While I do not know political science’s magic number, it is unlikely that many graduate students with one research publication, one methods publication, and one research paper in press are hired by political science departments at schools like Princeton. LaCour simply did not have a CV that suggested a long-term productive research program.
Why, then, did a prestigious school like Princeton hire him? Only those on the hiring committee can say for sure, but it certainly seems that the nature of the topics he studied played a role (note that his research was on persuasion; the gay marriage issue was a methodological choice!). That is, though again we cannot be sure, it appears that the fact that he was studying such attention-grabbing topics as gay marriage, abortion, and media bias got him hired at Princeton.
And here, perhaps, we find the true lesson of the LaCour case. Within the social and behavioral sciences, attention-grabbing topics and findings like LaCour’s are generally referred to as “sexy,” and while sexy research is certainly not new to these fields, over the last several years it seems that more and more researchers in certain disciplines — political science, social psychology, cognitive and social neuroscience, to name a few — have focused their efforts on sexy research. It is not difficult to see why: as more and more people consume science journalism and science books, sexy research has increasingly led to fame and prestige for researchers who would otherwise have toiled in relative obscurity for their entire careers. And increasingly, that fame and prestige lead to better academic appointments, book deals, and a fair amount of money.
And it’s not just individual researchers who are increasingly focused on the sexiness of their research. Academic departments seem increasingly interested in relatively famous researchers as well, even if, as in LaCour’s case, those researchers have done very little actual research. Every time a researcher’s name is mentioned in the press, his or her institution is as well: from July forward, every mention of LaCour’s widely popular gay marriage study would have referred to him as “Princeton political scientist Michael LaCour” or some variant thereof. In other words, famous researchers make for famous departments, so that everyone in a department benefits from one member’s fame.
So far, sexiness seems like a good thing for social science — it gets attention, which means grants, which means more research — but there is one big downside: in the language of another social science, it creates perverse incentives for researchers. In order to build and maintain fame, a researcher has to continually produce sexy results. This creates problems for scientists because, as we well know, most research projects are a bust. That is, of the many, many studies that a researcher will run, most will not yield publishable results. This is particularly true when researchers step out on the sorts of theoretical and methodological limbs that sexiness generally requires. If you want to make a splash, you have to produce something new, something either coolly counterintuitive or shamelessly prejudice-confirming, and since science generally works by the slow accumulation of evidence, sufficiently new, counterintuitive, or prejudice-confirming results require diverging significantly from existing research.
How, then, might a researcher maintain a steady stream of sexy research when the vast majority of his or her projects fail to produce publishable results? LaCour shows us the most extreme possibility: make it all up. For most, however, the methods will be more mundane, though likely more damaging to science in the long run: pick potentially sexy research topics and run as many studies as you can until something comes out. If one were to pick a researcher whose work has consistently been reported in the popular press for years and visit his or her lab, one would undoubtedly find a myriad of potentially sexy research projects underway. One would likely see that as each version of a project fails, new, slightly different versions are run, with the process continuing until the desired results are achieved. As we discussed in a previous post, such a practice can easily produce many false results which, since the failed attempts are never mentioned, easily make their way into the scientific literature.
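To see how quickly this strategy manufactures findings, here is a minimal simulation sketch in Python using numpy and scipy. All of the numbers (respondents per condition, number of attempts, the 0.05 threshold) are illustrative assumptions, not figures from any real lab:

```python
# A minimal sketch of "run slightly different studies until something comes
# out". All numbers are illustrative assumptions, not from any real lab.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_PER_GROUP = 50    # hypothetical respondents per condition
MAX_ATTEMPTS = 20   # versions of the project a lab might try
ALPHA = 0.05        # conventional significance threshold
N_LABS = 5_000      # simulated research programs

published = 0
for _ in range(N_LABS):
    for _ in range(MAX_ATTEMPTS):
        # The true effect is zero: both groups come from the same distribution.
        control = rng.normal(0.0, 1.0, N_PER_GROUP)
        treatment = rng.normal(0.0, 1.0, N_PER_GROUP)
        _, p_value = stats.ttest_ind(control, treatment)
        if p_value < ALPHA:
            published += 1  # the lab stops here and writes up this "finding"
            break

print(f"Labs publishing a false positive: {published / N_LABS:.1%}")
# Expect roughly 1 - 0.95**20, i.e. about 64%, even though no effect exists.
```

Only the one “successful” version is ever written up, which is exactly how the failed attempts disappear from the record.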
This is not how science is supposed to work. Instead of designing studies based on the sexiness of the topics and potential outcomes, scientists are supposed to use existing models and empirical knowledge to develop clear hypotheses, and then test those hypotheses carefully, covering as many alternative explanations as possible over multiple conditions or even multiple studies. While such a process will produce good science, it will not produce sexy results very often. Sure, sometimes sexiness will result, and if you follow science reporting at all you will occasionally notice a new name, or a name mentioned only a few times, attached to a project that, arrived at the right way, produced results that are really interesting to both researchers and lay people. But a career of sexy findings is highly unlikely this way.
In sum, then, this is the lesson I take away from LaCour: the increasing focus on sexiness in social science has come back to bite it in the ass in a very visible way, but the effects are actually deeper and mostly invisible. Granted, LaCour has shown himself to be habitually dishonest in his professional life, admitting to lying about awards, grants, and methods both on his CV and in his published work, so it is likely that he would have committed fraud even if political science were not as enamored with sexy research as it appears to be. However, if sexiness were not given the weight it is within his discipline, he would at the very least have had to fabricate much more research to achieve anything like the level of success he has actually achieved with one fake study. And surely, with all of that extra lying, he would have been caught more easily, perhaps before he did so much damage to other researchers, his department, and his discipline.
More important to me, however, is the fact that LaCour is just the most extreme example of what researchers will do in the name of sexiness. The level of deception he achieved is staggering, not only in its length and scope, but in the time and effort required to build and maintain it. Most researchers who choose to focus their efforts on sexiness take the much simpler, much less dangerous, and much less obviously unethical path of throwing everything at the research wall and seeing what sticks.
This is, I should say again, not just a problem in political science. Psychology is rife with researchers who care only about press releases, and I have no doubt that there are plenty of departments like Princeton’s that have hired people based more on the sexiness of their research than on its quality or future potential. This is a problem that can easily infect any discipline in which research is relatively inexpensive (relative, that is, to building a supercollider) and grant money relatively easy to come by.
And before we condemn the researchers, it must be said that we are all complicit: we are the ones who click on the links to articles about sexy research, and we are the ones who buy the books scientists write about their sexy research. Perhaps ironically, but entirely predictably, the increased attention to and consumption of science produced by the internet has in this way been harmful to science itself. In a sense, Michael LaCour is of our own creation. And we are not just the cause; we are the ones who suffer as well, as more and more bad research replaces good. Science moves more slowly, and we know less about ourselves as a result.
[1] It is important to make this clear, as there is now good evidence that LaCour fabricated the data in another paper as well.