62 thoughts on “Pathological [Updated]”

              1. Aaaaaaaaaah!

                Is capital punishment on the table with puns? Because if so, I hope the prosecutor requests death by lethal interjection.

              2. Puns are considered a form of torture by some people. This explains why my friend the troll is fiendishly good at them.

                I do quite like puns, so long as we aren’t talking Xanth.

              3. I want to lodge a complaint with whoever’s in charge at OT. I put off reading Chris’s post and thread because I wanted to have enough free time to savor the incisive discussion of the issue. Instead, the first 17 or so comments were about puns. 🙁

                Please correct the error.

  1. More seriously (maybe because I can’t think of any puns), how common is this sort of stuff in science research?

    1. @saul-degraw – not sure if you saw that io9 link I put on Chris’ earlier post this AM on how easy it is to perpetrate and disseminate bunk; but even if outright fraud is rare, I suspect that there is a lot – a LOT – of bad information being passed through.

      Sometimes because the researchers are being careless (or dishonest); sometimes because the people reporting on it do not understand what the research really means.

      1. A bad actor in a society that has a great deal of trust can get away with… well, maybe not “murder”, but a hell of a lot.

        And since it’s not a toggle but a continuum, it’s easy to slide down the slippery slope from good action to bad action via lazy actions or greedy actions or what have you.

        And since the only people who can police you are people who likely won’t ever see an upside to doing so, it’s a recipe for only the most egregious examples of bad action getting noticed by the rubes.

        1. Oh, there’s TONS of upside in finding that everyone’s wrong! That’ll get you a big paper.

          Where there’s no upside is in documenting the non-consensual research that so much of the “peer-reviewed” and IRBed research rests on.

      2. Glyph,
        I continue to be less concerned about bad information than about bad facts that people draw conclusions from.
        http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/
        http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

        Out of 53 landmark cancer studies, they could replicate only six. THIS is your life we’re talking about.

        The only well-replicated stroke therapy is giving people aspirin early. All the others haven’t been well verified.

    2. I don’t think anyone really has any idea. I mean, we can look at how often fraud is discovered — maybe one high-profile case a year, or thereabouts, and a few other smaller ones, plus likely more that never see the light of day because a grad student or post doc or whoever never publishes the fabricated data — but there are undoubtedly cases that go undiscovered, some of which will never be discovered because the findings simply aren’t important enough or the literature has already moved on, so no one’s going to go back and look at the numbers really closely. And then there is the almost certainly more common case of simply incorrect results due to lack of proper statistical controls (see the comments earlier today on the original post), which can be dishonest (researchers know perfectly well what they’re doing when they run a bunch of studies and then only publish the one that “works”), but much of which is just the result of a lack of attention or statistical/methodological knowledge.

      I don’t think I’ve ever seen a case quite like this. I mean, Hauser’s fraud may have been pretty widespread (and very nearly went undiscovered; if it hadn’t been for grad students, it might never have been discovered), and his research was widely reported in the popular press, so it was a high-profile case, but his lying didn’t extend to every aspect of his CV, as this dude’s may have.

    3. At some point in the last few years, Bayer announced that they were only able to reproduce the results in about one-third of the papers published in technical journals about potential new drug molecules. They believe that the practice of fudging the statistics by repeating trials until a positive result appears, then reporting that trial, has become very widespread.
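
      A minimal sketch of why that practice inflates false positives (my illustration, not Bayer’s actual analysis): even when a drug does nothing, rerunning the trial until one “works” eventually produces a nominally significant result by chance alone.

      ```python
      import random
      import statistics

      random.seed(0)

      def one_trial(n=30):
          """Simulate a null effect: treatment and control are drawn from the
          same distribution, then a crude z-score is computed for the difference."""
          treat = [random.gauss(0, 1) for _ in range(n)]
          ctrl = [random.gauss(0, 1) for _ in range(n)]
          diff = statistics.mean(treat) - statistics.mean(ctrl)
          se = statistics.pstdev(treat + ctrl) * (2 / n) ** 0.5
          return diff / se

      # Rerun the trial until one "works" (|z| > 1.96, a nominal p < .05),
      # then report only that one -- the practice described above.
      attempts, z = 0, 0.0
      while abs(z) <= 1.96:
          attempts += 1
          z = one_trial()

      print(f"'Significant' result found on attempt {attempts}, z = {z:.2f}")
      ```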

      1. You know the phenomenon where something used as a signalling device is discovered to be used as a signalling device and, thereafter, becomes useless for signalling?

        I think I’m still surprised when it manifests anew.

        You’d think I’d have learned by now.

      1. Of all the things I was forced to read in high school, this is the only thing I do remember, which is of some wonder to me, since I know head injury has led to lost memories and memories connected in all sorts of odd ways not necessarily reflective of actual events.

        I don’t know that I’ve read any others, and perhaps I shall.

        This whole thing reminds me of those folk who get fancy jobs only to have it revealed they made much of their credentials up, and weren’t as accomplished as they seemed on paper. My favorite was local: a guy named Warren Cook who ran Jackson Lab in Bar Harbor, ME — the place that supplies genetically-tailored mice for research. At the time I had him in my rolodex, he was considered a hot shot and major success by a whole lot of folk around the state and in the bio-tech industry, which consumes so many of those poor little mice. And boom. He’d made the stuff up on his resume years before and never purged it, and out the door he went.

        1. That is, essentially, what LaCour did, and it got him a great job. I imagine he won’t be able to actually start that great job, though, as I assume the offer will be rescinded. Unless, perhaps, he reveals that all of this was his actual research, and we’re all now data points in his dissertation.

          On Montaigne: when I was a sophomore in college, a bright-eyed and bushy-tailed philosophy major, a professor gave me a copy of the essays with a bookmark on the one titled “That to Study Philosophy Is to Learn to Die.” I fell in love immediately.

            1. People HAVE gotten good journals to publish research as “stings.”
              It IS a good way to evaluate how well the peer-review process works.

              Other people have gotten journal articles out of trolling American Conservatives (they got a shiny DDoS attack, which they were able to fend off, which was the point of the article).

        2. There was a big scandal a few years ago when it was revealed that the Director of Admissions at MIT made up all of her credentials. She wasn’t even a college graduate.

    1. [A]fter a tongue has once got the knack of lying, ’tis not to be imagined how impossible it is to reclaim it; whence it comes to pass that we see some, who are otherwise very honest men, so subject and enslaved to this vice.

  2. I’ve added it in an update, but I’ll stick it here too, as it’s a rather big deal: Science has now officially retracted the paper. Statement here. Best part is the final sentence:

    Michael J. LaCour does not agree to this Retraction.

    It’s like he’s a shovel and can’t help but view everything as an opportunity to dig his hole deeper.

    1. Oh dear Lord, the results in the second paper were almost certainly fabricated. My favorite part:

      First and most obviously, LaCour’s figure includes estimates of ideological positioning for a number of radio shows, such as The Radio Factor and The Savage Nation, that do not appear at all in the UCLA closed caption archive on which LaCour’s estimates are supposedly based.

      In other words, he’s not even a very good fabricator.

      I don’t know how this could possibly get worse for the dude, but given its brief history it probably will, and dramatically so, within the next week.

      1. If we are lucky, this will result in attempts to replicate results from the other things that Science has published over the last 3-5 years.

        If they are lucky, the results will be replicated.

        1. I’d rather replications happen organically, as they will tend to for anything big enough to be published in Science. Even when attempts to replicate fail, they will tend to involve requesting the data from the original research, which is usually enough to catch fabrication, though not enough to solve the “run it a bunch of times until it comes out the way we want it” issue.

    2. First Rule of Holes: when you find yourself in one, stop digging.
      Second Rule of Holes: trying to dig out of the hole with heavy machinery & high explosives is ill-advised.

  3. LaCour issued his response last night.

    Cynics called this a “Friday Night News Dump,” but it was pointed out to me that, on Friday nights, social scientists were likely to be home and unoccupied.

        1. Briefly: he admits lying about the funding and the compensation. He says he really used a raffle for Apple products, and links to receipts with some timing issues (some of the receipts come after data collection). He says he deleted the raw data per UCLA policy, but the policy, which he quotes, only requires deleting P.I.I.

          He doesn’t challenge the claim that the survey company never ran, nor could it have run, the surveys. Instead, he provides details and evidence of unrelated conversations with Qualtrics.

          He says another researcher has replicated his finding, but we do not yet know whether this is true.

          And finally, the statistical argument (I should add that he accuses Broockman et al of mistakes, a lack of ethics, and dishonesty throughout, culminating here with the data): he says they “manipulated” the other study’s data to make it look like his, used the wrong field anyway, and screwed up the reliability data.

          The “manipulation” was simply recoding non-responses to 50. The wrong variable was the recoded version, rather than the raw one LaCour says they should have used. If you ask me, if merely recoding non-responses yields data identical to yours, your data is a copy of the other data. He is making a play for the public here, not other researchers, hoping that “manipulated” convinces the same folks who thought “hide the decline” indicated conspiracy.

          The reliability stats are a bit trickier, and will require verification with the data, but to me, by that point the fraud is demonstrated, so the reliability argument he makes is superfluous.

          Added: I forgot to include this: the evidence he provides appears to indicate that he either never got IRB approval or got it after the data was collected, which by itself likely would have cost him his dissertation and the Science publication. However, that is an even bigger failure, as some, perhaps many, in his department should have been aware of this.

          1. I read the thing too, and only – barely! – understood the parts written in English. But two things jumped out at me. He admits to lying about the funding; and he admits he destroyed all the raw data while strenuously claiming that doing so was not only the norm in his field, but required by the institution he works for. The first is just an admission of a straight up lie. The second, tho, seems like a more nuanced fabrication, based on interpreting an institutional policy self-servingly. He’s basically saying: “I have my conclusions, you guys have yours, and the only thing that could settle the matter has been destroyed as per institutional requirements.”

            I have a hard time believing that institutional norms would require destroying data prior to a vetting of conclusions derived from that very data. But that’s not my world. Seems very counterintuitive to my understanding of scientific practices.

            1. They don’t. He’s being purposefully deceptive: the policy, which he quotes, only requires that he delete the personally identifiable information: names, addresses, that sort of thing.
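
              To make the distinction concrete, a hypothetical sketch (field names invented for illustration, not LaCour’s actual schema) of what PII-only deletion looks like: the identifying columns go, the measurements stay.

              ```python
              # Hypothetical survey records; "name" and "address" stand in for PII.
              records = [
                  {"name": "A. Smith", "address": "123 Elm St", "feeling_therm": 72},
                  {"name": "B. Jones", "address": "456 Oak Ave", "feeling_therm": 38},
              ]

              PII_FIELDS = {"name", "address"}

              # Compliant de-identification: strip the PII, keep the raw measures
              # so the analysis can still be verified or replicated later.
              deidentified = [
                  {k: v for k, v in row.items() if k not in PII_FIELDS}
                  for row in records
              ]

              print(deidentified)
              # Deleting the records outright, as LaCour claims he was required
              # to do, goes far beyond what a PII-deletion policy asks for.
              ```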

          2. OMG OMG OMG! I just realized the most hilarious aspect of his reply, and a sure sign that he doesn’t know what he’s doing.

            As I said in the previous comment, he accuses Broockman et al of using the wrong field in the other survey to compare to his. The most damning claim Broockman made was that LaCour’s data was indistinguishable from that of the other survey, so this is supposed to defeat that claim.

            LaCour said they compared a field with recoded responses, when they should have compared the one with the raw responses. However, it’s important to note that the field with unrecoded responses isn’t really raw: it recodes all non-responses to 101, a value over 100 on a measure that runs from 0 to 100. This allows anyone analyzing the data to quickly and easily deal with the non-responses (missing values and non-responses can be handled in a variety of ways).

            The big difference that LaCour points out between his data and the raw field in the other survey? The modal response in the other one is 100. However, this is because it’s including the non-responses, coded 101, as 100 in the histogram. In other words, he’s just shown that his data includes recoded non-responses, and has inadvertently demonstrated that Broockman used the correct variable and that he did just copy the other survey’s data.
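
            A toy reconstruction of that artifact (invented numbers, just to show the mechanism): a 101-coded non-response pushes the mode of a 0–100 thermometer to 100 when the scale is capped for the histogram, while recoding it to the midpoint moves the spurious mode to 50 instead.

            ```python
            import random
            from collections import Counter

            random.seed(1)

            # Hypothetical 0-100 feeling-thermometer data: 900 real answers
            # plus 100 non-responses stored as the sentinel value 101.
            responses = [random.randint(0, 100) for _ in range(900)] + [101] * 100

            # Cap the "raw" field at 100 for the histogram: the sentinel 101s
            # pile onto 100, which falsely becomes the modal response.
            clipped = [min(r, 100) for r in responses]

            # Recode the sentinels to the midpoint instead, as the survey's
            # prepared field does: now the spurious mode sits at 50.
            recoded = [50 if r == 101 else r for r in responses]

            print("mode of capped raw field:", Counter(clipped).most_common(1))
            print("mode of recoded field:  ", Counter(recoded).most_common(1))
            ```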

  4. Chris,

    You’ve done a really good job of explaining this, both in your posts and in your comments. This fiasco is something I would have had only a very dim knowledge of, and it’s interesting to see what’s at play. (It’s also interesting to see how claims of fraud in the sciences compare to those that go down in history, my discipline.)

Comments are closed.