I Can’t Do It Any More

Patrick

Patrick is a geek in his mid-forties with an undergraduate degree in mathematics and a master's degree in Information Systems. Nothing he says here has anything to do with the official position of his employer or any other institution.

103 Responses

  1. Jaybird says:

    There’s a phenomenon that seriously helps when it comes to being one of the scientists working with numbers. It’s not acting the way that someone untrustworthy would act.

    If there was a study, withheld from the public, that did show larger numbers for African-American kids than for white kids?

    It doesn’t matter whether it was done innocently, or because most people out there don’t know how to do a statistical analysis.

    Now, of course, I don’t know that the CDC did, in fact, hide what the article claims it hid.

    But if it did?

    I don’t know how to properly place the blame on the ignorant skeptics instead of on the white coats themselves.

    • Patrick in reply to Jaybird says:

      The CDC explains, on their web site (linked to in the article, amusingly) why the kids who were excluded were excluded (lack of records means you can’t correct for confounding factors).

      “That looks fishy” is the judgment of the ignorant skeptic, not a problem with the white coats themselves; they’re just doing their job.

      The problem with large data sets is that – in order to get any sort of real analysis done – you have to establish parameters for what’s usable data and what isn’t. There are very good reasons to not use potentially spurious data, because it can hose the whole study by introducing variables you can’t control for.

      It wouldn’t surprise me if the description of the data set and why it was chosen is actually *part of the original paper*.

      • Jaybird in reply to Patrick says:

        Given the history of the treatment of any number of minority folks at the hands of the white coats, I suspect that “okay, this is going to be really, really tough, but I can try to simplify this down for the layperson and explain why this isn’t bad even though it looks bad and what the numbers mean even though, at first glance, the numbers look like they’re saying something that they’re not saying” is kind of a cross the science folks have to bear.

        If it makes you feel better, you can probably get away with saying that you only owe taking a deep breath and patiently explaining it again to members of these minority groups, though.

    • Patrick in reply to Jaybird says:

      Put another way, conspiracy theorists can find any reasonably plausible decision in a study design to be Evidence of the Conspiracy.

      • NobAkimoto in reply to Patrick says:

        Almost every single public health study of this sort will have a gigantic appendix talking about JUST the methodology and how the samples were selected, how they were excluded, why some were excluded, etc. etc. ad nauseam. It takes a certain level of know-nothing ignorance to blithely assert that experiment and dataset design is a conscious conspiracy rather than working with incomplete data. When people try to create complete datasets where they control everything to the extent that includes things like living conditions and nutrition and other factors, you usually end up with secret experiments on armed forces members like the Tuskegee experiments instead, which are, of course, much more horrendous for a completely different reason.

      • Patrick in reply to Patrick says:

        When people try to create complete datasets where they control everything to the extent that includes things like living conditions and nutrition and other factors, you usually end up with secret experiments on armed forces members like the Tuskegee experiments instead

        Yeah, that’s the truth.

      • Mad Rocket Scientist in reply to Patrick says:

        Question: How big is the original dataset versus the final/pared down set? If you know?

        I’m just curious how much was not used.

      • Chris in reply to Patrick says:

        They used an existing dataset with 987 children for the study group, but had to exclude 327 cases because they weren’t able to obtain immunization records for them. They then excluded 17 more children because they couldn’t match them to control cases, another 29 cases (14 of whom were in the control group) because their vaccination forms were missing, 5 children (1 in control group) for having incomplete vaccination forms, and 2 children (1 control) with religious/medical exemptions. This left them with a total of 624 in the study group and 1824 in the control group.

        What the idiots Patrick linked seem to believe is that black children were excluded because results were only reported by race for cases with Georgia birth certificates, which made race verifiable. They list a total of 230 black children in the study group and 636 in the control group, with 137 in the study and 384 in the control group having Georgia birth certificates. While they didn’t exclude children without birth certificates from all analyses, they did exclude them from the analysis by race, because several children without birth certificates did not have race information, making any analysis of race that used them less accurate. So they left out about 40% of the black children in both the study and control groups, and about 38% of the white children in the control group and 40% in the study group, from the analysis by race (but not the other demographic analyses).

        Basically, they did precisely what they should do to make sure they didn’t arrive at any incorrect conclusions about race by using bad race data. And some idiots decided this was a conspiracy.

        You can check my numbers here:

        http://www.bowdiges.org/documents/files/Age_of_MMR_exposure_comparison_study.pdf

        The excluded cases are described on page 260, and the number of cases with birth certificates and total, broken down by race, is in Table 2 on page 262.

        Even without a background in research methods and statistics, it wouldn’t have taken more than a quick skim of the methods and results sections, along with that table, to figure out that they weren’t excluding people by race. But I suppose once you get it in your head that people who disagree with you are deeply dishonest and will do anything to make it look like their view is correct, you’ll see anything as a nefarious trick.
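
        For anyone who wants to check the arithmetic behind that “about 40%” figure without opening the PDF, the counts Chris quotes are enough. A quick sketch in Python; the numbers below are taken from his comment, not re-derived from the paper:

```python
# Counts of black children from Chris's comment (Table 2 of the paper):
# (total in group, number with a Georgia birth certificate)
black_children = {
    "study group": (230, 137),
    "control group": (636, 384),
}

for group, (total, with_bc) in black_children.items():
    excluded = 1 - with_bc / total
    print(f"{group}: {excluded:.0%} excluded from the race analysis")

# study group: 40% excluded from the race analysis
# control group: 40% excluded from the race analysis
```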

      • Kim in reply to Patrick says:

        Patrick,
        when the people running the study don’t tell people that they’re actually in a research study, they are not helping. (def. not saying that’s what happened here, but it’s somewhat common in industrial research).

      • Kim in reply to Patrick says:

        yeah, I think Chris’ explanation is clear and cogent, and easily understandable with a high school background in statistics (or gambling. did you gamble in high school?)

      • Jaybird in reply to Patrick says:

        Chris, perhaps my problem is that I can’t help but see parallels between this conversation and the race one.

        I don’t know that “people who disagree with you are deeply dishonest and will do anything to make it look like their view is correct” given that “science” has a lot of skeletons in its closet with more than a few white coats who were, in fact, deeply dishonest and would have done anything to make it look like their view was correct.

        “But I’m not like those people!”, you may find yourself saying. Hell, you know what? I agree that you’re not.

        But, at the end of the day, you’ve still got people who are skeptical of you because of those people and you’ve got to pick between atoning for the sins of those people despite your relative sinlessness and, well, not atoning for them.

      • Chris in reply to Patrick says:

        Jay, there are parallels to the race discussion: with science, it’s important to be open to alternatives, ideas that you disagree with, findings that will change the way you look at the world. In some ways, the progress of science is from one mistake to the next, and too much certainty in the current consensus is a barrier to progress.

        However, what we have here is not openness to an alternative, but an absolute certainty that the alternative is true. And as absolute certainty usually does, it causes sloppiness in reasoning and methods, resulting in such a stupid mistake, one so easily avoided by simply reading the paper, that one can do little more than harm one’s cause with anyone who doesn’t already believe it.

      • Jaybird in reply to Patrick says:

        So people who feel that they’ve been harmed by the white coats should just stop living in the past and freaking get over it already?

      • Saul Degraw in reply to Patrick says:

        @jaybird

        There is a big leap from acknowledging that medical malpractice exists and doctors screw up sometimes to being an anti-vaxxer.

        Are you defending the anti-vaxxers?

      • Jaybird in reply to Patrick says:

        ARE YOU NOW OR HAVE YOU EVER BEEN

        Dude, I am defending the skeptics who retain their skepticism in the face of white coats who tell them to screw themselves if they aren’t willing to trust science, instead of taking a deep breath and explaining it again.

        The obligation that the white coats have is to take a deep breath and explain it again.

        Why? Because of the sins of their forefathers, which are not only legion but in many cases within living memory.

      • Patrick in reply to Patrick says:

        Dude, I am defending the skeptics who retain their skepticism in the face of white coats who tell them to screw themselves if they aren’t willing to trust science, instead of taking a deep breath and explaining it again.

        The obligation that the white coats have is to take a deep breath and explain it again.

        That’s fair enough, Jaybird. But…

        At what point is the obligation on the other folks to actually learn what they’re being taught? Do they not have any obligation in this exchange?

      • Chris in reply to Patrick says:

        So people who feel that they’ve been harmed by the white coats should just stop living in the past and freaking get over it already?

        Dude, I just said a healthy skepticism is a good thing.

      • Mad Rocket Scientist in reply to Patrick says:

        @chris

        Thanks. That all sounds like perfectly reasonable practice. As long as the study is open & honest about why X was excluded from Y, I have no trouble with that. If the data set is significantly reduced by the exclusions, I may question the conclusions reached, or at least question the certainty applied to said conclusions (if warranted; I haven’t read the study, so this is all hypothetical).

        Re: @jaybird & skeptics

        In my Linky Friday a few weeks back, I linked to an interview with a rather significant figure in the study of genetics. One of the things he gripes about is the scientific publishing system/industry. A choice quote from the interviewer:

        Because publications have become a proxy for research quality, publications in high impact factor journals are the metric used by grant and promotion committees to assess individual researchers. The problem is that impact factor, which is based on the number of times papers are cited, does not necessarily correlate with good science. To maximize impact factor, journal editors seek out sensational papers, which boldly challenge norms or explore trendy topics, and ignore less spectacular, but equally important things like replication studies or negative results. As a consequence, academics are incentivised to produce research that caters to these demands.

        This is, IMHO, doing a great deal of damage to trust in science.

      • Kim in reply to Patrick says:

        MRS,
        just so you understand, in practice that means studies get “replicated” using different means, over and over again.

        Eyetracking has had 40+ years of research attached to it; we pretty much understand pupil dilation as a proxy for mental activity and emotional responses. But that’s a ton of research looking at many different aspects.

      • Mad Rocket Scientist in reply to Patrick says:

        @kim

        The same research can happen even when the failures are published. All that publishing failures does is keep people from wasting time & resources making the same mistakes.

      • Patrick in reply to Patrick says:

        All that publishing failures does is keep people from wasting time & resources making the same mistakes.

        That’s not all it does. It depends upon the field of inquiry, really, but the impact of negative findings varies.

      • Kim in reply to Patrick says:

        MRS,
        maybe not drugs (which do gazillions of trials of chemicals that don’t work), but other researchers always publish failures. In fact, if you find solid contradictions of prevailing theories, it’s a good way to get famous. It’s much more often boring results that don’t get published, because you can collect data on 30 variables, get a cool result on 3, substantiate someone else’s study (replicate it, even) on another 5, and let the rest of the data just sit. Then you write one paper on the cool result (the other is your backup, because if you spend money, you write papers).

      • Mad Rocket Scientist in reply to Patrick says:

        @kim

        Go back & read the quote…

      • Mad Rocket Scientist in reply to Patrick says:

        @patrick

        OK, at the very least, such publishing would allow for people to not waste effort & resources repeating mistakes. Or at least it would allow them to find novel ways to repeat those mistakes.

    • James K in reply to Jaybird says:

      @jaybird

      Many people think the purpose of quantitative analysis is to present all the facts. This is false. Normal people can’t deal with all the facts because human beings are terrible at statistics – a person’s natural inclinations will lead them to make horrible errors of statistical reasoning. A lot of what a quant does is work out what conclusions can be safely drawn from the data and then present those conclusions and nothing else. Analysis is at least as much about what to conceal as what to reveal, and that’s because it’s trivially easy to deceive someone by telling them nothing but facts.

      The truth is not always apparent to the untrained eye, and the duty of an expert is to focus people’s attention on the valid points and not let them distract themselves with trivia. That may sound patronising, but this is a consequence of specialisation. I don’t think I know more about medicine than Doc Saunders, or truck driving than Road Scholar, or US law than Burt or Saul, and I would have to be mad with hubris to think I did. The job of an expert advisor is to help you not make an idiot of yourself, but for that to work you actually have to recognise that they really do know more than you about this one thing.

      • Road Scholar in reply to James K says:

        Indeed. Take a large data set with lots of independent variables, something not unlike the one used for this study, pick twenty variables at random, and run an ANOVA. You’re practically guaranteed to get a “hit”, and it’s almost certainly spurious.

        Statistics is an incredibly powerful tool, but you really have to know what you’re doing.
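
        Road Scholar’s point is easy to demonstrate. Here is a minimal simulation, assuming scipy is available: twenty variables of pure noise are tested against an arbitrary three-way grouping, so any “significant” result is a false positive by construction. The group count and sample sizes are arbitrary.

```python
# Multiple-comparisons demo: 20 noise variables vs. a random grouping.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_trials, n_obs, n_vars = 1000, 300, 20
trials_with_hit = 0

for _ in range(n_trials):
    groups = rng.integers(0, 3, size=n_obs)      # 3 arbitrary groups
    data = rng.normal(size=(n_obs, n_vars))      # no real effect anywhere
    pvals = [f_oneway(*(data[groups == g, j] for g in range(3))).pvalue
             for j in range(n_vars)]
    trials_with_hit += any(p < 0.05 for p in pvals)

print(f"runs with at least one 'hit': {trials_with_hit / n_trials:.0%}")
# roughly 64%, i.e. about 1 - 0.95**20, despite there being nothing to find
```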

      • James K in reply to James K says:

        @road-scholar

        Indeed, as Patrick notes, the list of pitfalls is so long it takes years of study to get a grip on it.

      • Jaybird in reply to James K says:

        Dude, I absolutely agree. But, every now and again, a little factoid will bubble up and make the untrained person say “Wait, they did *WHAT*?” and it is the job of the trained person to say “okay, here’s what happened, I understand why that looks bad, but here’s why it isn’t bad despite what it looks like. And if you need me to explain it again, I can explain it again.”

        And if you think that that’s unfair, I suppose that I’ll shrug and agree that it is.

        But I’ll also point out that the Jenny McCarthy-types will never, ever tire of pointing to their own child and telling the story, over and over, of how their precious baby was perfectly normal until they got him vaccinated and, overnight, he changed. Never, ever.

      • Patrick in reply to James K says:

        But, every now and again, a little factoid will bubble up and make the untrained person say “Wait, they did *WHAT*?” and it is the job of the trained person to say “okay, here’s what happened, I understand why that looks bad, but here’s why it isn’t bad despite what it looks like. And if you need me to explain it again, I can explain it again.”

        I’d be fine if that’s what was expected, actually.

        The problem is that the little factoids aren’t “bubbling up”. The little factoids are being actively sought out by folks who very much have a specific agenda of convincing folks that these little factoids aren’t factoids but are in fact The Story.

        And there is a cultural movement among some “untrained persons” to not listen to the actual explanations put forth by the trained persons, but instead assume that some other untrained person is the Real Expert.

        You want me to teach people, I’m okay with that. You want me to teach people who aggressively don’t want to learn, but instead want to preach their ignorance to the world as truth and simultaneously tell everyone how I’m Josef Mengele, well, I can get how researchers can throw up their hands and say, “fuck that noise, you don’t pay me enough for that shit.”

      • Jaybird in reply to James K says:

        I can get how researchers can throw up their hands and say, “fuck that noise, you don’t pay me enough for that shit.”

        Sure.

        So can I.

        I’m not saying that I don’t understand that response. I 100% do.

        It’s that if you aren’t willing to take a deep breath and patiently explain what’s going on after acknowledging that Mengele screwed everything up for everybody, the Gods of the Copybook Headings will be more than happy and willing to give their own explanation.

        And the question becomes whether you’d rather you give it or them give it.

        For what it’s worth, I’m 100% understanding of saying “nah, let the Gods explain it.”

        But it also seems to me that the white coats have not earned the deference they seem to expect as their due. Not by a damn sight.

      • Patrick in reply to James K says:

        But it also seems to me that the white coats have not earned the deference they seem to expect as their due. Not by a damn sight.

        Jaybird, this is where I have to shake my head at the angle you’re approaching the whole gig from. There’s no White Coat Cabal giving you your White Coat and when you get it you get to sneer at anyone who dares argue against you. Tom didn’t get this either. He had a very weird idea of how things work in the Ivory Tower.

        See, I know enough egomaniacs here at my current place of employment to know that yes, a lot of folks expect deference they aren’t due. It’s not that this doesn’t happen, on an interpersonal level.

        But institutionally, you don’t get a paper into a journal if you don’t explain how you did your work. It’s kind of a necessary step.

        The thing is, it’s not “expecting deference” to “expect people to read the study that they’re ostensibly critiquing”. There’s no egomaniacal bit there. “How dare they challenge me!” would be expecting deference. “Why are these people not actually reading the thing I wrote!?” isn’t expecting deference.

        Or is it?

      • Kim in reply to James K says:

        Patrick,
        by “explain” you mean reconfigure into the mathematical terminology that you don’t fully comprehend… [letting discrete mathematicians who do addition by using two’s complement write journal articles using Set Theory might not have been the wisest move.]

        //yes, I’m joking about someone I know.

      • Jaybird in reply to James K says:

        There’s no White Coat Cabal giving you your White Coat and when you get it you get to sneer at anyone who dares argue against you.

        Of course there isn’t.

        But given the history of, for example, medical science in this country, the assumption *MUST* be that the lay people will need this explained to them, and explained well, and more than once, and a question like “What? Do you really believe that there is a conspiracy out there?” is a question that is not taking Stuff That Happened into account.

        “Why are these people not actually reading the thing I wrote!?” isn’t expecting deference.

        Well, if the answer is “it looks like you moved a group of African-American children from this group to that group”, then you’ve got a choice. Will you take a deep breath and explain it again, or will you accuse people who disagree with you of bullshit artistry?

        If it’s the latter, I hope you can see why some might see an expectation of deference there.

      • NobAkimoto in reply to James K says:

        That’s just it, though. In OECD countries at least, there’s a whole collection of bodies that make sure the white coats ARE behaving. To even get your study approved you have to go through a whole slew of grant processes and reviews. While there are times when Institutional Review Boards (IRBs) fail to stop unethical studies or reports, for the most part they do a great job of stopping people from even considering experimental designs that are far from ethical.

        Further, the very nature of academia in the OECD is basically built on one-upmanship. While this isn’t necessarily a good thing (and it certainly can be very, very frustrating to people dealing with the system), it does provide several levels of institutional scrutiny by people who are trained in the techniques being used.

        The point isn’t so much “shut up and trust us we’re white coats”, it’s that white coats are probably the most likely group of people to create real, substantive critiques of what’s being done. The fact that all academic journals require you to submit your methodology so it can be reviewed and reproduced and critiqued is important. There are very, very few examples of studies where the methodology is so flawed that a layman with no statistical knowledge can find flaws where professional editors and critics can’t.

        It’s not that skepticism is bad, but there’s also a level of wisdom in acknowledging that you may not be seeing all there is to see. The latter is highly absent in the people who present most of the critiques of “modern medicine” or whatever other conspiracy they allege is happening.

      • Michael Cain in reply to James K says:

        Behaving in part, Nob, in part. There are the recent studies of replication of preliminary drug results. Bayer couldn’t replicate almost two-thirds of the published results. Speculation is that researchers are playing statistical gamesmanship, conducting trials multiple times until they get a data set that shows efficacy. Then they write up the paper based on, e.g., the tenth data set, but don’t tell you that there were nine previous ones that showed no effect.
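
        That brand of gamesmanship is easy to quantify. A minimal sketch, assuming a drug with no effect at all and a lab willing to quietly re-run the trial up to ten times; the sample sizes are arbitrary:

```python
# Run-until-significant: how often a useless drug gets "published".
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_labs, published = 10_000, 0

for _ in range(n_labs):
    for attempt in range(10):             # up to ten tries per lab
        drug = rng.normal(0, 1, 50)       # the drug does nothing:
        placebo = rng.normal(0, 1, 50)    # both arms are identical noise
        if ttest_ind(drug, placebo).pvalue < 0.05:
            published += 1                # first "hit" goes to press
            break

print(f"labs that 'found' an effect: {published / n_labs:.0%}")
# about 40% (1 - 0.95**10), for a drug with no effect whatsoever
```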

  2. Tod Kelly says:

    “I know, that sounds like an Appeal to Authority.”

    One of the things I’ve noticed in the quasi-populist Land o’ the Internet and social media is that people often confuse the Appeal to Authority with Actual Expert Knowledge, and thus assume that their limited knowledge base in technical matters carries equal weight with the knowledge base of those who have actually put in the time to study and learn about those technical matters.

    • Dave in reply to Tod Kelly says:

      The Internet is sort of a grand proof of the Dunning-Kruger effect. Every person knows a handful of terms like cognitive bias, appeal to authority, etc., and just bandies them about at any real or perceived opponent.

      To quote Abraham Lincoln: “Never believe everything you read on the Internet”

    • Jim Heffman in reply to Tod Kelly says:

      I remember when “we know more than you do, so you’ll just have to trust us when we say that this is the appropriate course of action” was obviously fallacious bullshit.

      And it got us into the Iraq War, so maybe it kind of was?

      • Patrick in reply to Jim Heffman says:

        You’re really just bad at understanding what fallacies are and what experts are, Jim.

        I don’t know if you’re just doing it as a schtick to try and show how clever you are in some sort of semantic ninjitsu, but it doesn’t work.

      • Jim Heffman in reply to Jim Heffman says:

        I know you want everyone to just shut up and do what we’re told by the people who know best, but most of human history has been about cleaning up the mess created when we shut up and did what we were told by the people who knew best.

        It’s actually rather insulting that you consider people to be too stupid, on average, to understand things.

      • Patrick in reply to Jim Heffman says:

        I know you want everyone to just shut up and do what we’re told by the people who know best

        Jim, the next time you try and tell me what I think, if you get it this totally wrong, I’m going to eject you.

        So either (a) get better telepathy; (b) stop assuming you understand a thing about me; (c) start adding uncertainty to your declarations of what a bastard I am and have better manners; or (d) get ready to have your ass handed to you.

        Because I’m seriously done with your snide fucking bullshit.

        but most of human history has been about cleaning up the mess created when we shut up and did what we were told by the people who knew best.

        That’s a very interesting view of human history. In my estimation, the biggest messes were created by people who had *power*, not *knowledge*, but hey, to each their own.

        It’s actually rather insulting that you consider people to be too stupid, on average, to understand things.

        Remember when I said you don’t understand me well enough to tell me what I think? Yeah, that again.

  3. NobAkimoto says:

    This is more or less how I feel whenever a conversation comes up about minority groups’ grievances relative to the rest of (predominantly white) society.

  4. Chris says:

    Clearly they were hiding the decline.

  5. Saul Degraw says:

    I am going to agree with Tod. A lot of people think that Appeals to Authority and Actual Expert Knowledge are synonymous but they are not. Now of course I don’t have any solutions for getting people to make the distinction.

    The Internet is sort of a grand proof of the Dunning-Kruger effect. Every person knows a handful of terms like cognitive bias, appeal to authority, etc., and just bandies them about at any real or perceived opponent.

    Now the question is whether there were always a large number of cranks and the Internet merely gives them an easy and cheap microphone. A Hyde Park street corner, amplified.

    • Jim Heffman in reply to Saul Degraw says:

      “I’m right because I’m an expert” is still an appeal to authority. Even if you are an expert. Even if you are right.

      • Murali in reply to Jim Heffman says:

        but it is a legitimate appeal. An expert, by definition, is more likely to be right about the subject of his expertise than a non-expert. It follows that we should afford his opinions more epistemic weight than our own when they differ.

      • Jim Heffman in reply to Jim Heffman says:

        “more likely” is not “automatically is”.

        I’ll agree that an expert in a subject has a deeper understanding of that subject. What should happen next is that the expert uses that deeper understanding to explain their reasoning to the layman.

        When an expert says “I have a special understanding of this subject that would take years to achieve and it’s too much to expect of me to give you that understanding in this short conversation”, that sounds to me like a priest saying that he has a special relationship with God that takes years to achieve and it’s too much to expect of him to convey that understanding to me in a short conversation about whether gay marriage ought to be allowed.

      • Kim in reply to Jim Heffman says:

        Jim,
        When the expert starts drawing mustaches on buttocks… then you know the algorithm needs tweaking. Again.

      • Patrick in reply to Jim Heffman says:

        “I’m right because I’m an expert” is still an appeal to authority. Even if you are an expert. Even if you are right.

        No, it isn’t.

        Or rather, it’s an “appeal to authority”, but it’s not “Appeal to Authority”. When you use the term “Appeal to Authority”, you are implicitly saying that the argument is fallacious because the authority isn’t a true authority.

        That’s what the *fallacy* is.

      • Jim Heffman in reply to Jim Heffman says:

        “When you use the term “Appeal to Authority”, you are implicitly saying that the argument is fallacious because the authority isn’t a true authority.”

        “I’m right because I’m an expert” is still an appeal to authority. Even if you are an expert. Even if you are right.

        I’m not denying the existence of expertise, I’m saying that it doesn’t confer a special Rightness Field that lets you say things without having to explain them.

      • Patrick in reply to Jim Heffman says:

        I’m not denying the existence of expertise, I’m saying that it doesn’t confer a special Rightness Field that lets you say things without having to explain them.

        Have I ever claimed that?

        Again, the explanation for why a data set is chosen is typically put right there in the original goddamn paper, as pointed out by Nob and Chris. It’s called “standard practice” in science. If you read more than one research paper on any topic you’ll find a section dedicated to methodology.

        PEOPLE IN SCIENCE EXPLAIN THESE THINGS. It’s part of the gig.

        You saying, “well, you need to explain things” is a baseless critique. It’s like saying, “Well, you need to do that thing that you’re already doing”.

      • Kim in reply to Jim Heffman says:

        Patrick,
        it would be one thing if this was PCA, or something massively multivariate. There the papers descend into jargon (partially because not all psychologists understand the math, even if they do get the science right)…

        That I could see — “here is what we’re actually doing, in layman’s terms”. What Chris writes above, though, is plain English.

      • Jim Heffman in reply to Jim Heffman says:

        “Have I ever claimed that?”

        In the OP. “You can gain the experience. It’s not easy. It requires about six years of serious postgraduate work. I know. That sucks. It would be so much easier if you could just look up how to do this on the Internet. You can’t.”

        In other words, “I don’t have to explain this. You’re incapable of understanding the explanation unless you’ve gone to college for ten years, in a specific course of study, and so I’m not even going to bother to try. Just trust me when I say that I’m an expert in this and that experts aren’t wrong.”

      • Patrick in reply to Jim Heffman says:

        Well, at least your reading comprehension is also terrible.

      • Glyph in reply to Jim Heffman says:

        Kim, even for you, this was an odd comment.

        I’ve seen things you people wouldn’t believe… mustaches on buttocks… cookies on dowels… time to die.

      • Chris in reply to Jim Heffman says:

        Mustaches on Buttocks is going to be the name of my debut album.

      • Kim in reply to Jim Heffman says:

        Glyph,
        Oh, you haven’t heard the half of it (that was facial recognition software gone wrong.).
        Remember, the truth is always stranger than fiction.
        [Particularly when you’ve got investigative reporters on your heels.
        You can lie all you like, but you have to make it a plausible lie,
        you know, with evidence.]

      • Chris in reply to Jim Heffman says:

        that was facial recognition software gone wrong

        And this will be the basis for the plot of my first screenplay.

  6. Mad Rocket Scientist says:

    I feel your pain, Patrick, I really do.

  7. Murali says:

    Can we all finally say that Mill was wrong when he said that truth will win out in the marketplace of ideas? There are other good reasons for protecting freedom of expression and religious practice. It is high time we retired the bad arguments.

    • James Hanley in reply to Murali says:

      But maybe the bad ones will do the best job of selling folks on the value of free speech?

      • Murali in reply to James Hanley says:

        While I fully admit that that could be a possibility, there is a large part of me that would rather people believed the wrong thing for good reasons than the right thing for bad ones.

      • James Hanley in reply to James Hanley says:

        I’ve never been able to make up my mind on that issue, @murali.

      • Kim in reply to James Hanley says:

        Murali,
        believing the right things for bad reasons is probably better. At least if you get human nature right, you’re not building your entire belief system on sand. Greed may not be good, but it’s human, dammit. So are plenty of other things we like to pretend don’t exist (ask a perfume manufacturer if you don’t believe me). Humans are weird, mixed-up things that don’t work half as well as you’d like to think.

    • j r in reply to Murali says:

      In the long run, I still believe that truth wins out more often than not.

      • Murali in reply to j r says:

        except that applying the insights from public choice theory to deliberation in the public square shows that truth is not likely to win out if it is complicated.

      • j r in reply to j r says:

        Sounds like a very good reason to remove a bunch of those deliberations from the public square and put them in the hands of individuals making decisions for themselves. People become much better at making decisions when they are exposed to the consequences, and putting those deliberations back in the private square limits everyone else’s exposure to bad decisions.

      • Murali in reply to j r says:

        Sounds like a very good reason to remove a bunch of those deliberations from the public square and put them in the hands of individuals making decisions for themselves

        I agree with you, mostly, except that some of these are macro policy decisions, which means removing them from the public square and putting them in the hands of intelligent technocrats.

        Either way, removing some set of questions from the public square suffers from a bootstrapping problem. In a democratic society in which most people incorrectly believe X about P after publicly deliberating about it, it is impossible to convince them that ~X. It is therefore also impossible to convince them that P shouldn’t be subject to public deliberation. It is therefore impossible to remove P from public deliberation once it is already there.

    • Jim Heffman in reply to Murali says:

      The issue is that some people figure they already know the truth, and everything else either confirms that truth or is a malicious falsehood.

    • Kim in reply to Murali says:

      Truth rarely makes people rich. And the truth winning just means that most people aren’t falling for the lies. The liars are still winning in the free market of Capitalism! (where there’s always someone stupid enough to fall for it).

    • James K in reply to Murali says:

      @murali

      Yeah, I don’t think Mill was right. Markets are aggregators of preference, not of knowledge (with a couple of important exceptions). Unless there is a preference for truth over falsehood (not a preference for the feeling of being right, but for actually being right) then a market will happily generate falsehoods for public consumption. Mill’s premise fails because he assumed that people want to know the truth, when generally they don’t.

      Now, I don’t know what we could do about that, but I have no illusions that people are going to start believing things just because they’re true.

  8. North says:

    If only it were just their own children they were maiming.

  9. Jim Heffman says:

    So basically this is the CDC’s “hide the decline” moment.

  10. Michael Cain says:

    Reading Chris’ comment above, but not going and looking at the actual reference, it certainly sounds like there’s a possibility of this being one of those “if you torture the data long enough, it will tell you anything you want” situations. That’s not an attack on the researchers — oftentimes the data you can get is not the data you would like, but it’s all that you have. In such a situation, Patrick’s “six years of serious postgraduate work” doesn’t seem too far off the mark, since we’re talking both statistical methods as well as a deep understanding of exactly how the data sets are unrepresentative of the general population. In the medical field, is “624 in the study group and 1824 in the control group” really a large data set? Particularly when looking for something as subtle as autism is turning out to be?

    I am reminded of the heated arguments about polling data leading up to the 2012 elections. On the one side there were people like Nate Silver and Sam Wang who said, effectively, “With some known exceptions, the pollsters are well aware of the shortcomings of their sampling techniques, and are the ones who have spent years figuring out how to make accurate corrections.” On the other side were mostly conservative pundits who said, effectively, “The pollsters have no idea what they’re doing, but I know how to correct the data.” Unsurprisingly, the professional pollsters with a deep knowledge of their own methodologies were right. Rasmussen was a notable exception, releasing polls that were systematically wrong in one direction.

    For the record, I’m enough of a statistician to be dangerous. But at least I know that I’m dangerous, and when I’ve gone beyond my expertise.

    • Chris in reply to Michael Cain says:

      I assume medical researchers do power analyses, particularly since they have more data available for such an analysis than just about any field I can think of. I mean, we can have the incidence of autism by week if we want it, and we can have an idea of what effect sizes we’d take seriously in determining whether there’s a difference in the incidence between two groups, so it’s pretty easy to figure out a good sample size.
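
      For the curious, the kind of power analysis Chris assumes can be sketched in a few lines with statsmodels. The incidence figures below are invented purely for illustration; only the roughly 3:1 control-to-case ratio echoes the study under discussion.

```python
# Sample-size sketch for comparing incidence between two groups.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.01      # hypothetical incidence in the unexposed group
elevated = 0.02      # hypothetical doubled incidence we want to detect
h = proportion_effectsize(elevated, baseline)   # Cohen's h

cases = NormalIndPower().solve_power(
    effect_size=h,
    alpha=0.05,      # conventional false-positive rate
    power=0.8,       # 80% chance of detecting a real doubling
    ratio=3,         # about three controls per case
)
print(f"cases needed: {cases:.0f} (controls: {3 * cases:.0f})")
```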

      • Michael Cain in reply to Chris says:

        I also assume that they’re competent at a high level. The phrase “used an existing dataset with 987 children for the study group” is a minor warning flag, since the implication is that they didn’t have control over all of the experimental design. Having to toss 40% of the observations for a non-random reason always creates the possibility of statistical bias. All of which can be controlled for, and all of which reduce the power and sensitivity of the result.

        Medical statistics have been getting a bad rap lately, so it’s not surprising that people’s trust in them is down. Recently there’s been the stuff in the popular science press about how many peer-reviewed published preliminary drug results are not reproducible, due to researchers conducting the test multiple times and publishing the tenth data set that supports what they want to show, but not showing the nine sets that don’t. My working assumption is not that there is a mountain of autism data sets available and that these — or any other — researchers chose a particular one from among many in order to obtain results that they had decided on in advance.

        But there is a problem, and I really hope the field can clean up its act, because we’re all the poorer if there’s enough public perception of “of course you can’t believe the statistics.”

      • Chris in reply to Chris says:

        What I find most interesting about the anti-vaxxers choosing this case is that it actually showed a difference in the age of vaccination for autistic children compared to non-spectrum children. It seems like the anti-vaxxers would try to dig into the explanation (pre-school programs for autistic children require vaccines, so they tend to get them earlier on average) rather than some non-mystery about the data itself.

        I agree that having to exclude a non-trivial % of the cases because the data was less than perfect is a fair concern (in their defense, they excluded 40% because 3-5% of the cases had missing race information, and the race information for the other 95-97% wasn’t as verified as the cases with birth certificates, so they really were being extra-cautious), but in this case they only excluded those cases from one analysis, the comparison by race, which was hardly the main object of the study in the first place. It’s the sort of thing that, if it had shown even a hint of a relevant difference between the two races, would probably have led to immediate followup studies.

        I should be more specific about what they used. They used an existing sample (autistic children in the Metropolitan Atlanta Developmental Disabilities Surveillance Program), for which the data they used was readily available. Since the information they were using is basically public information: school records, medical records, and birth certificates, along with autism diagnoses (which was the reason they used the existing sample, I assume, rather than wait for 1000 diagnoses in a selected study population), and since they have plenty of information about how the children were selected and how the data was collected, it doesn’t strike me as an issue for this type of study. What’s more, as the birth certificate issue shows, they verified much of the information independently. I’m not a medical researcher, though, so I could be wrong.

    • Mad Rocket Scientist in reply to Michael Cain says:

      Back when I worked for Ginormous Aerospace Company, one of my tasks was to take wind tunnel data, which is horribly noisy & had large chunks that were discarded, and make a nice set of smooth curves out of it that could be used to predict certain aerodynamic qualities of the airframe. We did this with a tool that had been written in house, by engineers (not developers), back while I was still in grade school. The code was barely maintained, certainly never updated (partly because of time, partly ignorance, & partly because of the FAA), & I had no access to the source code.

      In short, I had to feed this tool noisy data sets, which it would then, via the magic of the black box, allow me to interpolate the data to smooth out the curves, and extrapolate out to conditions not explored in the wind tunnel. The final shapes of those curves were almost entirely up to me & my engineering experience, since I could apply an array of methods & weights to the algorithms.

      To say that this made me anxious is an understatement. Basically I spent weeks torturing the data until my lead engineer decided he liked the look of the curves. Luckily the only reason the plots even existed was because the FAA demanded that they exist, and flight test would have the final say on what we would use internally for those predictions. But it was quite the lesson in just how much one can legitimately torture data.
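
      MRS’s predicament translates into code readily. A toy sketch, assuming scipy; the “wind tunnel” data below are simulated, and the smoothing factor s stands in for the array of methods and weights he describes. Two defensible settings produce two different curves, and two different extrapolations:

```python
# Same noisy data, different smoothing knobs, different "truths".
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
alpha = np.linspace(0, 10, 40)                 # e.g. angle of attack, degrees
lift = 0.1 * alpha - 0.002 * alpha**3 + rng.normal(0, 0.15, alpha.size)

light = UnivariateSpline(alpha, lift, s=0.5)   # follows the noise closely
heavy = UnivariateSpline(alpha, lift, s=5.0)   # smoother, engineer-pleasing

# Both will happily extrapolate past the tested range; neither is "the" answer.
for a in (9.0, 12.0):
    print(f"alpha={a:4.1f}  light={float(light(a)):+.3f}  heavy={float(heavy(a)):+.3f}")
```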

  11. KatherineMW says:

    I think there are two parts to this problem, Patrick.

    One is that it’s nigh-impossible to convince a conspiracy theorist they’re wrong. They’ve already decided they’re right, and the actual balance of evidence is immaterial. Trying is, yes, about as enjoyable and useful as slamming your head into a brick wall.

    The other is the challenges laypeople face in understanding complex science, and that scientists face in communicating with laypeople. This is a genuine problem that needs to be addressed; if scientists default to telling laypeople “look, you just have to trust us on this, because we know more than you, and explaining why is too complicated for you to understand,” then no, plenty of laypeople – not just conspiracy theorists – aren’t going to trust them. We need to be able to explain science – including our statistical techniques – in ways that are comprehensible to people without one or more university degrees in the subject.

    I know next to nothing about law, but when Burt posts one of his lengthy discussions of a case, I can generally get some kind of handle on what he’s saying and what the different arguments are, and it’s a lot more useful than a few sentences saying “the case should be decided [x] way, I know because I’m a lawyer, you just have to trust me on this”.

    • Patrick in reply to KatherineMW says:

      We need to be able to explain science – including our statistical techniques – in ways that are comprehensible to people without one or more university degrees in the subject.

      The link I just posted below was one of the things that the Universe conspired to bring to my attention.

      It’s a pretty terrible story on all ends. Really depressing.

      It’s been clear for a while that there’s a severe disconnect between how scientists communicate risk, how risk managers communicate risk, and how laypeople infer real risk, but the implications of that are sometimes pretty severe.

      (here Jim can jump back in and accuse me of thinking people are stupid again)

      • Kim in reply to Patrick says:

        Mrfl. 10% chance of the world ending (throw in some more weight on “lose all life on the eastern seaboard”)
        Yeah, that was a fun weekend.
        Everyone who had any clue was watching on tenterhooks.

        Dailykos had some experts on, giving a blow by blow.
        Wild times, eh?

      • Patrick in reply to Patrick says:

        You need to tweak your algorithm again.

      • Glyph in reply to Patrick says:

        Buttock mustaches.

      • Kim in reply to Patrick says:

        Patrick,
        The algorithm is named Donkey’s Ears.
        (not a Shakespeare ref).

      • Jim Heffman in reply to Patrick says:

        The whole article is about incomplete or insufficient communication between experts and laymen, which is what I’ve been going on about, and so I’m not sure why you’re name-checking me here.

        You’re right that there comes a point at which all you can say is “I’ve explained the risks inherent to what you’re doing; if you keep going then it’s on you”. This doesn’t absolve you of the obligation to respond to further butwhatabouts, though.

    • Michael Cain in reply to KatherineMW says:

      We need to be able to explain science – including our statistical techniques – in ways that are comprehensible to people without one or more university degrees in the subject.

      How about one or two university courses in the subject? Probability and statistics are a language for talking clearly about uncertainty. If the question is at all complicated, trying to talk about it without some math will eventually degenerate into “trust me”. Consider the Monty Hall problem, which pops up here from time to time. Explaining it assumes at least some knowledge of fractions and basic rules about discrete distributions (e.g., that the sum of the probabilities of all possible outcomes must be one).

      A vast array of statistical testing is some variation on “I’m estimating how likely/unlikely it is that a sample was drawn from a population with a certain distribution.” Once you have that basic concept down, so that you understand what it means when someone says “the probability of drawing a sample like this one from a population with those characteristics is less than 5%,” you can understand a lot without knowing the details. Absent that understanding, it’s going to be just hand-waving.
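
      Since the Monty Hall problem keeps coming up, here is the simulation version of the argument; it needs no probability background at all, just patience with a random number generator:

```python
# Monty Hall by brute force: switching wins about 2/3 of the time.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither your pick nor the car.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate {wins / trials:.3f}")
# switch=False: ~0.333   switch=True: ~0.667
```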

      • Mad Rocket Scientist in reply to Michael Cain says:

        I only bring this up because I saw this recently.

        In climate science, homogenization of temperature records is something that seems to really irk people who don’t understand it. Here is a nice primer on it. The quick & dirty version is that homogenization is the statistical process by which data are corrected for artificial variance. The most common reason temperature records are homogenized is the urban heat island effect (urban areas grow around the recording station & cause a spike in the local temperature). It’s a perfectly valid exercise & in & of itself nothing to fret about. Still, one must be completely transparent with regard to how the homogenization is taking place & what references are being used.

        Especially when the homogenization changes the sign on the slope of the trend line. If the raw data shows a slight cooling, and the homogenized data shows dramatic warming, you’d better be ready to show your work.
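
        For readers who want the flavor without the primer, here is a bare-bones sketch of the idea. Everything below is invented toy data; real homogenization algorithms compare many neighboring stations and use formal changepoint tests, not this single crude split:

```python
# Toy homogenization: remove an artificial step using a reference station.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2020)
climate = 0.01 * (years - 1950) + rng.normal(0, 0.2, years.size)  # shared signal

reference = climate + rng.normal(0, 0.1, years.size)  # rural station: clean
station = climate + rng.normal(0, 0.1, years.size)
station[years >= 1985] += 0.6                         # urban growth adds a step

# The difference series isolates the artifact; find the biggest jump in its mean.
diff = station - reference
split = max(range(5, years.size - 5),
            key=lambda i: abs(diff[i:].mean() - diff[:i].mean()))
step = diff[split:].mean() - diff[:split].mean()

homogenized = station.copy()
homogenized[split:] -= step   # the adjustment, documented and reproducible
print(f"break detected at {years[split]}, step of {step:+.2f} removed")
```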

  12. Patrick says:

    The Universe has conspired to remind me over the last few days that “complaining that you feel like Sisyphus” doesn’t change the fact that “You are Sisyphus. Here’s your rock.”

    So I formally retract the “I give up” part of this post, anyway.