Two Politicians Who Didn’t Say Quite The Reprehensible Things Said Of Them


Burt Likko

Pseudonymous Portlander. Homebrewer. Atheist. Recovering Republican. Recovering Catholic. Recovering divorcé. Editor-in-Chief Emeritus of Ordinary Times. Relapsed Lawyer, admitted to practice law (under his real name) in California and Oregon. On Twitter, to his frequent regret, at @burtlikko. House Likko's Words: Scite Verum. Colite Iusticia. Vivere Con Gaudium.


129 Responses

  1. Avatar zic says:

    Well, there is a difference: Walker is talking about fetal ultrasounds as a result of actions that limit women’s control of their bodies. (I read a few days ago that it costs, on average, an additional $150 or so for women seeking abortions in states with waiting periods; I presume they have to pay for the ultrasounds as well, and that’s not including lost wages.) Sanders, on the other hand, was addressing some of the weird stuff that a lot of people hoped would work its kinks out when women were given more control.

    Perhaps they’re two sides of the same coin, but I don’t see Sanders seeking to limit rape porn on the internet or otherwise control individual decisions the way I see Walker doing.Report

    • Avatar LeeEsq says:

      It turns out that many very feminist women also have kinks that are contrary to feminism. People are very strange sometimes.Report

      • Avatar zic says:

        I think you’ll find the kinks in the closet of the supposedly chaste.Report

      • Avatar Brandon Berg says:

        I once had a girl tell me that the way she felt about me made her feel like a bad feminist.

        That was a good day.Report

      • Avatar veronica d says:

        @leeesq — Just curious, which kinks are “contrary” to feminism, but that are reasonably common among feminist women?Report

        • Avatar Kim says:

          To the people who aren’t “kink positive” (that is to say, it’s your mind, and you’re stuck with it, so why not have some Freaking Fun)… probably all the girls who get off on rape and bondage.

          It’s not feminist to not have a choice about sex, after all.

          Me, I say it’s just a freaking fantasy, and have fun with it. Encourages creativity and problem-solving. Try not to get locked in a rut, and you’re good.

          (Oh, and some people are downright disturbed about vore. But I don’t think that’s explicitly antifeminist, mostly because nobody’s bothered to label it such)Report

        • Avatar LeeEsq says:

          @veronica-d, I guess what Kim said. A lot of feminist women, taking the broadest possible definition of the ideology, seem to consume a lot of romances and erotica where the leading man is an arrogant, misogynistic, but really handsome guy. I know this is just a fantasy, but it always seemed at least a little hypocritical to me and a lot of other men.Report

          • Avatar veronica d says:

            @leeesq — Do you have evidence of this?

            Note that woman and feminist are not synonyms. The feminists I know tend to ignore the trashier side of the bookstore, and we pretty much mock stuff such as 50 Shades of Misogyny. Even when we read that stuff, it is mostly to provide critique.

            The feminists I know push hard for better representation of women in the media, with more stories by and for women. We want women who are, in the public sphere, equal to men in influence and power.

            And in the bedroom — we want it hot. This bothers you?

            I don’t deny that I’m kinky, and that includes some rather explicit fantasies. But so what? If a man is resentful of that, well too fucking bad for him. That’s not the same as wanting to be the saccharine heroine of a trashy pot boiler.

            By the way, this does not mean we want our men to be spineless sadsacks. Those are not the only forms of masculinity available.

            Not to be too personal, but this sounds like it is less about feminists falling short of our values and more about your resentment of women’s choices. You should work on that.Report

            • Avatar LeeEsq says:

              Let’s just say that I have a lot of sympathy for women who are bothered by the depictions of hot women in media. Based on media descriptions of what a hot man is, I am the near total opposite physically.

              As for “feminist,” I am using the broadest possible definition: the belief that women should be legally and socially equal to men, with the same rights and opportunities. Most, but not all, women in the developed world are at least feminists by that definition, even if they aren’t adherents of a more specialized form of feminism.Report

              • Avatar zic says:

                So you don’t think hot men should be depicted in movies, etc?

                I object.Report

              • Avatar LeeEsq says:

                I did not say this at all.Report

              • Avatar zic says:

                I know, but it was fun to tease you, because you were doing that thing of conflating some individual or group of individuals with an entire group (what some women read/some feminists believe for all women).

                Feminist means equal access to things; so, for example, women have just as much right to kinks as men. More importantly, as much right to have their kinks gratified as men without shame, so long as those kinks are gratified consensually.

                There is a swath of feminism that is very prudish in response to the constant use of women’s sexuality to sell stuff. Personally, I think what’s missing here is the female gaze — the fact that men can also be objects, which is why I so artlessly put words in your mouth. Please forgive my humor at your expense, @leeesqReport

              • Avatar veronica d says:

                It’s hard to talk about male versus female desire outside of the matrix of male social control. So yes, some feminists object to all male-gaze portrayals of sexuality, but that seems wrong to me. Men are entitled to their sexual feelings — provided their sexual practice is consent based. But when men control so much of the media (and this is a measurable fact) then problems arise.

                When women demand equal say, men (#notallmen) don’t like it. It’s almost as if sexism is a real thing.Report

              • Avatar Kim says:

                Are you bothered more by gay guys selecting who to put in media, or straight guys? (Hollywood is still so sexist that talking about women selecting people is … difficult).Report

              • Avatar LeeEsq says:

                Like I noted in a previous thread, the entertainment industry is still sexist in ways that would invoke a legal stomping in other industries.Report

  2. Avatar Will Truman says:

    For what it’s worth, I found nothing odd or objectionable about Walker’s (actual) statement about the coolness of ultrasounds and/or showing them off. I kept a scan of the ultrasound on my phone. I didn’t show it off to everybody, but I posted it here.Report

  3. Avatar Chris says:

    I have had people show me ultrasound pictures, though not that often. Then again, I don’t know Walker.

    What irks me about that statement is the other part.Report

  4. Avatar Saul Degraw says:

    1. FWIW, I have seen many people put ultrasound pictures on social media as a kind of “We are having a baby” announcement. I have also had people share them in social situations on their phones.

    2. The 1970s seemed to be a very different time. He also probably never imagined that he would be a U.S. Senator.Report

  5. Avatar Jaybird says:

    I have seen fetal ultrasound pictures being passed around, for what it’s worth. Like Chris said, the irritating part of his statement was the implication that if they really knew what they were doing, they would be doing something else. This whole “the only reason you don’t agree with me is because you don’t have all of the information” thing is pernicious.

    As for Sanders, it was the 70’s.Report

    • Avatar Michael Drew says:

      +1 on Sanders. Not that it’s not weird – to put those thoughts down on paper, and, as Burt says, send them in for publication, that is.Report

    • Avatar Morat20 says:

      It’s very paternalistic, but unfortunately Justice Kennedy agrees.

      A lot of these laws have this weird implicit belief that women just don’t know what’s happening, hence the ultrasounds and laws and requirements, because otherwise the poor dears might just trip into an abortion thinking they were just getting a checkup! Kennedy’s actually pretty awful about it. It’s one case where the term ‘mansplaining’ really seems to fit.

      I’ve known several women who had abortions. Not one of them was in any way confused as to what was happening.Report

      • Avatar Kim says:

        Yanno, if we wanted to, we could actually Fix The System, so that girls could have kids relatively consequence free… (and then give them up for adoption).

        But we Don’t Want That, of course.Report

    • Avatar Murali says:

      “The only reason you don’t agree with me is because you don’t have all the information” is the only* consistent position you can take if you want to continue to maintain your beliefs.

      Otherwise you are forced into fairly widespread scepticism, given the range and extent of things that people disagree about.

      *Ok, not the only one. You could also say that the other person is too stupid/irrational to make the correct inference even if you shared all information.Report

      • Avatar Jaybird says:

        Lotta axioms out there, Murali. Lotta matters of taste, too.

        And, for that matter, lotta reasons to conclude that fairly widespread skepticism is a better working position than dogmatic certainty.

        Depends on your axioms, though.Report

        • Avatar Murali says:

          I’m fairly certain that if two people had identical evidence regarding any given issue but reached different conclusions about that issue, at least one of those people’s reasoning is irrational in that particular instance.

          Let me shamelessly plug my own paper here.

          • Avatar Jaybird says:

            Even if they come from different cultures? Even if some of their thinking revolved around preferences?Report

            • Avatar Guy says:

              In that case they’re probably not working from truly identical evidence. Your culture and preferences influence how you reason about evidence such that they effectively constitute their own categories of evidence.Report

              • Avatar Jaybird says:

                Without getting into the tartanic “truly” there, I’m not seeing how conceding the point will get me away from the conclusion of “fairly widespread skepticism is a better working position than dogmatic certainty.”Report

          • Avatar Kim says:

            yes, but we LIKE irrationality. We LIKE optimism, even if it’s stupid. We don’t want to be rational, because rational is fucking depressing.Report

          • Avatar veronica d says:

            I’m pretty sure that no one will ever explain what “rational” even is, except in a way that is completely self-serving.

            My deal is, I actually know a fair amount of logic, as in I can follow Gödel’s proofs (for the most part), and I have a decent handle on things like formal semantics and model theory. Knowing all this, the idea that logic provides the core of what is “rational” — well, it seems a bit silly.

            And I know enough statistics and Bayesian stuff and smatterings of CogSci and linguistics, and all of this adds up to our being pretty complicated in how we think about stuff.

            For example, I’ve been working my way through this book.

            Our words don’t even work the way that people think they do, which is all to say, if you want to speak logically, you have to use language in an incredibly stilted way. And how do you know you’re really-really-really nailing down the full denotations of all your terms in precisely the way you need to be perfectly “rational”? Furthermore, why would you suppose your listener has accepted your model in full measure, each finely grained edge of your system of denotation? Really? You think that happens? Every axiom? Even the vast majority of them you’ve never written down and couldn’t if you tried?

            Blah. This isn’t even wrong.Report

            • Avatar Murali says:

              @veronica-d @jaybird

              Epistemic rationality is not a matter of preferences. Epistemic rationality is about truth-conduciveness. An inference is epistemically rational just in case the confidence in the conclusion given the evidence is equal to the likelihood that the conclusion is true given that the evidence is true. Neither am I claiming that people are, in general, rational, or even that they ought to be. I am saying that this is what epistemic rationality consists of, at least in part.Report

              • Avatar veronica d says:

                @murali — Right, but I’m a computer scientist, so when you say this, I look for the actual computation to be performed, and I bet you cannot provide it. And I bet if you have something you think provides it, it is either not Turing computable or (if it is) it’s way out in EXPSPACE and as a practical matter not-computable.

                You might be saying “square circle” or “set of integers a, b, c, n, each greater than 2, such that a^n + b^n = c^n.”

                You can say those words, but they denote the empty set.Report

              • Avatar Kim says:

                I agree with v, I want the math behind this. Can you provide?Report

              • Avatar Murali says:

                I think the distinction between not Turing computable and Turing computable but not practically computable is very important. It’s like the difference between atheism and agnosticism, or nihilism and scepticism. Simply as a matter of getting it right, the distinction is just good epistemic hygiene. But there are some pragmatic upshots. For at least some things which are not practically computable even if Turing computable, partial solutions may still be useful. Keeping our conceptual house in order also allows us to know when our reasons for belief have to do with factors that make our belief more likely to be true, and when we are believing something because it is convenient or makes some practical problems more tractable.Report

              • Avatar Jaybird says:

                It’s a continuum, not a toggle. It’s certainly better to think about things in such a way that someone else can see how you reached your conclusion from your premises and say “assuming your premises are true, your conclusion follows”, sure.

                But it’s a hell of a leap to get from there to “not only do I have all of the premises I need and no more than those, my premises are true and people who don’t agree either don’t have all of the information or they’re thinking illogically.”Report

              • Avatar KatherineMW says:

                Come again, without sounding like you swallowed a dictionary, please?Report

              • Avatar Murali says:


                We don’t just believe in an all-or-nothing sense; our beliefs can come in degrees. For any given proposition, the attitude we take towards that proposition could range from complete disbelief to full belief, or anything in between. We can describe these degrees of belief by a probability: the degree of belief you have in a proposition is the likelihood of truth you assign to that proposition.

                We can also describe an inference in the following way: an inference is a process by which you move from a body of evidence (let’s call it E) to some degree of belief about a proposition (let’s call that proposition B)

                E → p(B)

                But, p(B) is just p(B|E) x p(E) + p(B|not-E) x p(not-E)

                However, to possess evidence just is to set p(E) = 1. Of course, when we do this, we build the probability of erroneous observation etc. into E (at least insofar as we are aware of it).

                Thus, p(B) = p(B|E)

                But p(B|E)* just is our evaluation of the extent to which E supports B.

                All this means is that once we possess some evidence E that bears on B, consistency requires that our confidence in B correspond with our evaluation of the evidential support that E provides for B.

                Now, you might say that all this is well and good, but what if there are multiple, equally good interpretations of the evidence?

                Well, we can deal with this too. Consider the case where there are three equally good interpretations of the evidence. Call these E1, E2, and E3.

                If you think that E1, E2 and E3 are equally good, that means you assign them equal likelihood.

                So, p(E1) = p(E2) = p(E3)

                Assuming that E1, E2 and E3 are the only valid ways of interpreting the evidence,

                p(E1) + p(E2) + p(E3) = p(E) = 1

                p(E1) = 1/3

                Now, let’s generalise the result that we earlier got.

                Recall that p(B) = p(B|E)

                However, per the objection, the value of p(B|E) depends on which interpretation of E we adopt (E1, E2 or E3)

                In that case,
                p(B|E) = p(B|E1) x p(E1) + p(B|E2) x p(E2) + p(B|E3) x p(E3)
                = [p(B|E1) + p(B|E2) + p(B|E3)]/3

                That is, given that you believe that each way of interpreting the evidence is equally good, you are logically required to give equal weight to each way of interpreting the evidence. In order to pick one way of interpreting the evidence over another, you have to think that those other ways are invalid. Thinking that they have any validity (even if less than yours) will require that you adjust your degrees of belief accordingly to some extent proportional to the weight you give those other ways of interpreting the evidence.

                Since consistency requires that you split the difference, picking any of E1-E3 when you think that they are equally good would be inconsistent and thus irrational. Since splitting the difference results in one unique answer, there is always one uniquely rational response to a given body of evidence.

                *or alternatively one may want to describe the evidential support relation as p(B|E)/p(B|not-E)Report
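                Murali’s “split the difference” rule above is easy to check numerically. A minimal sketch — the conditional beliefs p(B|Ei) below are invented for illustration and come from nowhere in the thread:

```python
# Averaging over three equally good interpretations of the evidence,
# per the derivation above. The p(B|Ei) values are made up.

p_B_given = [0.9, 0.6, 0.3]   # p(B|E1), p(B|E2), p(B|E3)
weights = [1 / 3] * 3         # equal weight, since the interpretations are equally good

# p(B) = p(B|E) = sum_i p(B|Ei) * p(Ei)  -- the law of total probability
p_B = sum(p * w for p, w in zip(p_B_given, weights))
print(round(p_B, 6))          # prints 0.6 -- the one consistent degree of belief
```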

              • Avatar Stillwater says:

                And that last comment opened up for me a whole other canaworms.

                Consider AGW denialism or acceptancism. None of us (me, certainly) understands the science behind these models, the data, the theories and whatnot with enough confidence to assign any probability to the conclusion that AGW is (whatever). Instead, what we do is attribute a level of epistemic rationality and technical expertise to the scientists working in the field such that one of two things happen: a) I feel justified, based on the claims expressed by scientists in the field, in ascribing a high-level of confidence to the belief that human activity is altering the climate in disastrous ways, or b) I feel justified, based on an evaluation of the scientists working in the field, in ascribing a low level of confidence to AGW because scientists are looking for data to fit a theory (or whatever).

                So even the confidence I attribute to others depends on a bunch of preconditions, some of which – maybe all of which! – are based on preferences.

                In other words, I’m not sure how you can define epistemic rationality in any substantive way without circularly defining “preferences” as an instance of epistemic irrationality. That seems like a step too far, to me.Report

              • Avatar Murali says:


                I’m not seeing how preferences figure in the “alleged” counterexample that you provided.

                It’s also not clear how the AGW issue (or expert testimony issue) contradicts anything I say. Are you suggesting that disregarding expert opinion is just as good a response to expert testimony as deferring to it?Report

              • Avatar Jaybird says:

                In order to pick one way of interpreting the evidence over another, you have to think that those other ways are invalid.

                Not at all. You can have a preference based on aesthetics.

                Since Vanilla, Chocolate, and Strawberry are all valid ice cream flavors, picking one over the others implies that you think that the other two are invalid?


              • Avatar Murali says:

                Sure, you could pick beliefs that are aesthetically pleasing to you, but if you do that, you cannot consistently claim that your beliefs are formed in a way that is responsive to factors that indicate the truth of the belief. i.e. you cannot consistently claim to be epistemically rational.Report

              • Avatar Jaybird says:

                Sure, you could pick beliefs that are aesthetically pleasing to you, but if you do that, you cannot consistently claim that your beliefs are formed in a way that is responsive to factors that indicate the truth of the belief. i.e. you cannot consistently claim to be epistemically rational.

                This counter-counter-argument fails to take into account the premises of the argument that the counter-argument was countering. That is: “what if there are multiple, equally good interpretations of the evidence” and “In order to pick one way of interpreting the evidence over another, you have to think that those other ways are invalid.”

                Again: I don’t have to think that those other ways are invalid.

                I am radically free.

                I can do whatever I want.Report

              • Avatar veronica d says:

                @murali — Have you read any of Judea Pearl’s work?

                The thing is, Bayesian reasoning is computationally intractable, which is to say that the complexity increases exponentially as you increase the dimensionality of the problem. Furthermore, the method presupposes we have agreed on what variables to measure.

                Two agents can agree on a full set of measures, but they have to fit those measures into a model in order to do anything with the data.

                Pearl’s work gives us a method to take the set of measures and from them construct a causal graph, which is to say a graph of conditional independence. The problem is that Pearl’s approach delivers an underdetermined model, which is to say, for any particular collection of data, various candidate models can fit. Furthermore, Pearl’s techniques can sometimes suggest the existence of hidden causal variables. However, it gives no help in identifying those variables. In real life problems, these things are often key points of disagreement.

                But more, there is no evidence that people actually reason this way. Nor is there reason to suppose that agents who did so would outperform those who continued to rely on normal human cognitive shortcuts. Primarily the reason for this is that action in the world is time-sensitive, and reasoning itself consumes resources that could otherwise be spent executing plans.

                Which is why military command and control theories emphasize rapid response. A correct decision made too late is worthless. A decision that lacks the resources to execute is also useless.

                Current reasoning systems, such as those based on Q-Learning, will make different trade-offs between speed, computational investment, and statistical power. In the case of learning video games (a recent application of these ideas [pdf]), the time constraints are given as part of the problem. For general reasoning, however, that is not the case.

                It is evident that human “system-1 versus system-2” reasoning is making a similar trade-off, and it appears this is not a flaw in our evolution, but a fundamental limitation of computational intelligence. We can no doubt codify better approaches to rationality, but I suspect that our search for better reasoning tools will be within a complex fitness landscape with many local maxima. There is perhaps no achievable true optimum.


                The way I approach complex topics, such as dating or math, uses the full battery of my life experience. I could agree with you on a thousand stated premises and axioms, but still disagree with your conclusion. Cuz reasons.Report
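                The underdetermination veronica d describes shows up even in the smallest possible case: the two-node causal models “A causes B” and “B causes A” are Markov equivalent, so no amount of observational data can separate them. A toy sketch, with an invented joint distribution:

```python
# Both factorizations of an arbitrary joint over two binary variables
# reproduce the data exactly, so the causal arrow is underdetermined.
# The joint table is made up for illustration.

joint = {(0, 0): 0.30, (0, 1): 0.10, (1, 0): 0.20, (1, 1): 0.40}

p_a = {a: joint[(a, 0)] + joint[(a, 1)] for a in (0, 1)}  # marginal of A
p_b = {b: joint[(0, b)] + joint[(1, b)] for b in (0, 1)}  # marginal of B

# Model "A -> B": p(a, b) = p(a) * p(b|a)
model_ab = {k: p_a[k[0]] * (joint[k] / p_a[k[0]]) for k in joint}
# Model "B -> A": p(a, b) = p(b) * p(a|b)
model_ba = {k: p_b[k[1]] * (joint[k] / p_b[k[1]]) for k in joint}

# Worst disagreement between either model and the observed joint:
worst = max(max(abs(model_ab[k] - joint[k]),
                abs(model_ba[k] - joint[k])) for k in joint)
print(worst)  # effectively zero: both models fit the data perfectly
```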

              • Avatar Murali says:


                Interesting stuff; I haven’t had the time to read all (or, for that matter, most) of it yet. It seems that for any situation where multiple models will fit, we could always do better by splitting the difference in output between the different models. Does Pearl say anything about why I cannot do this?

                I am fully prepared to concede that Bayesian reasoning is computationally intractable and that we don’t actually reason this way. All that means is that we are not perfectly rational. But this should be a banal point. All I’m trying to show is that errorless (i.e. perfect) reasoning provides a single output given a single set of evidence.Report

              • Avatar Stillwater says:

                All I’m trying to show is that errorless (i.e. perfect) reasoning provides a single output given a single set of evidence.

                One problem with that, as I mentioned earlier, is this: the fact that we can’t consider all the evidence (either because we’re not omniscient or because we lack the brain power to do so), coupled with my realization that I haven’t considered all the evidence, entails that my conclusion will likely be held with less certainty than my premises (insofar as we’re talking about empirical evidence and conclusions). So at that point I’m epistemically constrained to an argumentative structure in which I assign a probability to evidence E, assign a probability to a set of unknowns which may (for all I know!) defeat E, and conclude pC given the two of them.

                My point is that, since it’s in principle impossible for me, at this moment in time, to include all the evidence relevant to determining C, C will likely be held with less confidence than E.

                On the other hand, perhaps you’re more interested in a subjective account of perfect rationality, in which case preferences and aesthetics and seemingly arbitrary confidence assignments (eg, belief in God =1) strike me as perfectly reasonable. In that case, you’d get a perfect tracking from pE to pC every time.Report

              • Avatar Stillwater says:

                Maybe this is the best way to say the point I’m trying to make: it seems to me you can assert that perfect reasoning is defined as perfect tracking from pE to pC, but I don’t think you can demonstrate that, or logically prove it, without begging questions.Report

              • Avatar veronica d says:

                @murali — It is hard to “split the difference” when one model says “A causes B” and the other says “B causes A.”

                And yes, this can happen, given a sufficiently complex set of confounders.

                Furthermore, we haven’t even mentioned the assumptions that go into decision theory, such as the idea that “rational agents” must follow the Von Neumann-Morgenstern axioms, but why should this be the case? It is clear that human values do not in general follow these axioms (cite: pretty much all of behavioral economics), thus it is special pleading to say “Well that’s by definition rational.”

                It certainly seems true that to model rationality through decision theory, some sort of total order must be imposed. However, here you have done all your work in defining “rational” that way. Partial orders over dynamic fitness landscapes seem to describe how we actually value stuff, which is again utterly intractable.

                It’s all heuristics and feedback loops, all the way down.Report

              • It is hard to “split the difference” when one model says “A causes B” and the other says “B causes A.”

                “There is a feedback loop between A and B, so that they reinforce each other.”Report

              • Avatar Stillwater says:

                veronica d,

                It is hard to “split the difference” when one model says “A causes B” and the other says “B causes A.”

                Given that those two models can’t both be true (let’s suppose that’s the case), it constitutes a reason to discount our probability assignments of each model, on the assumption that at least one, if not both, contains an error. That is, the fact that both models can’t be true doesn’t mean I’m irrational in believing each of them individually is likely to be true. Seems to me, anyway. And that will certainly be the case with theoretically-based beliefs. (I mean, on Murali’s view this stuff is gonna get messy. I think he knows that. Whether or not it’s in principle impossible to provide a rational assessment of layered, perhaps recursive, probability assignments would have to be shown.)Report

              • Avatar veronica d says:

                @stillwater — What you are missing is that these are the Bayesian models, the things over which you are computing probabilities and conditional independence. Thus saying “we’ll assign a probability to the model” requires that you “model the models.”

                This is a meta-problem and theoretically unstable. After all, how do you know you’ve even begun to model the problem correctly? Bayes’ rule gives us:

                p(a|b) = [ p(b|a) * p(a) ] / p(b)

                Which is all fine, but what do “p(a)” and “p(b)” even mean when a and b are models and not measurable facts?

                Some people propose to rate models according to Kolmogorov Complexity, under a variation of Occam’s razor, which says a lower Kolmogorov Complexity represents a higher probability of truth.

                Except of course in general Kolmogorov Complexity is computationally intractable — in fact in the general case it is not even Turing computable — so one must select some proxy. How do you make this selection? I hope you now see saying “Bayes’ Rule” is kicking the can down the road.

                (Do you see it? By using some proxy for Kolmogorov Complexity, we are modeling the space of possible models. But what is the probability that Kolmogorov Complexity is the right approach? How do you model that? What do you call “evidence” in such a case?)


                The neat thing about Bayesian reasoning is that it is provably optimal. This is clearly desirable. However, we should assume that rational agents, whatever we take that term to mean, are things that could exist in the world, and thus computability and complexity theory should guide our insight.

                Bayesian reasoning is provably optimal, but only after a model has been selected. It does nothing to help you select models, nor does it promise that the correct models will be computationally tractable. In general they are not.

                Reasoning is hard.Report
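                The intractability claim above is easy to make concrete: an unrestricted joint distribution over n binary variables has 2^n outcomes, hence 2^n − 1 free parameters, so exact Bayesian updating over the full space blows up long before n reaches any realistic size. A quick sketch:

```python
# Parameter count of a full joint distribution over n binary variables:
# 2**n outcomes, so 2**n - 1 free parameters to estimate and update.

for n in (10, 20, 50):
    print(n, 2 ** n - 1)
# prints:
# 10 1023
# 20 1048575
# 50 1125899906842623
```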

              • Avatar Stillwater says:

                Do you see it?

                I’m not sure I do. (Edited to add I’m reading up on Kolmogorov Complexity right now.) I mean, I get the basic concepts in play, I’m just not sure I understand how what you’re saying constitutes a problem for Murali’s view or what I wrote in defense of it.

                So let me take a shot at it. In lay terms, are you saying that, given that a meta-model won’t give us the answers we’re looking for (without begging questions, say), choosing any particular model to represent rationality is itself either arbitrary or circular (hence, irrational)?Report

              • Avatar Stillwater says:

                veronica d,

                One other thing I’m confused about: if we’re talking about theoretical models here – that is, models that exist at least one level above an account of first order facts – then how can these models justify (or entail, whatever) conclusions about the arrows of causality (which are first order facts)? When you say “A causes B” upthread, are you referring to meta-causality or first order causality?Report

              • Avatar veronica d says:

                @stillwater — To use Bayes, we have to reason over a probability space. However, any useful probability space will have far too many dimensions to actually perform the computations. Moreover, in essays like this we skip that issue, insofar we discuss toy problems, which completely obscure the actual complexity. It’s like pretending that learning that basic one-dimensional gravity equation in a textbook will let you design a rocket to fly to Pluto. Like, there are some details there you’ll need to address. For example, what kind of fuel will you use? What will your engine be? And then you’ll start looking at Lagrangian/Hamiltonian mechanics, and from that applications to orbital dynamics and modern models of the solar system. Etc.

                Drawing grand conclusions from toy problems is not Science. It hardly deserves to be called Philosophy. (Although I could be smug at this point.)

                So in practice we abstract away and limit the structure of the probability space. Which is fine. We have to, to make the problem tractable. But how do we know we’ve made good choices? Well, that’s a hard problem, right? In fact, that’s the big problem.

                If you say, “We can apply Bayes to model selection,” you’ve kind of missed the difficulty. The model is the context inside of which we apply Bayes. You cannot apply Bayes as bare truth. Instead, you apply Bayes within a model.

                To use it to select models, you need a bigger model, one level removed from bare observation.


                “Rational Agents” (whatever they are) not only observe, they not only report, they also act. We model the world, and in turn gather evidence to support or modify our models, in order to guide action.

                Not acting is a kind of action. If we gather data and are uncertain of a causal arrow, no probability measure can split the difference. The reason is this: Bayes’ Rule is applied along the causal arrow.

                (Pearl has a summary of his thoughts here: . Note, his is not the only possible approach to this issue, but his math is solid and any other approach will be addressing the same mathematical facts with different structure and notation.)

                The key here is this: noticing correlation does not always help, since the knowledge that two variables are correlated does not tell us what an effective intervention would be. We model the world so we can act in the world, thus we need to know causality.

                The toy problems you see discussed in these conversations always present isolated facts with simple basic probabilities. They do not discuss confounding variables, nor how one decides which confounders to analyze. Nor do they suggest what to do if the data is uncertain about confounding factors. Bayes does not help with this, not at all. P(a|b) is showing a correlation. You need to understand causation.
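One standard concrete illustration of that gap is Simpson’s paradox. The sketch below hardcodes the well-known kidney-stone treatment figures (used here purely as an illustration): within every stratum one treatment looks better, yet pooling reverses the comparison, so the conditional probabilities alone cannot tell you which way to act.

```python
# Simpson's paradox with the classic kidney-stone counts:
# (successes, total) for each (treatment, stone size).
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(successes, total):
    return successes / total

def overall(treatment):
    s = sum(data[(treatment, g)][0] for g in ("small", "large"))
    n = sum(data[(treatment, g)][1] for g in ("small", "large"))
    return s / n

# Within every stratum, treatment A has the higher success rate...
assert rate(*data[("A", "small")]) > rate(*data[("B", "small")])
assert rate(*data[("A", "large")]) > rate(*data[("B", "large")])
# ...yet pooled, B comes out ahead. P(recovery | treatment) cannot
# say which treatment to choose; that requires the causal story
# (stone size influences both treatment choice and recovery).
assert overall("B") > overall("A")
```

Which way to aggregate is decided by the causal graph, not by the conditional probabilities — which is Pearl’s point about interventions.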

                You can try to model the “probability of causation” as a second-order fact, but the data you gather is first-order data being analyzed in a first-order model. If you see that A is correlated with B, you can analyze causation within a graphical model, but if the model is underdetermined, you’ll need more.

                (As an aside, do you understand how Gödel’s incompleteness proof worked? This is an analogous issue. Bayes gives you tools to work within some model, but you cannot ipso facto then apply Bayes the same way to the model itself, not without an extra batch of theory.

                You might develop that theory. Or @murali might. Show your work.)Report

              • Avatar Stillwater says:

                An inference is epistemically rational just in case the confidence in the conclusion given the evidence is equal to the likelihood that the conclusion is true given that the evidence is true.

                Maybe you’re appealing to a more technical definition or set of conditions than I’m aware of when saying this (certainly could be the case!), but I don’t think that can be correct. Epistemic rationality, insofar as we’re talking about inferences here, would be constrained at the limiting end by the requirement that the conclusion cannot be held with any more certainty than, but could be held with less certainty than, the premises (or as you say, evidence).

                That is, a conclusion rationally follows from the premises for logical reasons, and given that, the confidence in the conclusion cannot be greater than the confidence in the premises. But none of that has anything to do with confidence assignments to specific premises (and hence, assignments to conclusions as well).

                If we’re talking about epistemic rationality in the context of empirical beliefs, then almost as a matter of definition, our probability assignments to the premises will be less than 1, irrespective of the actual evidence presented. And all that may be very interesting from a philosophical pov (and it is!). But from a pragmatic pov, or perhaps a functional one (in the non-philosophical sense of that term), my confidence assignment that the bathroom floor exists behind the door such that I can walk into that room without fear actually is 1!

                The other thing is that a specific person’s probability assignments, in practice, will not and cannot include all the evidence available (since that would not only require omniscience, but from a functional pov, considerations of alternate and skepticism-inducing accounts of that evidence just aren’t at all relevant to getting thru your day).

                Hume talked about this, yes?Report

              • Avatar Stillwater says:

                Alsotoo, consider the case of theists and that particular person’s belief in god. Let’s say that such a person assigns a probability of 1 to the belief that god exists. The fact that there exist in this wide ole world very good reasons for either philosophically or empirically doubting that level of certainty is irrelevant to that person’s decision calculus. So we’re left with a situation in which you claim they’re epistemically irrational, and they say they aren’t. And the reason for the conundrum is their preference, and your lack of a preference, to assign a probability of 1 to the belief that god exists.Report

              • Avatar Murali says:


                I think the theist should just admit that when they believe things on faith, they believe those things in a way which is not responsive to the evidence available to them. That’s just what faith is about, isn’t it?

                Imagine that the ontological argument for the existence of God was actually successful and believers believed in the existence of God on the basis of that argument. Then, we would not call that belief faith. We would instead call it belief based on a sound argument. Faith is one way of believing things while knowing that they are inadequately supported by epistemic reasons.Report

              • Avatar Stillwater says:

                I think the theist should just admit that when they believe things on faith, they believe those things in a way which is not responsive to the evidence available to them.

                Maybe they should do that. 🙂

                But they don’t. They believe they have evidence (an intuition, an apprehension, acceptance of a “self-evident truth”, whatever) justifying not only the belief in god, but holding that belief with certainty. Now, it’s cool to go a-changin’ the semantics of the word “evidence” in a situation like this, just as long as we’re clear about making the change. But my guess is that a Christian (and a bunch of natural law types!) would be unwilling to give up “self-evident” as a legitimate form of evidence.

              • Avatar Stillwater says:

                One other thought on this:

                Insofar as we think of epistemic rationality as an imperative of some sort, something that rational people (non-circular use of the word “rational”, I hope!) ought to pursue, then when I hear a person express wildly outside-the-norm views with a tangible, visible confidence in the certainty of the views expressed, I might be inclined to attribute a higher probability to those views being true, yes? I mean, who am I to say?

                So my value-assignments will necessarily be influenced by not only how much confidence I attribute to the “rationality” of folks and their arguments, but also my confidence in the confidence of their own probability assignments.


              • Avatar Stillwater says:

                Following up on that last comment (dagnabit this is innerstin stuff!), suppose I find out about special relativity from a guy I know who talked to another person who read a book by a guy who interviewed scientists.

                My confidence assignments in the belief that “special relativity is true” might be (.5)*(.6)*(.8)*(.9)= .216. But it isn’t, yeah? I mean, people who don’t have any understanding at all of those issues attribute a high level of confidence to the theory being true, yes?

                Part of that might be background noise, of course. But it’s not like background noise admits of a probability assignment. Some Christians think the background noise known as “geology” is a test of believers faith in god. He apparently works in mysterious ways…Report
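For what it’s worth, the arithmetic in the chain-of-testimony example checks out, and the contrast with independent corroboration (a hypothetical gloss, not anything in the comment itself) may dissolve the puzzle:

```python
from math import prod

# A serial chain of testimony: if each link is independent and a
# failure anywhere breaks the chain, confidence multiplies down.
links = [0.5, 0.6, 0.8, 0.9]
assert abs(prod(links) - 0.216) < 1e-9

# But belief in something like relativity rests on many *independent*
# sources. If each of three sources is wrong with probability 0.3,
# all three are wrong together with probability 0.3 ** 3 = 0.027,
# so high lay confidence need not be mere background noise.
assert abs(0.3 ** 3 - 0.027) < 1e-9
```

Serial chains attenuate confidence; independent corroboration attenuates error — which is one reason people far from the primary evidence can still be sensibly confident.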

              • Avatar DavidTC says:

                My confidence assignments in the belief that “special relativity is true” might be (.5)*(.6)*(.8)*(.9)= .216. But it isn’t, yeah? I mean, people who don’t have any understanding at all of those issues attribute a high level of confidence to the theory being true, yes?

                That example is not very good. No one’s going to believe a scientific theory just because they heard some guy say he talked to someone who read it in a book somewhere. At least not any scientific theory that’s *important* enough to alter their behavior.

                But you guys have slipped into the ‘Rationality’ nonsense, cleverly arguing about adding probabilities and how that works. But that is all complete mental masturbation that assumes we can figure out the probabilities of the starting *pieces* of the problem. People sit there and argue how to add things humans *cannot know the starting values of*.

                Watching people talk about ‘Rationality’ (And I’ll admit I like to occasionally visit Less Wrong.) is like watching a TV show where a ‘genius’ can ‘calculate’, on the fly, the exact distance a random car will travel jumping over a gap. To the inch.

                I can watch such a show, but I’m deliberately ignoring the fact that such a calculation is based on quite a lot of unknowable things, and, when it ever has been calculated, it has been done by carefully measuring an assload of variables in advance. And the calculation also requires a lot of information that *is* knowable in advance but people would be extremely unlikely to know. It’s simply too much information for people to know about everything.

                But no…*handwave* humans can do all that on the fly, if they just know the math. So says ‘Rationality’.

                No. No one can have those facts and make a decision based on it. As @veronica-d said, not only do people not make decisions this way, logic suggests that trying to make decisions that way would actually hinder humans.

                Now, humans often make inferences about math problems that aren’t correct. You can ask people about the odds of one thing, and then when it’s modified to something else, they can say it became less likely when it actually became more, or vice versa. And by all means we should correct that, teach probability better.

                But the idea we’re going to start making decisions using math and probabilities is…wrong. Not only is that not actually possible, it wouldn’t even work any better than what we use currently. (Very few of our decisions actually involve weighing likely-good against likely-bad *at all*.)Report

              • Avatar Stillwater says:


                Well, it’s a bit more complicated than that, seems to me.

                First, my relativity example was an attempt to reveal pretty much what you just said: most of us (me included!) assign a high probability in the truth of special relativity based on a bunch of inputs that are divorced from the justifications which actually confirm the theory. I mean, I can talk about how there was an experiment in 1918 (or whatever) which demonstrated the light from a star bending as it approached a large gravitational field (but – see! – I can’t even do that!) but at the end of the day, I believe it because a bunch of people have said that it’s true. So while my certainty about that theory approaches 1, I can’t account for why my assignment is so high.

                On the other hand, what Murali’s interested in here – Bayesian reasoning – is just a fact of the world. And he’s attempting to account for human reasoning via that methodology. Nothing wrong with the pursuit, seems to me. In fact, lots of our reasoning can be accounted for via a probabilistic analysis and the resulting inferences. I just don’t think we can reduce human reasoning – and certainly not rationality – to that framework. (Which is a different critique than the one VD offered.)Report

              • Avatar Stillwater says:


                But no…*handwave* humans can do all that on the fly, if they just know the math. So says ‘Rationality’.

                Maybe this will help clarify what Murali’s interested in: He’s not saying that people laboriously determine their probability assignments in advance of acting on a belief. He’s saying 1) let’s assume that people are (or are at least capable of being) epistemically rational; and 2) here’s a description of what epistemic rationality actually is (it’s when, and only when, pE = pC).

                Well, the first claim strikes me as pretty banal, but some folks might wanna get off the bus at that point. The second claim is doing all the work. And the more I think about it, given his responses in this thread, I don’t think it’s all that objectionable, to be honest. At least in principle. My problem with it is that if the description is correct, then pretty much all of us are either massively epistemically irrational (cuz we can’t know all the relevant E), or we’re trivially epistemically rational (cuz subjectively it’s super easy to get both sides of the equation to balance).Report

              • Avatar Jaybird says:

                It strikes me as a fairly subtle begging of the question.Report

              • Avatar Stillwater says:

                In what way? By discounting preferences as a form of irrationality?

                If so, I think I agree. I mean, you have a preference for less-rather-than-more gummint, for example. Personally I don’t think in your case that’s just an aesthetic choice, but if it were merely an aesthetic choice I don’t see how discounting it as irrational could be non-circularly justified.Report

              • Avatar Jaybird says:

                By discounting preferences as a form of irrationality?

                Yeah, exactly.Report

              • Avatar Murali says:


                You’re missing my point. I don’t think epistemic rationality is an imperative of any sort (unless purely in an instrumental kind of way, in which case it is only useful if it achieves the goals in question). My project is to develop a purely descriptive account of epistemic rationality (and to argue for revising our way of talking about epistemic rationality in order to fit with this new descriptive notion).Report

              • Avatar Murali says:


                You make 2 points here. Let me deal with them separately.

                1. Epistemic rationality, insofar as we’re talking about inferences here, would be constrained at the limiting end by the requirement that the conclusion cannot be held with any more certainty than, but could be held with less certainty than, the premises (or as you say, evidence).

                You misunderstand what I’m saying here.

                Suppose you are 90% confident that P. You are also 90% confident that if P then Q. Rationality requires not only that your confidence in Q be no greater than 81%; it also requires that it be no less than 81%.

                Here is another example. Suppose you are 100% confident that the objective probability of an Event A occurring is 0.85. Suppose that if and only if A, then B. Your confidence that B is the case has to be 85%.

                My point here is that the content of the premises also contributes to the confidence in the conclusion. Together, both the content of the premises and the confidence in those premises are sufficient to fix the confidence in the conclusion.
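Read the way the arithmetic assumes (independence, and Q being reached only via P — assumptions the examples leave implicit), the two worked examples come out as claimed:

```python
# Example 1: 90% confident that P, 90% confident that P -> Q.
# If Q obtains exactly when both hold, and the two confidences are
# independent, the induced confidence in Q is their product.
p_Q = 0.9 * 0.9
assert abs(p_Q - 0.81) < 1e-9

# Example 2: certainty (confidence 1) that the objective chance of A
# is 0.85, plus "A if and only if B", transfers that chance to B.
p_B = 1.0 * 0.85
assert abs(p_B - 0.85) < 1e-9
```

Whether rationality really pins the confidence to *exactly* these values, rather than merely bounding it, is the point under dispute in the thread.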

                2. Let’s grant that for pragmatic reasons, it is reasonable to fix the confidence that my bathroom floor is real at 1. Given the situation that you have described, you are already conceding my point, namely that your surety that the floor is real is driven by pragmatic and not epistemic considerations. Now, this need not be bad in any substantive sense, but since an inference is epistemically rational only when the conclusion is determined solely by the evidence a person possesses, you are epistemically irrational (though not necessarily practically irrational or immoral) to the extent that your beliefs are not driven by epistemic considerations.Report

              • Avatar Stillwater says:

                Ahh. Good. Got it (I think). That response helped narrow things down a bit.Report

      • Avatar Kim says:

        god, that’s dumb. Who ever said that humans were in any shape or form rational, anyway?
        Let’s take one issue: sexual preferences. That changes on a week to week basis based on freaking HORMONES.

        NEXT issue: sexual chemistry — that changes not just based on the subject’s immune system, but based on the evaluator’s as well.

        I’m pretty certain that people’s morality changes DRASTICALLY as they get stressed. If you’re dehydrated enough, you’ll drink a man’s blood to survive–literally bite in and drink.

        Expecting humans to act rationally is silly.Report

        • Avatar Burt Likko says:


          I’m pretty certain that people’s morality changes DRASTICALLY as they get stressed.

          I agree with this sentence. Indeed, lots of folks would be surprised-to-alarmed at how little stress it takes for a person’s morals to morph into something that the subject person would previously have condemned.

          Getting to observe that happening is a particularly unsavory treat, regularly served up to practitioners of my profession (as well as several others).Report

          • Avatar DensityDuck says:

            It’s like that old joke about how civilization is two missed meals and a cold night away from barbaric tribalism.Report

      • Avatar Michael Drew says:

        *Ok, not the only one. You could also say that the other person is too stupid/irrational to make the correct inference even if you shared all information.

        First, why does stupid need to be in there at all? All that’s necessary is the irrationality. If they were stupid but rational enough to come to the correct conclusion in that case, their stupidity would be irrelevant. And if they were not stupid but too irrational in that case to be right, their lack of stupidity would likewise be irrelevant.

        Then, you missed a whole category of belief maintenance: you can also maintain your beliefs by saying that they are morally correct (or correct for some other reason), while being irrational or not necessarily rational. They’re just right. People can and do maintain beliefs irrespective of the rationality of the beliefs all the time – both unconsciously and consciously.Report

  6. Avatar Stillwater says:

    TPM had this story up this morning (I think) with the headline “Scott Walker: Mandatory Ultrasounds Are ‘Just A Cool Thing’ For Women”.


    So I read the article, and – of course – he said no such thing. My take on it was that he thinks ultrasounds are cool (not necessarily sharing them), but that’s incidental. He certainly never said what whacked-out lefties claim he said. (And this was at Josh Marshall’s place, which really bums me out…)

    “If you construct a big enough myth NO ONE can tear it down!”

    (Which isn’t to say the War on Women isn’t – or at least wasn’t – a real thing.)Report

    • Avatar Stillwater says:

      And points to Tod for his theory that lefties are increasingly following righties into Full Metal Jackass news mode. The fact that this story was at Marshall’s blog – a guy who is even keeled and rational even as he openly admits to a liberal bias – is sorta shocking.Report

    • Avatar zic says:

      wait a minute; walker just signed a law mandating an unnecessary medical procedure, and defended it by deflecting to ‘ultrasounds are cool.’

      Now I realize that being adept at deflecting questions you don’t want to answer is a mandatory political skill if one desires to be politically successful, but I also think that the deflection stands as the answer to the question; and this defense, Stillwater, is really taking his answer out of context. He went on about how cool they are when asked specifically about the law he signed.

      If we can’t hold Walker to his answer, when can we hold a politician to their deflection-instead-of-answer? And when do all the women forced to not only get an ultrasound but to listen to non-medical advice during their enforced wait times get the option of saying whether they think it’s cool or crass?Report

      • Avatar Stillwater says:

        Hey, just a few minutes ago I said that the War on Women is a real thing, didn’t I?

        Personally speaking, zic, I don’t give a rats ass what he said in defense of the law he just signed cuz I don’t agree with it. But let’s be clear about what he did and didn’t say. And he never said that “mandatory ultrasounds for women are a cool thing”. What he did say is that giving more information to pregnant women (in the form of mandatory ultrasounds) will curb their enthusiasm for getting an abortion.

        And he may be right about that, actually. I just don’t agree with the premise.Report

        • Avatar zic says:

          And he may be right about that, actually. I just don’t agree with the premise.

          This is the tired, old myth that women don’t carefully consider things, and just run off for an abortion on a lark.

          I can live with nitpicking the difference here between implied and said. But he implied by deflection.

          Lemme ask you: he’s totes against any public funding for abortion. So who’s gonna pay for that ultrasound? And then there’s the wait after the ultrasound.

          I’ve had those; they aren’t cheap. Two days off work ain’t cheap. It’s not just making their motherly hearts awaken and swell with love when they see the little heart beating; it’s making sure they can’t afford it.Report

          • Avatar Stillwater says:

            Dude, I’ve never been a pregnant woman!

            Why do you give a s*** what he says? You disagree with him, yes? So why focus on how he’s not answering the question? Do you think you’ll change his mind? Someone else’s mind who believes whatever Walker says?

            Really, zic, I have no idea what the hell you’re talking about. Walker said some stupid shit for political purposes. So there’s two things going on. (1) He said some stupid shit, for (2) political purposes. What more than that is there to discuss?Report

          • Avatar Will Truman says:

            The theory that ultrasounds change minds has been tested. When tested, the result is that they change minds in only a very few cases, and then only cases where the woman was on the fence. So the underlying premise of the law is wrong.

            That is true right alongside the fact that what Walker said was mischaracterized, in the headlines and elsewhere, rather deliberately.

            There’s not much conflict between these truths.Report

            • Avatar Kim says:

              One could offer optional (and free!) ultrasounds to women who are actually on the edge. If the guvmint’s gonna pay for it, and women can choose, I’m a hell of a lot less pissed at the whole thing.Report

              • Avatar Will Truman says:

                Some anti-abortion groups are trying to get their own ultrasound machines to offer free ones to women who want them. The one in Arapaho was holding a fundraiser for one while we were out there. I find it pretty hard to object to that, though some people do.Report

              • Avatar NoPublic says:

                I don’t find it hard to object to that at all.
                Ultrasound machines are a medical imaging device, to be used only in medically appropriate circumstances at the prescription of a medical professional and in the hands of a trained sonographer. The images they produce should be interpreted by a trained sonographer or radiologist.
                None of which you’re likely to find in the “crisis center” where they will undoubtedly be providing this service.
                Don’t even get me started on the Keepsake Ultrasound business either.Report

        • Avatar veronica d says:

          @stillwater — The headline could have been better. In fact, I wish we were all better about this.

          There is plenty of nonsense in the media. I understand why you want to talk about that. Fair enough.

          However, what Walker is suggesting is pretty fucking terrible, and we can criticize the headline and at the same time find Walker’s response about “cool ultrasounds” to be utterly reprehensible.

          The fact is, he did try to minimize his abusive nonsense by saying how “cool” the procedure was and how we little gals should like it so much. That’s really fucking gross, and honestly I don’t think the headline is too far off from describing why women are offended, even if it misses the mark on technical truth.

          Like, for an over-the-top example, if I were charged with beating someone severely, like seriously hurting them, and when questioned about it in the media, if I began to rhapsodize about my boxing days and the smell of leather and the sounds of fists hitting bags and all of that —

          While the family of my victim is listening, knowing the severity of their loved-one’s injuries. Let’s imagine that my victim will never walk again.

          That would seem off to people. It’s not that I’m wrong to love boxing gyms; they’re really neat places. However, it is callous to talk about that when the topic of the conversation is a person who I severely hurt.

          So it goes for Walker. When being questioned about his brutal, invasive policy positions, he starts talking about how lovely an ultrasound really is —

          Except he’ll never be forced to get one against his will. And consent and bodily integrity are about choice.

          After all, sex is also a lovely thing, but I get to choose when and with whom. Women get to control their bodies.

          For an even more over-the-top example, let’s imagine the subject was rape, and the rapist began to explain how nice sex is.

          The rapist is not wrong. But still, what a fucking thing to say at that time. It’s missing the point in an egregious way.

          Honestly, I think the headline was fair. The man is a fucking creep.Report

          • Avatar DavidTC says:


            I’ve been sitting here rewriting a post trying to explain why his comments, while completely fine and correct if he was just randomly asked about ultrasounds, or if he’d just signed a bill making ultrasounds free or something, are nevertheless completely offensive in *this* context. But I couldn’t figure out how really to say it.

            It’s actually astonishing how often right-wing pols literally seem to have no idea of consent. Like, at all. People will talk about something happening *without* consent, and they will make an analogy to something *with* consent. Or vice versa.

            It’s like they think consent is the difference between properly- and under-inflated tires…of course they *want* properly inflated tires, but, in practice, the car basically operates the same way. It’s not some huge difference, and people who run around demanding constant air-pressure checks are being silly. A completely flat tire, sure, can’t drive on that, at least not very far, but this entire thing just isn’t *important*.

            A lot of time this is hidden behind pro-life platitudes, where the claim is that they don’t *not* care about consent, they do care about it but additionally care about the life of the baby. But the problem is, remove the life of the baby from the equation, and they act *exactly the same way*, both in laws, and how they actually talk about…everything.

            I mean, I’m almost forced to just *assume* that most of them, hopefully, have some sort of actual ‘I should not have sex with women actively saying no’ switch in their head, but that’s about as charitable as I can get there.Report

          • Avatar zic says:

            Thank you, @veronica-d

            this is exactly what’s wrong with picking on the technicalities of the headline instead of the spirit of Walker’s response.

            It’s like answering the charge of child slavery and forced labor with, well, but cheap clothes and food are cool or responding to concerns of police brutality by pointing out how awesome cop cars are.Report

          • Avatar Doctor Jay says:

            We can debate whether or not the headline was fair, I guess. I don’t know that I agree. However, what you have written here is a much, much better description of what was wrong with what he said than any headline.

            Really, it’s miles better. It comes from pain, as Joss Whedon says.

            There’s no headline version of what you wrote. That’s ok for me. I don’t like headlines, and, not coincidentally, I don’t like Twitter.Report

          • Avatar Stillwater says:

            veronica d,

            That’s one thing to talk about, for sure. In my mind, however, it begs all sorts of questions against your interlocutor (eg, “why does he shift the topic to the coolness of ultrasounds? Cuz he’s slightly sociopathic, that’s why!”) but even more importantly – for me, at least – what you wrote (and zic wrote above) amounts to rejecting his position on abortion and ultrasounds because of a personal judgment of his character. Not his beliefs, seems to me, but who he is as a person (eg, exactly the kind of guy who’d talk about how awesome boxing is when questioned about seriously injuring someone).

            What you’re effectively saying is that he’s at least slightly sociopathic. But how can that view actually address any of the actual topics in play without begging not only the substantive questions but the moral ones as well?

            Seems too convenient to me to be considered a complete answer to the issues we’re discussing.

            And just to be clear, I’m not saying you’re wrong to view it as you do. I’m just expressing why I do not view it that way.

            Adding: I also want to clarify my earlier comment to zic – one that sounds more antagonistic than I intended. When I asked her why she cares what Walker says, given that she disagrees with him, I was actually being serious. Seems to me that disagreement is enough in discussions like this, and crawling in someone’s head to attribute nefarious motives to them not only misses the real point in play, but begs all the questions in play as well.Report

            • Avatar zic says:

              @stillwater he signed a freakin’ law that limits women’s rights to control their own body parts at some great expense to those women, and he justified it with how cool it is that people who want babies show their ultrasounds around.

              That’s freakin’ cruel.Report

              • Avatar Stillwater says:


                I know. He’s opposed to abortion.Report

              • Avatar Stillwater says:

                I was responding only to the first paragraph, zic. Not the second. My above argument explained why I won’t endorse the contents expressed there.Report

              • Avatar zic says:

                @stillwater I don’t care what Walker says, I do care what bills he signs into law.

                More to the weirdness of this discussion, I care that you let him pass on a technicality and criticize a headline (like they are always accurate) as a problem when it better reflected what he said than your dissing of the same headline does.

                Headlines paint with a broad brush, and they paint impressions; this headline painted an accurate one, even if it wasn’t the exact words Walker spoke, it certainly paraphrased his meaning quite nicely.

                Coulda been better; but we’ve all seen much worse, too.Report

              • Avatar Stillwater says:

                I responded to the headline because that’s what this post was about. I agree with Burt. Walker never said what lefties are attributing to him.

                Does that mean I defend his two year old bill mandating ultrasounds for pregnant women?

                I hope not.

                “Between the truth and the emotion falls the shadow”.

                (Heh. I kid.)Report

              • Avatar Stillwater says:


                even if it wasn’t the exact words Walker spoke, it certainly paraphrased his meaning quite nicely.

                In the same way that I “let [Walker] pass on a technicality and criticize a headline”?

                See, that’s the thing about reading beyond the words expressed. From my pov, I was just commenting on (what I understood to be) exactly the point Burt was getting at.

            Was I wrong to think that? Was I wrong to do that even if I was right? Was I just plain ole wrong wrong wrong cuz I’m the type of guy who’s inclined to give the Walkers of the world a pass?Report

        • Avatar Michael Drew says:

          he never said that “mandatory ultrasounds for women are a cool thing”

          Can we establish who said he did? I don’t think that TPM article did.

          What I saw being said is that he said ultrasounds are just a cool thing that’s out there.

          Which is what he said.Report

          • Avatar Michael Drew says:

            My mistake. “Mandatory” in the headline.Report

            • Avatar Stillwater says:

              MD, yeah, my beef is with the headline, primarily, but also with the “argument” establishing why the headline is an accurate representation of Walker’s views. Just shoddy – SHODDY! – journalism, seems to me.Report

              • Avatar Michael Drew says:

                Yeah, I had read the quote and just flat-out gave TPM too much credit. I had looked at the piece previously and genuinely didn’t remember “mandatory” being in the headline.Report

              • Avatar Stillwater says:

                And to Mike’s point about TPM descending into click-bait status, I woulda never clicked on the article if the headline accurately conveyed what Walker in fact said. The only reason I clicked on it was cuz I didn’t actually believe Walker said what they attributed to him.

                Et tu, Josh Marshall?Report

    • TPM is at least 90% clickbait these days. I still check it out of habit, but I can’t recall the last time they ran a piece that was of any real value. And yes, that’s horrible: these are the guys that broke the Bush US Attorneys scandal.Report

  7. Avatar Kazzy says:

    FWIW, I posted both of our ultrasound pics to Facebook and Instagram and fairly regularly showed them to people on my phone.Report

  8. Avatar Will H. says:

    I don’t think I’ll be eating when I get home.

    I suppose there are a fair number of people out there who view “pretty effing creepy” as a desirable quality in a President.

    Personally, I admire the Aussies for electing the record holder for skulling a yard.Report

    • Avatar Kim says:

      Nah, the record holder is Putin, for kissing a 6 year old boy’s belly button.

      Dude, that’s just freaking WEIRD.Report