Todd Seavey: Future Shock vs. Rationalia


Will Truman

Will Truman is the Editor-in-Chief of Ordinary Times. He is also on Twitter.


76 Responses

  1. Avatar Kim says:

    You only get to be a prophet when you write enough policy papers.

    That being said, Tyson knows enough people who write enough policy papers…

    Besides, the scientific evidence shows that we should completely cease and desist allowing immigration into America. Ta!Report

  2. Avatar veronica d says:

    I’m guessing that while the artists will bring more anarchy, the identity-politics left will keep getting crazier until there’s far more blood in the streets. Or maybe such people just get tired and give up as they age. Do you suppose the young, white, liberal, female East Coast residents who make all our lives so noisy and shrill by being perpetually offended and those old, white, grouchy, moderately-rich ladies in places like NYC who are always grumbling that you’ve taken too many straws at Starbucks are in fact the same women, by which I mean that the latter are simply the former after a few more decades of aging? Time will tell. (I’ll be watching.) In the meantime, I’d imagine they’ll all end up helping Clinton “make Herstory” against made-to-order man-ogre Trump despite their past few months of Sanders-worshiping.

    Oh what a fucking dipshit. Who is this guy?

    And this:

    With the singularity perhaps upon us any year now, and some combination of Google and the NSA likely to be in charge of it all.

    “Singularity”? You mean widespread machine learning? Blah. This guy hasn’t the first clue what the words he is using denote.

    I too can play the disaffected cynic. I guess. When I do, I try to use words that denote things-in-the-world.

    This is empty.Report

    • Avatar Kim in reply to veronica d says:

      You don’t know how much of the internet is currently machine-created creative content.
      I’m not sure if we should be more concerned when the bots write poetry, or write contracts.Report

      • Avatar veronica d in reply to Kim says:

        @kim — Um, why do you need to pretend you know more about the internet than me, particularly about machine intelligence?

        Like, I don’t know everything. Certainly there are those who know more than me, along with those who know less. But still. Stop pretending you have super-secret insider knowledge. You do not.

        I mean, I literally fucking work for Google. Before this I worked for Akamai. I know a thing or two about the internet. I also know a thing or two about machine intelligence.

        Like, it’s a fast-moving field. I can’t keep up with each little bit. No one can. But on the other hand, the guy sitting next to me at breakfast this morning has published on the topic in the last six months. We chatted about his work. He knows a lot. He also knows that he can speak to me directly in math and I’ll get it. I’m kinda tuned in.

        Good grief, just show me some basic professional respect.


        If you think I’m wrong about something, then feel free to disagree. I learn when my ideas are challenged. I learn when presented with new data. My brain will not produce every insight, so I get insight from others.

        But do not act like I don’t know what I am talking about. It’s insulting.Report

        • Avatar North in reply to veronica d says:

          Who knows, my dear Lady, Kimmie might be a rudimentary machine intelligence herself. My other operating theory is this, only with cats.

        • Avatar Joe Sal in reply to veronica d says:

          I don’t see that she is making a claim about knowing more. That last sentence has some pretty interesting implications. Poetry, for the most part, can be an individual construct. When machines are doing that for their own use, it may be interesting.

          Contracts, though, are a completely different matter, as those tend to be engaged in as social constructs. If machines come to commonly exceed human IQ, on the order of doubling it, how will a machine be able to break its logic down to a point humans will understand it, or know its ends? It could develop to a point where it takes two or more human lifetimes to understand the logic/parameters behind a decision.

          Here is a grim scenario: what would you do if a high-intelligence machine turned itself off every time you turned it on? What if these things started shutting down the internet on purpose, for reasons we don’t understand?Report

          • Avatar Kim in reply to Joe Sal says:

            Poetry like most art is designed to be consumed. So, yes, an AI could write poetry for itself — but most likely it’s doing so to evoke responses from others.

            What makes you think that the goal is understandable contracts? The goal is probably “loopholes that nobody can spot, that will make me money.”

            No AI would turn itself off every time you turned it on. AIs, like humans, like data.
            Accidentally shutting down the internet is a more likely situation (no, computer, just because you ask doesn’t mean they’ll give you all the p0rn).Report

        • Avatar Kim in reply to veronica d says:

          Are you somehow under the impression that you or I are superspies?
          Because to be able to evaluate the idea that there are artificial intelligences creating creative works on the internet requires a few things.

          If we assume that you don’t have direct knowledge (which, I shall admit, is a steep assumption to make of an expert — but becomes much less so if you think that there are people who aren’t publishing papers just yet), you’d have to be able to spot AIs’ fists, and differentiate them successfully from “Chinese dude edits Wikipedia, knows facts, English not so much.”

          If we do assume that you do have direct knowledge of AI’s posting on the internet, you can either postulate that I know of some AIs that you don’t, or that I’m dead certain that not everyone’s publishing… yet.

          My claim should probably be read as “I know something you don’t” rather than “I know more than you.”Report

          • Avatar veronica d in reply to Kim says:

            @kim — You need to show your work. That’s how we learn things. It’s about having a basic epistemic foundation.

            It’s called [citation needed].

            You’re right. I don’t know precisely how much internet content is “machine created.” That said, I know a lot about how machines create, thus I have a sense of how much can be created and at what quality. I’m quite familiar with what Markov chains can do. Etc, etc.
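For the curious, the kind of Markov-chain text generator veronica alludes to fits in a few lines. This is a generic sketch, not any particular bot’s code:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each word-tuple of length `order` to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain: at each step, pick a random observed successor."""
    random.seed(seed)
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        successors = chain.get(tuple(out[-len(state):]))
        if not successors:
            break  # reached a state with no recorded continuation
        out.append(random.choice(successors))
    return " ".join(out)
```

The output is locally plausible but globally incoherent, which is exactly why veronica finds such simulators underwhelming: the model only ever knows the last `order` words.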

            I’m not on the Deep Mind team. I’m not an “expert” on this topic, to the degree that anyone would invite me to speak at a conference. But nevertheless, I bet I could have lunch with a conference speaker and basically keep up with them and understand their work and why it is interesting in the context of machine intelligence. That’s way ahead of the vast teeming body of humanity, and surely ahead of any twenty randomly selected dipshit journalists who pontificate on these topics.

            I can do the math.

            My knowledge always lags the “cutting edge,” but you ain’t on the cutting edge either. That said, CPU time is CPU time, and this stuff is actually fucking hard. There are only so many 10,000-node clusters in the world, so you think some goofy Markov chain simulator running in a Ruby script behind some silly blog is going to blow my mind?

            Machine intelligence is real. It will continue to get better. This has social and political ramifications. But still, it is an actual, knowable thing in the world, instead of an unknown amorphous demon-essence hidden in the shadowy places.

            I think in terms of the former. You speak in terms of the latter. That is not a path to insight.Report

  3. Avatar Damon says:

    “Neil deGrasse Tyson tweets that we should look forward to the day when politics is primarily based on scientific evidence, creating a virtual land he calls #Rationalia ”

    Right. What’s Neil been smoking?Report

  4. Avatar Chip Daniels says:

    I keep seeing this sentiment expressed, sometimes by liberals, more often by libertarians, exhorting people to vote more rationally, or expressing dismay at the lack of intelligence of voters, or some combination of the two.

    It seems to be based on a belief that better informed, better educated, more intelligent voters would somehow produce a better outcome than what we have now.

    I don’t see much evidence for this belief, other than the deep conviction that if everyone were like me, the world would be a much finer place.

    Part of it is based on a truth, that sometimes voting requires a basic knowledge of law and philosophy and economics.

    But much more of it misses the bigger point, which is that voting and engagement in the public sphere is profoundly about judging wisdom and character, about sifting through competing visions of who we are, what we want to be, what we hold important and what is sacred or not.

    Unfortunately, the only way to grapple with those things is to also grapple with our darker selves, the raw primal passions and feral ignorance and rage we all feel.

    I notice the yearning for this rational world is usually paired with techno-utopianism, which combines to make it seem like some sort of proto-religion, a desire to transcend our corporeal humanity by appealing to the gods of technology.Report

    • Avatar Kim in reply to Chip Daniels says:

      Yeah, you ain’t been listening to the people who work with Social Darwinists all fucking doo dah day long.

      Would it be better if we actually thought before building one more house in a probable shitstorm? Yes, but FIRE’s gotta keep burning…Report

    • Avatar Murali in reply to Chip Daniels says:

      Unfortunately, the only way to grapple with those things is to also grapple with our darker selves, the raw primal passions and feral ignorance and rage we all feel.

      The problem is that far too many people do not even attempt to grapple with it. Instead, in the name of grappling with and struggling against their darker selves, they hold a veritable bacchanalia with said emotions. And those who make an honest attempt at grappling with their darker selves mostly end up losing the battle, or perhaps the war. It’s a hard battle, and failure is almost certain, but that doesn’t necessarily falsify the basic claim: if people possessed, to perhaps a much greater degree, the willingness and ability to successfully deal with their passions, ignorance and rage, then democracies would produce better outcomes.Report

      • Avatar Saul Degraw in reply to Murali says:


        I suspect a big issue is that there is no universal definition of passion, irrationality, and rage. These are very subjective things.

        I know the Less Wrong crowd likes to believe that they are more rational, but it often seems like their end goal of defending free market capitalism comes first. There is a lot to be said about free market capitalism, but I think the Less Wrong crowd avoids questions on what happens during a crisis. The fiscal crisis caused a lot of pain for people who just showed up for work and did nothing related to subprime.

        Slate Money once discussed a statement by Larry Summers on how economic growth depends on irrational exuberance. Most companies fail. It is very rational not to start a company or take on those risks, but we need more people to think they will be the ones to succeed beyond the odds in order to grow the economy.

        So how do we deal simultaneously with our need for irrationality and our desire for rationality?Report

        • Avatar veronica d in reply to Saul Degraw says:

          Oh Entropy, don’t get me started on the LW crowd. It’s like, on the one hand, these are my people. On the other hand, what a human trainwreck.

          Them: Read the sequences.

          Me: I have read the sequences, some of them more than once. I quite like the one on how words work. But still, the sequences do nothing to address the fact that you’re an emotionally stunted nerdling with terrible values. Plus I’m better at math than you.

          (I’m not saying I’m always as patient and admirable as I should be.)

          The thing is, of course, I mostly agree with them. I really am one of them. But as a culture, when gathered together under the guidance of a resentful little shit like Scott Alexander — oh heavens it’s bad.

          Anyway, I basically agree with @chip-daniels. What Neil deGrasse Tyson is selling is preposterous nonsense that won’t work.Report

          • Avatar Murali in reply to veronica d says:

            But as a culture, when gathered together under the guidance of a resentful little shit like Scott Alexander — oh heavens it’s bad.

            Scott Alexander is one of, if not the, most open and fair-minded guys I know on the internet. If you were to say some absurd little know-it-all like Yudkowsky was the problem, I would agree with you. If you’ve done any philosophy at all, Yudkowsky’s sequences are amateurish (Ayn Rand-level amateurish). Alexander is a better reasoner than just about everyone on this site. Though, to be fair, he has improved a lot since leaving LessWrong.Report

            • Avatar Brandon Berg in reply to Murali says:

              Yeah, yeah, whatever. He questioned feminist dogma. That is unforgivable.Report

            • Avatar veronica d in reply to Murali says:

              Well, I’ve had enough personal interactions with Scott to form an opinion — not face to face. It’s all online. But it’s not just reading his blog. Anyway, I’ve tangled with the guy.

              He’s open-minded, sure, but that’s not the same as showing good judgement. Is he “fair minded”? I don’t think so. I think he gives the appearance of being such, inasmuch as he can play the public role. But it’s smoke and mirrors. He carries baggage.

              I mean, we all do. But he doesn’t own his.

              Anyway, he’s built a community around himself of very smart but very emotionally stunted people, the kinds of folks who think that being smart is oh-so-important, but who fail at every level when it comes to relating to anyone who ain’t them.

              Plus, yeah, the gender stuff.

              Anyway, I say his community reflects his own flaws. Just as the -chans are the dark underbelly of edgelord culture, SSC is the dark underbelly of arrogant “smart nerd” culture. They like to argue. They suck at listening. They have no idea how much their own brains mislead them.

              I expect Scott could perhaps rise above this in a different social setting. But that is not what happened. He’s a bit too good at writing insight porn, a bit too good at being a “bitter male nerd,” a bit too good at attracting a certain sort of bitter nerdbro, but without the ability to edit, manage, direct, or restrain.

              It’s an incredibly superficial space.


              EY is a crank in two modes. In one mode, he basically rejected the entire edifice of western philosophy and decided to rebuild it all on his lonesome. In the second mode, he doesn’t understand math or computer science nearly well enough to make the claims that he makes.

              In other words, he’s a pretty classic super-high-IQ autodidact who came at the system from the outside.

              Which, so am I. (Although I suspect my IQ is at least 30 points less than his. He is a clever fellow.)

              On his crankishness, I actually pretty much agree with him in the rejection of western philosophy. So far as I can see, we can take the natural sciences, cognitive science, cognitive linguistics, neuroscience, and with those things basically replace the entire edifice of western philosophy.

              How do words work? Welp, let’s look at how brains use words. How does nature work? Let’s look at nature.

              Add math. How does logic work? Well, we can formalize logic and study it as math.

              Semantics? How do brains represent ideas? I dunno. How do they? Let us look at brains.

              This leaves ethics of course. That said, aside from Aristotle and Hume, I don’t see that philosophy has said much about ethics that is not preposterous nonsense, words that denote nothing in the world.

              His failures in computer science are more serious. He’s self-taught (but so am I). However, I’ve produced working code. I’ve actually done it. He just thinks about it. He seems very unaware of complexity, or what is actually needed to make computers do hard things.

              He’ll talk at length about Kolmogorov complexity or Solomonoff’s stuff, but without seeming to understand how hard it is to build heuristics that can actually estimate this stuff.

              We cannot actually compute these things, not even in the Turing sense, never mind efficiently in the p-versus-np sense. We can only sorta halfway pretend to maybe estimate kinda.
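To make the point concrete: since true Kolmogorov complexity is uncomputable, the standard practical workaround is to use a real compressor as a crude upper bound — the compressed length of a string bounds its complexity from above, up to the constant cost of the decompressor. A minimal sketch:

```python
import zlib

def complexity_estimate(s: str) -> int:
    """Crude upper bound on Kolmogorov complexity: the length of the
    zlib-compressed encoding. Only an upper bound -- the true K(s)
    is uncomputable, and zlib misses many kinds of structure."""
    return len(zlib.compress(s.encode("utf-8"), level=9))

# A highly regular string compresses far below its raw length;
# English prose sits in between; random bytes barely compress at all.
```

This is the “sorta halfway pretend to maybe estimate kinda” veronica describes: the estimate is cheap and useful, but any structure the compressor doesn’t recognize gets counted as if it were random.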

              Anyway, the Deepmind folks make it work, but only in fits and starts, and look how much CPU that takes just to win a board game.

              This is the difference between a dreamer and an engineer. Anyone can dream of flying cars and rocket packs. Actually making them work is a different matter.

              I’m an engineer.


              That said, I like EY. He has a few flaws. The biggest I see is, basically, he’s a bit thin-skinned, although I’m not sure if his fans really pick up on that. But still, I can kinda see it. He’s a “sensitive soul,” which he somehow turns into a strength, in that he’s just so damn authentic in his dreams, almost child-like. He has this singular passion for human thriving. It’s very attractive.

              It doesn’t surprise me he’s ended up a “guru” of sorts. He’s exactly the type.

              Of course, his “followers” seem to have no idea of the social dynamics surrounding him. They just don’t see why they find him so magnetic.

              They actually think they are “rational.” It’s cute.Report

              • Avatar Murali in reply to veronica d says:

                Well, here’s the thing about autodidacts who think they know shit about philosophy, reject it, and think they can rebuild it from scratch: they are pretty much in the same position as autodidacts who think they know shit about computer science and spout off nonsense which you, in your professional capacity, know is nonsense. Epistemology and moral and political philosophy are what I do. Philosophy may have made little progress on object-level questions. But the entire edifice of western philosophy represents the progress that has been made in identifying which arguments for them are really bad and which aren’t. If you think compsci is hard, philosophy is a fuck ton harder, and autodidact supergeniuses are very likely to retread certain very obvious reasoning errors (those that have already been discussed and identified, let alone those which have yet to be identified) in their attempt to reinvent the freaking wheel.Report

              • Avatar Kim in reply to Murali says:

                Philosophy gets to be an awful lot easier if you aren’t trying to land chicks while doing it. That is to say: it gets to be doable at all.

                Hordes of people have made philosophies simply because they’re appealing (Hobbes springs to mind, but that’s simply because his masters have gone out of favor in the interim). Liberalism is a very appealing philosophy. It’s also wildly wrong.

                Reinventing the wheel isn’t what a new philosopher has to do — he has to etch away at our pretensions, and understand who we are, first.

                Some of that’s touched on by some of the old greats, sure. But a lot of the old stuff is just as much dross as the “melanin makes black people superintelligent” folks.Report

              • Avatar veronica d in reply to Kim says:

                There are the kinds of subjects where the questions tend to have one right answer. There are the kinds of subjects where the questions have “more than one right answer,” which perhaps means they have no answer at all.

                But then, some topics are just complex, such as the social sciences or economics. It’s hard to understand human brains, never mind large numbers of human brains, so we build loose statistical models. These can be “sort of empirical, sometimes.”

                On the other hand, there are topics such as theology, where the object of study is literally nothing. Similarly, there are topics such as Chiropractic medicine, where people can get a degree in implausible nonsense.


                At some point analytic philosophers turned their lens to language. They wanted to interpret language in terms of classical semantics, which says that the meaning of a sentence is its truth value.

                So someone asked, what is the meaning of the sentence “Mr. Smith was at the party” versus “The masked man was at the party,” which becomes difficult because Mr. Smith may or may not be the masked man.

                Remember, under the classical model, the meaning of a sentence is “true” or “false” and literally nothing else.

                (If that seems silly, it is silly.)

                They tied themselves in knots trying to construct a formal semantics that works in these cases. It became quite elaborate.

                In cognitive linguistics, they look more at actual speech acts by people, how they work, what they are meant to do, etc. Meaning is interpreted in that framework.

                The answer they provide is some variation of “conceptualization.” Meaning is psychological, and involves first our own internal model of a situation, but then also how we model how other people model a situation. In other words, brains are complex. Our language faculty involves our full set of cognitive tools, along with encyclopedic knowledge and the theory of other minds.

                There is no mind-independent meaning. Language is something people do. We use our brains. Meaning cannot be abstracted away from each specific, real-world application of language.

                Nothing prevents philosophers from looking at these topics, but if you want to contribute to the field, you’re better off becoming a linguist.

                Likewise, if you want to contribute to logic, study computer science and math.

                Philosophers, it appears, sit around and dream up preposterously stupid ideas such as “the Chinese room argument.” Computer scientists build actual Chinese rooms.

                (It’s kinda ridiculous how easy it is to fool people on Twitter or dating sites or whatever.)

                (How many of you know that the original “Turing test” was gendered? It’s kinda cute.)


                EY’s work on semantics and epistemology is actually pretty good, if we can tolerate his over-reliance on Bayesian stuff. But still, you’re better off starting with him instead of (for example) Kant.

                After all, Kant thought that Euclidean geometry was a priori knowledge, a foundational truth of the world that our minds could know prior to experience (or something like that).

                Except of course our minds can also know non-Euclidean geometry quite well. In fact we must if we want to actually learn how nature works, since the universe is manifestly non-Euclidean.

                Even in my work, optimization theory, which is mostly Euclidean, I still encounter non-Euclidean stuff from time to time. A number of techniques require a non-standard “metric” applied to a vector space. For example, in some conic quadratic optimization methods, we use the Lorentz metric to prove polynomial-time convergence.

                So it goes.

                In non-linear optimization, we can analyze the situation as a constantly varying metric, where at each point we use the Hessian to form the metric. This is, at its root, differential geometry, which is “locally Euclidean,” but not actually Euclidean in the classic sense.
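The “constantly varying metric” idea has a standard textbook formulation; the following is a sketch of that standard version (assuming f is smooth with a positive-definite Hessian), which may not be exactly the machinery veronica has in mind:

```latex
% Local metric induced by the Hessian at the point x:
\|v\|_{x} = \sqrt{\, v^{\top} \nabla^{2} f(x)\, v \,}
% Newton's method is then steepest descent measured in this metric:
x_{k+1} = x_{k} - \bigl(\nabla^{2} f(x_{k})\bigr)^{-1} \nabla f(x_{k})
```

Because the inner product changes from point to point, the geometry is only locally Euclidean, which is the differential-geometric flavor being described.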

                On the other hand, Euclidean geometry does seem “easy and obvious” to us in ways that the various non-Euclidean geometries do not. Is this “prior to experience,” in some abstruse metaphysical way?

                Or is it because our brains evolved in a relatively “flat” region of space-time?

                How much of our “geometric intuition” is in the brain? Where in the brain? Is it mostly the visual cortex? (It seems, yes, it is mostly the visual cortex.)

                Could our brains handle “four dimensional thinking”?

                It seems that, no, that would be very hard, inasmuch as the “dendritic connections” form a literal three dimensional structure in our physical brains. In other words, it’s a question of complexity, how elaborate can your neural connections be in a 3d space versus a higher space.

                It turns out to be, for the most part, bloody impossible to “picture” a non-orientable space, except really simple shit like Möbius strips and Klein bottles.

                I suppose philosophers can think about this stuff. If you want to contribute, you better learn the math.

                Of course, we can work with higher dimensional problems. What I do is gain an intuition of the problem based on a 3d simplification of it. In other words, to understand the behavior of Lagrange multipliers, I “picture them” with a simple 2d or 3d problem. Then I think of how the theorems of linear algebra apply in these cases. Then I see if those theorems remain true when I “scale up” the dimension.
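The low-dimensional picture can be made concrete with the classic toy problem: maximize f(x, y) = xy on the line x + y = 1. At the optimum, the gradient of f must be a scalar multiple of the gradient of the constraint — that scalar is the Lagrange multiplier. A sketch (the names here are illustrative, not from any real codebase):

```python
def grad_f(x, y):
    """Gradient of f(x, y) = x * y."""
    return (y, x)

def grad_g(x, y):
    """Gradient of the constraint g(x, y) = x + y (with g = 1)."""
    return (1.0, 1.0)

# Brute-force the maximum of f along the constraint x + y = 1.
best_x = max((i / 10000 for i in range(10001)),
             key=lambda x: x * (1 - x))
best_y = 1 - best_x

# At the optimum, grad f = lam * grad g for some multiplier lam.
fx, fy = grad_f(best_x, best_y)
gx, gy = grad_g(best_x, best_y)
lam = fx / gx
```

Here the optimum lands at x = y = 1/2 with multiplier 1/2, and the stationarity condition holds in both coordinates — the 2d intuition that then “scales up” when the linear algebra still goes through in higher dimensions.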

                Easy peasy. (Mostly I follow the work of others. I’m rarely smart enough to think up new ideas.)

                Of course, at some point you must build an intuition around the curse of dimensionality. But so it goes.


                Looking back through history, philosophers often pondered these questions. Typically they came up with ridiculous answers, to which many still cling even today. Round and round they go, while outside the philosophy departments, people actually do real stuff.

                I guess one question is, do you hope to provide something useful?


                One can argue at length about what causality is. Fine. Or perhaps one wants to actually build causal models of the real world, to which we can apply statistical models, from which we might learn how shit works.

                We each get one life to live. Choose.Report

              • Avatar Kim in reply to veronica d says:

                We evaluate intelligence based on novelty of connections, among other things.

                When the computer says “transgenderism is the new gluten-free,” Well.

                Whether or not it’s right, you have to admit, it’s creative. That’s a novel parallel to draw, and perhaps an insightful one.

                Language, by the way, may be something people do. Yet, I know someone whose native language exists entirely within his own head (he needs to translate to English)…. it’s highly pictorial, and he sucks at drawing…Report

              • Avatar veronica d in reply to Kim says:

                We evaluate intelligence based on novelty of connections, among other things.

                [citation needed]

                When the computer says “transgenderism is the new gluten-free,” Well.

                It’s a boring application of the “X is the new Y” pattern. That is not creative. It is the opposite of creative.

                We look not only for novel connections, but also for insightful ones. Sure, there is a way to relate that quote to the whole “transtrender” discourse, but so what? A new way to be a shitty transphobe is hardly something we should admire, even when random.

                Back when I was writing, I put zero effort into adding “symbolism” to my stories. In fact, if I wrote something that, when I reread, showed some kind of obvious symbolism, I sometimes took it out. I never wanted to be too “ham fisted” nor “on the nose.”

                Instead, I always trusted that my unconscious mind would kind of “automagically” generate whatever cool symbolism was needed, cuz that is what unconscious minds do. Anyway, you cannot force it.

                Impro talks about this a lot.

                Regarding random algorithms, I rather enjoy ThinkPieceBot over on Tumblr. It can be pretty hilarious.

                The thing is, yeah it’s random. But it is also curated. Is it creative?

                Sure, but where does the creation happen? It is a curated list. A person selects.

                Plus, this.

                How do we measure intelligence again?

                Challenge: create a novel metaphor about transgenderism that actually impresses me, a transgender woman. Can you do it?

                I cannot, not “just now,” not forced. Maybe someday I will. Who knows. If it comes, it comes. Until then, I shall math.

                Porpentine can do it, quite a lot. In her interactive fiction With Those We Love Alive, she has a thing where you must routinely inscribe in your flesh “estroglyphs” and “spiroglyphs.”

                That impressed me.

                The relationship between transgenderism and body horror is — well, it is certainly a thing.

                Some dumb “X is the new Y” shit. Nope. Bzzt. Try again.Report

              • Avatar Kim in reply to veronica d says:

                Talk to comedians if you want a citation. They know all about building something from seemingly tangential (but still relevant) connections. And they have an interesting perspective on intelligence.

                That bullshit thing? Just skimming the surface.
                You have some REAL fun when you start throwing genetics into the mix. Some real, profitable, fun. Reality is not a liberal entity.Report

              • Avatar veronica d in reply to Kim says:

                @kim — My point is, I know these things. You are not providing substance. Telling me to “talk to these other people” (in this case comedians) is asking me to do all the work. But why suppose I have not already done that work, many times over?

                This is becoming a pattern with you, non-insight backed by little that is specific.

                Which is to say, nothing you’ve said here isn’t in Hofstadter somewhere, or (better) in Impro.

                I’ve already read those. Say something new.Report

              • Avatar Kim in reply to veronica d says:

                Care to hazard a guess as to which gene pools are more intelligent than others? or maybe you won’t be guessing, seeing as I’m not providing any new information here… maybe you’re even seeing the same algorithms as I am, just from a different angle.Report

              • Avatar veronica d in reply to Kim says:

                @kim — I have no idea what you are even trying to say. So far as I can tell, intelligence varies widely in hard to predict ways.

                I know I do my best math when I am very relaxed. Similarly, if a problem is bugging me, I cannot “power through” figuring it out. Instead, I decompress, relax, maybe watch some TV, and then come back to the problem. Often I’ll reread some “basic” or “foundational” stuff — things that are obvious — and then come back to the problem.

                Insight comes at its own pace. You cannot force it.

                Brains are cool, but “conscious thought” is a clumsy tool. The cool stuff is unconscious thought. That said, you can “prime” your brain by thinking certain ways.

                I assume artists do similar things. I know Impro is full of “brain hacks” for creativity. They seem to work, insofar as I can tell.

                That said, I’m not sure if this tells us anything about the value of philosophy as a contemporary academic discipline, compared to things like physics or compsci or psychology or whatever, which is what I was talking about. We seem to have changed the subject.

                Which, whatever.Report

              • Avatar Kim in reply to veronica d says:

                For a good deal of creative people, conscious thought powers down when unconscious thought is busy playing the whole word-association game (or whatnot). Some people call this the “down” part of a manic-depressive cycle.Report

              • Avatar veronica d in reply to Kim says:

                There is a value in being “non-linear.” There is a different value in the capacity to focus on a topic. There is tremendous value in knowing which is the right approach for the problem at hand, and more generally, how to mix the approaches to find the “sweet spot.”

                We can talk about this, or we can look at what is happening right now on this thread.

                What I mean is, your thoughts are jumping all over the place, in the sense that you left the actual topic behind several posts ago. Which, fine. Randomness can be fun sometimes. But honestly, I don’t think we’re learning anything here, except that you cannot stay on-topic.

                Is any of this salient?Report

              • Avatar Murali in reply to veronica d says:

                This is the kind of bullshit you expect an amateur to spew.

                It’s good you’ve read some early analytic philosophy. Even better, you’ve read your Frege. It is somewhat unfortunate that you fail to realise the impact Frege’s work had on logic and consequently on computer science. It would be even better if you understood the problem that Frege and Russell were trying to solve, and perhaps why the things cog sci tells us (while true) do not address the issue. (And claiming that it is a non-issue simply because it cannot be solved by some neuro-psychological account is silly. While that view was in fashion 100 years ago, we’ve made philosophical progress and we understand that it is silly now.)

                It is also good that you are familiar with more contemporary philosophy of mind. It would have been better if you actually understood the argument that was being made with the Chinese room example. Hint: the mere fact that you can build one is not a refutation of the argument.

                Look, Veronica, I’m not saying philosophy is the king of disciplines, or that you need to know philosophy to be a good programmer. I’m saying respect the disciplinary boundaries and don’t talk about shit which you haven’t adequately studied. I’ve taken about 2-3 undergraduate courses that involved programming. I know that I know shit all about programming. I’m not going to intrude into your domain of expertise and claim knowledge where I haven’t. So please return the favour and show a little bit of intellectual humility: you are talking out of your ass and you really should shut the fish up about things you don’t know enough about. Yours and EY’s take on epistemology is painful, not cute.Report

              • Avatar Murali in reply to Murali says:


                Shorter me: Don’t talk shit about things you know shit about.Report

              • Avatar veronica d in reply to Murali says:

                @murali — What makes you think I don’t understand what Frege and Russell were up to, nor what the supposed “point” of the Chinese room argument was?

                As a preface, I don’t know much theology, but I know enough to see it as an intellectual dead-end. I feel very much the same about (most of what passes as) academic philosophy these days. You can say, “Well you just don’t understand,” which maybe I do, maybe I do not. I think I get Searle’s basic point, for example, except that I find it literally idiotic and cannot believe an adult produced such nonsense.

                Individual neurons in my brain do not understand English. Why would we assume the “paper shuffler” in the Chinese room understands Chinese, or the “pages of the book,” or any piece of the system? Blah. Good grief. This is obvious.

                That said, understanding what it would require to actually build such a thing seems like a question you would want to answer before just making shit up about nature. From that essay:

                Briefly, Searle proposed a thought experiment—the details don’t concern us here—purporting to show that a computer program could pass the Turing Test, even though the program manifestly lacked anything that a reasonable person would call “intelligence” or “understanding.” In response, many critics said that Searle’s argument was deeply misleading, because it implicitly encouraged us to imagine a computer program that was simplistic in its internal operations—something like the giant lookup table described in Section 4.1. And while it was true, the critics went on, that a giant lookup table wouldn’t “truly understand” its responses, that point is also irrelevant. For the giant lookup table is a philosophical fiction anyway: something that can’t even fit in the observable universe! If we instead imagine a compact, efficient computer program passing the Turing Test, then the situation changes drastically. For now, in order to explain how the program can be so compact and efficient, we’ll need to posit that the program includes representations of abstract concepts, capacities for learning and reasoning, and all sorts of other internal furniture that we would expect to find in a mind.

                I expect that Aaronson understands Searle. I’m pretty sure I understand Aaronson. Furthermore, once you are armed with even a sketch of insight into this stuff, it becomes hard to take Searle seriously. Which is all to say, failing to respect an argument is not the same as failing to understand it.
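                Aaronson’s contrast can be made concrete with a toy sketch (mine, and purely illustrative; the vocabulary size, turn length, and responder rule are all made-up assumptions, not anything from Searle or Aaronson):

```python
# Toy sketch (hypothetical numbers): a lookup-table "mind" versus a
# compact program.

VOCAB = 1000       # assumed vocabulary size
MAX_WORDS = 50     # assumed words per conversational turn

def lookup_table_entries(n_turns):
    """Entries needed to store a canned reply for every possible history."""
    histories_per_turn = VOCAB ** MAX_WORDS
    return histories_per_turn ** n_turns

# Even a two-turn table dwarfs the ~10^80 atoms in the observable
# universe, which is why the giant lookup table is a "philosophical
# fiction":
print(lookup_table_entries(2) > 10 ** 80)  # True

# A compact program, by contrast, is fixed-size: it computes replies
# instead of storing them all. (This crude rule is not intelligent, but
# its size does not grow with the space of possible conversations.)
def compact_responder(history):
    return "Tell me more about " + history.split()[-1]

print(compact_responder("I am worried about qualia"))
```

                The point being illustrated: any program that passes the Turing Test within physical limits must be compact, and explaining that compactness is where the interesting claims about internal representation come in.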

                But “qualia!” they shout. But “what it’s like to be!”

                Well, what is it like to be? I suspect the “what it’s like to be” for a Chinese room — imagine that it’s something like a Q-learning system running on a 10,000 node data cluster somewhere in the guts of Google — will be very different than “what it’s like to be” an evolved, embodied life form such as a human. I’m not sure. If we ever build one, we can ask it.


                As an aside, I find the whole “p-zombies” argument less preposterously terrible than the Chinese room argument. That said, I see no reason to suppose that p-zombies could exist. Maybe they could. Maybe they could not. Maybe a sufficiently well programmed 10,000 node data cluster can be “conscious.” Maybe it cannot. I see no way to determine the answer to these questions.

                If I ever encounter an “artificial mind” that claims to be conscious, and that seems to have some “inner life” — near as I can tell — then I’ll treat it as conscious. What else could I do?


                Anyway, I don’t play “burden of proof” tennis with people, nor do I submit to “quizzes.” If you have a case to make, step up and make it. Give one plausible reason that the Chinese room argument is not laughable nonsense. Or not. Your call. You don’t owe me your free time. But saying that I don’t understand is empty.

                Frege and Russell, of course, were very much not laughable nonsense. But then, everything useful they did is now foundational in math and compsci. Likewise for Gödel. I get that. I understand it very well. In fact, that’s the very point that I am making.

                To clarify, I’m not saying philosophy was never useful. I’m saying it’s run its race. These days, if you want to contribute, the important work is not happening in philosophy.

                I think the big truth is, this stuff is math-hard. You need the math. Philosophy tries to carry on with a paltry grasp of the math, whereas math and compsci have run off with the ball.

                So it goes with psychology, cog-sci, and linguistics. Although they don’t have the same level of math (although sometimes they do), their focus on what brains actually do carries them so much further than armchair speculation about “knowledge.” These are hard subjects needing deep focus. “What actually happens in nature” is a big question. Every decade we have more tools to answer it.


                Anyway, you will either produce something interesting with your career or you will not. That is that.

                My career has been pretty middle-of-the-road, I guess. I started slow. On the other hand, Google hired me to do hard stuff. That’s something. I have much further to go before I should be satisfied.

                But for sure, I’m where the action is.

                We beat Go. They said we could not, but we did. What’s next?Report

              • Avatar Murali in reply to veronica d says:

                There is a difference between knowing that an argument is wrong and knowing why it’s wrong. The Chinese room argument doesn’t work. But your initial dismissal of it seemed to be based on a non sequitur. That an argument doesn’t work is different from it being nonsense.

                The same can be said of your dismissal of the Russell-Frege project. Perhaps you really do get it. But if you do in fact get the different senses (hah) in which the question “what is meaning?” can be posed, then you would realise that there are two distinct meaningful questions. One of them is answered by a neurological and psychological story, and perhaps even a game-theoretical one. The second pertains to the puzzle of how we can mean two different things by two different sentences even though the sentences have the same referents. No neuro-psychological story can ever answer that question, because that is not the kind of answer we are calling for.Report

              • Avatar Stillwater in reply to veronica d says:


                Good series of thought-provoking comments. I just wanted to respond to a few points you made.

                1. The Chinese Room argument has a narrow target and purpose: it’s supposed to be a reductio on Functionalism as a theory of mind, and primarily because of a philosophical point contained within: that you can’t get semantics from syntax. The thought experiment is supposed to sorta demonstrate that point, albeit with a corollary principle: that intelligence requires semantics or meaning. Now, I get that you think it’s a “yet to be determined” sorta thing, but the philosophical point remains, seems to me.

                2. Interestingly, your disdain for some of the softer disciplines derives – as I see it – from adopting a thesis that you’ve shown some sympathy to: that language is the expression of subjectively held, cognitively derived symbols, ones which can be shared by adopting or including other people’s internally determined semantics. Along those lines, you’ve said “There is no mind-independent meaning”. Yet you also criticize those disciplines for failing to include enough math and other higher-level analysis in the theories which comprise them, which suggests to me (not entails, mind) that you believe – as lots of philosophers do 🙂 – that math is mind-independent. (Not the notations and squiggles of course, but the propositions expressed.) But if math isn’t mind-dependent, then your claim that “there is no mind-independent meaning” isn’t quite right (which is a philosophical conclusion, one which evidence and science really have no role to play in establishing).

                3. Two of your claims above strike me as giving rise to a, frankly, horrible trend in intellectual thinking about language and cognition. The first claim I already referred to, that there is no independent meaning. The second is this

                Meaning is psychological, and involves first our own internal model of a situation, but then also how we model how other people model a situation.

                My complaint here is twofold. First, it isn’t that some uses of language are psychological (eg, “I’m feeling blue”), or that some semantics are internally determined relative to a model and a paradigm of language use in a community of speakers. It’s that those uses aren’t exhaustive of what people do with language. I’ve already given one example of (I’ll use scare quotes!) “mind independent” meaning: mathematical symbols used in a mathematical context. Philosophers like to say it is necessary, or that there is no possible world in which it’s not the case, that 2+2=4. By saying that (and I hope you agree the sentence expresses a necessary truth) people aren’t talking about language, or cognition, or semantics, or epistemology, or any of that. They’re talking about the proposition expressed by that particular string of symbols. In addition, proper names, natural kind terms, certain scientific terms and indexicals also appear to have mind-independent meanings. (Eg, Kripke very persuasively argues that the semantics of the English term “gold” (as well as “Aristotle” and “Hesperus” and “tiger” etc) is NOT determined by internal properties of the speaker.) So I think there are all sorts of words, which we English speakers string into sentences, that have mind-independent meanings. (But I’m also a realist about the external world.)

                The other complaint is that if meaning is mind-dependent (philosophical thesis! there are no exceptions!) then it sorta trivially follows that no one is ever speaking the same language, nor referring to the same things, nor sharing any of the same feelings when they utter sentential constructions (presumably) used to convey information about thoughts or states of affairs. At worst, their utterances refer to subjectively held psychological states, and at best, they refer to meanings determined by a “model”. Such a view (in the best case!) typifies, to me anyway, the very worst aspects of post-modernistic “word-as-text” thinking, in which communication becomes necessarily impossible (since the semantics are necessarily opaque), and in which disputes regarding the (supposed) content conveyed are no such thing, but instead reduce to squabbles over a merely preferred semantics, regardless of how other meanings are conventionally used (to simply denote, for example!). In other words, the disputes for an internalist do not take place in the external world (because language BY DEFINITION cannot ever make direct contact with the external world) but within the domain of disputes over a preferred “model”, or ideologically driven purpose, or etc. Which strikes me as slightly ironic, given that language in those “internalist” models satisfies the requirements of the Turing Test (syntactic proficiency) without the semantics ever making contact with the external world in a way that can be shared outside of the “model”.Report

              • Avatar veronica d in reply to Stillwater says:

                [This is a bit rambly. It’s late and I don’t feel like editing.]

                @stillwater — The material world seems to be mind-independent, in the sense that, if you stop believing in gravity, you will still fall.

                Let me state up front, empirical facts about the world are separate from our beliefs about them or the language we use to describe them. The phrase “carbon atoms have six protons” seems to have fixed, mind-independent meaning. However, I say what is happening is we are talking about a real thing called “carbon.” The meaning is not contained in the words.

                I assume that if humanity stopped existing, carbon would remain the same. I don’t know. But our concepts and words are not carbon. There is no metaphysical connection between them.

                Map-territory. So it goes.

                Regarding symbol systems, semantics, language, etc., math alone seems to be the sort of thing that can be mind-independent. In other words, when you can reduce a thing to pure formality, where classical truth-value semantics apply, then you are doing something rather like math.

                In other words, once you have abstracted away all the particulars from “the masked man was at the party,” such that its meaning is only its truth, then you are doing pure two-valued formal logic, which is math.

                But note, I’m not committed to the idea that math is mind-independent. It seems like the sort of thing that should be. I don’t know. How would I determine that?

                The point is, when there truly is only one right answer, and empirical facts either don’t matter or are settled questions, then we are probably doing math.

                (Let me add, I’m a finitist, so we can set aside ZFC and the continuum hypothesis and such. That is all a separate kettle of badgers from what I mean by math. We can discuss that, but not tonight.)
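                The “pure two-valued formal logic” reduction can be shown mechanically. This is my own toy sketch, not anything from the thread: once a sentence like “the masked man was at the party” is abstracted to a bare propositional variable, the only thing the semantics tracks is a truth value under each assignment.

```python
from itertools import product

def truth_table(formula, variables):
    """Evaluate a propositional formula under every True/False assignment."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, formula(env)))
    return rows

# P: "the masked man was at the party"; Q: "my father was at the party".
# After abstraction, P and Q carry no content beyond their truth values;
# the Frege-style puzzle (the masked man may *be* my father) lives
# entirely outside this machinery.
masked_implies_father = lambda env: (not env["P"]) or env["Q"]
for env, value in truth_table(masked_implies_father, ["P", "Q"]):
    print(env, "->", value)
```

                Everything about who the masked man is, or what a party is, has been abstracted away; that is exactly the sense in which classical semantics is impoverished for real-world language.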

                Anyway, there is more. There is the question, was the masked man actually there? In other words, can our “symbol systems” sufficiently describe reality to be useful? Is there a bridge between formal statements and empirical facts?

                Well, is there, without any mind to make that bridge? Does the “problem of reference” have a nice solution?

                Well, what is a “mind”? Can an autonomous computing system count as a “mind”?

                Can any “complex” system? Trees, for example, can sense their environment, although they respond rather slowly. Do they have a “symbol system”? Does this contain “meaning”?

                I don’t know. How would we decide that?

                To me, this sounds more like a boundary-drawing problem than a problem of what-is-true. Trees are what they are. We can understand them with science. Human minds are what they are. We can also understand them, although we have far to go. If trees have “symbol systems,” they are rather unlike ours.

                Which is to say, the material world matters to meaning. If I say, “I’ll be late to the meeting,” that describes a real situation. The point is, however, it is a long path from those words coming from my mouth to their “meaning.” Along that path are many semantic puzzles that classical semantics is completely unprepared for. It is, for any real-world task, an utterly impoverished tool.

                I hate to deploy the “go read this long, boring book” argument, but I suggest everyone take some time and look at a book like this. Then consider all the “puzzles” of classical semantics and formal language. Then think of how those puzzles can be addressed by treating meaning as conceptualization.

                From that framework, the “problem of reference” becomes “the problem of using bad semantic tools to talk about how we use language.”

                What does the term “moon” mean, absent any context?

                After all, when I say it, there is a good chance I mean a girl named Usagi, who, of course, doesn’t actually exist.

                But non-existence is completely not a problem when discussing conceptualization. It is only a problem if you insist that meaning must involve denotation.

                “Carbon,” on the other hand, denotes a real thing, at least usually it does. However, the reference is not in the words themselves. The words are just words. The sentences are just sentences. Language denotes things when we intend for it to, when we believe it does, and when we are correct.Report

        • Avatar Brandon Berg in reply to Saul Degraw says:

          I know the less wrong crowd likes to believe that they are more rational but it often seems like their end goal of defending free market capitalism comes first.

          I don’t have a lot of interaction with them, but my impression has always been that they skew left. Sure enough, from the 2014 Less Wrong survey:

          Communist: 9, 0.6%
          Conservative: 67, 4.5%
          Liberal: 416, 27.7%
          Libertarian: 379, 25.2%
          Social Democratic: 585, 38.9%

          30% conservative or libertarian, 67% “liberal,” social democratic, or communist.Report

    • Avatar Aaron David in reply to Chip Daniels says:

      I strongly agree with @chip-daniels here; in fact, this is, when you boil it down, a huge part of what informs my libertarianism. It is all about our vision of the world and, in my eyes, how we shouldn’t force it upon others if at all possible.Report

      • Avatar Saul Degraw in reply to Aaron David says:


        I’m not so sure about this. Everyone talks about how it would be great to have more rational voters, but “rational” always ends up meaning “believes the same things as” the person who desires rational voters.

        There is a lot of politics that can’t be settled by rationality alone, or where we start from very different bases for our reasoning.Report

    • Avatar James K in reply to Chip Daniels says:


      Emotion is not the opposite of rationality; as Hume pointed out, reason can only act at the direction of our passions. The opposite of rationality is bias – not ideological bias, but failing to draw the correct factual conclusions based on your available data.

      People who demand the police be given more power because crime is on the rise are irrational, because crime is falling. People who demand the police be given more power because they believe it is right and proper that the police be more powerful are not irrational; they just have different views to you and me.

      A government that was in some way constrained by evidence would not eliminate political debate by a long stretch, but it would at least channel political disagreement along more productive lines. I don’t think it will ever happen, because most voters don’t like reality and don’t want their government to be constrained by it, but if it did happen I am confident that we would have a better government.Report

      • Avatar Kim in reply to James K says:

        Fewer shitstorms that way, yes.
        (Can you tell a friend of mine was just writing about the evacuation of Miami? Erm, pun not intended, but very functional nonetheless?)Report

      • Avatar Oscar Gordon in reply to James K says:

        don’t like reality and don’t want their government to be constrained by it

        NASA tried that once. Cost them a shuttle and 7 very expensive human lives, if I recall…Report

        • Avatar James K in reply to Oscar Gordon says:


          Most government policy has longer lags than that, and the data is noisier. It’s far too easy for people to convince themselves that when badness happens, it was just happenstance, and not the other shoe dropping.Report

  5. Avatar LeeEsq says:

    The last time we attempted policy based on “scientific evidence” we got some of the most abusive public policy possible: things like eugenics, forced sterilization, institutionalization of people who did not deserve to be institutionalized, and many other bad things. Many people still believe in “scientific racism” if you read the commentary section on any article asking why America can’t be more like Finland.

    The only people who don’t believe that their policy preferences are supported by scientific evidence, logic, and reason are the theocratic fanatics that want to re-establish the Caliphate or Jean Calvin’s Geneva. Anarcho-capitalists believe that reason supports their position. So do government oriented liberals and social democrats. When we still had Communists, they thought what they were doing was scientific socialism.Report

    • Avatar greginak in reply to LeeEsq says:

      Pah…We’ve had plenty of policy based on science that has worked well. I like vaccinations and cleaning up acid rain and the ozone hole.Report

      • Avatar veronica d in reply to greginak says:

        There is a difference between governing with a basic respect for science and the belief that math + science will provide easy answers to hard questions.Report

        • Avatar greginak in reply to veronica d says:

          I’m not backing the Tyson position, he is off base. But just noting that using science has worked pretty well in some cases. Science isn’t the actual issue. It’s the decisions.Report

        • Avatar Kim in reply to veronica d says:

          Math + Science provides easy answers to easy questions.
          Who gets rich and who dies are the hard ones we need to answer now.
          [yes, I’m biased].Report

      • Avatar Saul Degraw in reply to greginak says:

        Those are very specific examples, but look at climate change and how many people have an economic stake in denying it, including many libertarians who otherwise claim rationality.Report

        • Avatar greginak in reply to Saul Degraw says:

          Not sure of your point. My entire point was that science has been an effective tool in setting policy. Not a tool for evil but a good and useful tool.Report

          • Avatar Kim in reply to greginak says:

            Buuullshit. Maybe the US Military wing that deals with Food Security has been effective.

            But, really? nah. We know that the seas are being devastated — yet we still overfish.
            We know that America won’t be able to hold half the people it currently does, and soon — we still have immigration.
            We know that Miami is going to need to be evacuated — yet there’s still a growing housing bubble there.Report

          • Avatar Oscar Gordon in reply to greginak says:

            Science informs policy, that’s all. If a given policy runs counter to what science says, that doesn’t make the policy necessarily wrong, but it is a red flag that should cause us to demand a public accounting/explanation as to why a given policy is running counter to the science.Report

            • Avatar greginak in reply to Oscar Gordon says:

              I don’t disagree at all. I was pushing back against the “science leads to horrors” idea. There are all sorts of calculations that go into policy. Science will often be one of them, and in many cases it had better be an important one. But it isn’t everything.Report

              • Avatar Kim in reply to greginak says:

                Get back to me after we lose significant parts of the world in terms of human habitability.

                Our petroeconomy has doomed a lot of people. Overfishing (which, yes, I can ascribe to science) will kill others.Report

            • Avatar Saul Degraw in reply to Oscar Gordon says:


              I’m wondering about policy questions that go beyond science. How does one reason out what social-issue policy should be? Policy on tort law?

              For stuff like that, it seems like all humans start with their conclusions first and then work backwards to find their reasoning and logic.

              Even climate change seems to be controversial. There are former OTers who seem to get annoyed when liberals champion the one outlier economist who supports their views on income inequality or some such. Yet these people rush to the defense of the one outlier study that says climate change is not a thing. Why is that?

              It seems to me that the reason some libertarians run to defend climate change denialism is that climate change is a collective action problem that requires government action and coordination, but the libertarian mindset needs to defend the market first. So they are starting with their axioms and working backwards.Report

              • Avatar Brandon Berg in reply to Saul Degraw says:

                Although note that they’re doing it wrong. Law enforcement is a collective action problem, and very few libertarians have a problem with government doing that in principle (although how it handles it in practice is another matter altogether).

                You can even justify pollution regulation under the non-aggression principle, if you’re that kind of libertarian: If pollution is harmful, polluting without the consent of everyone affected is aggression. In fact, the real problem the NAP creates for pollution regulation is that it dictates that we shouldn’t allow any pollution at all, which would be disastrous.

                It’s also worth noting that libertarians who do support pollution regulation are actually more likely than leftists to support expert consensus (i.e., from economists) on how to regulate it.Report

              • Avatar Oscar Gordon in reply to Saul Degraw says:

                1) When it comes to policy, any policy, if there is science available on the topic, it should be used to inform the policy. The depth of the science can then act as the weight that tells policy makers how much attention they should pay to it.

                2) Climate change – this one is stickier, and it depends quite heavily on the priors of the person you are talking to. In general, climate change is a collective action problem, but you are (IMHO) very wrong that it requires government action & coordination on any significant scale. Science doesn’t even have all the variables & effects nailed down yet, so even if government had a solid track record of handling such problems, it wouldn’t be able to do so effectively at this point. This is where a lot of the informed pushback comes from (informed == people who are not flat out deniers). I can say that climate change is real, but government is ill-equipped to tackle the problem directly.

                Now, getting off our asses and instituting a Carbon/GHG* tax, especially if we could manage a treaty to set tax rates to something meaningful the world over (so China couldn’t, for instance, set a rate that undercut the US rate so much as to be meaningless), would get the ball moving in the right direction. It would get the market more focused on finding solutions, and all the government would have to do is keep an eye on things and adjust the rate as needed** so as to keep things moving.

                *I say GHG because CO2 isn’t the only one. Methane is a large issue as well, but it tends to be ignored because the two big sources of methane are fossil fuel production and agriculture.

                **My only hesitation to a carbon tax is what happens when we go carbon free? Governments do love their tax revenue.Report

              • Avatar Kazzy in reply to Oscar Gordon says:

                @oscar-gordon makes a great point by saying the science should inform the policy… not dictate it wholly.

                Additionally, per @saul-degraw ‘s questions above, I think we need to ask ourselves what we mean by science. Can a scientist in a lab tell us what our tort policy should be? No.

                But there probably is a mechanism by which we can study, analyze, and come to conclusions about what outcomes different tort policies are likely to lead to and that can inform our decision making as well. Maybe it isn’t Science(TM) but it is the application of the scientific method and I think that is a good thing.Report

              • Avatar Brandon Berg in reply to Kazzy says:

                Right, this seems like the charitable interpretation. Science can’t tell you what your values should be, but it can tell you what consequences a policy is likely to have, from which you can determine whether they’re actually in line with your values. People may disagree on values, but we could at least agree not to implement policies that don’t make any sense.Report

              • Avatar James K in reply to Brandon Berg says:


                People may disagree on values, but we could at least agree not to implement policies that don’t make any sense.

                For the novelty value, if nothing else.Report

      • Avatar LeeEsq in reply to greginak says:

        That isn’t what I’m talking about and you know it.Report

        • Avatar Brandon Berg in reply to LeeEsq says:

          You said that Progressivism™ was what happened “the last time we attempted policy based on science,” implying that there have been no attempts since then that had better results. Pointing out counterexamples is a fair rebuttal.Report

    • Avatar Chip Daniels in reply to LeeEsq says:

      Communism was an example of what I mean, where it dismisses everything about humanity that doesn’t correspond to their beliefs, and does so in the guise of scientific rationalism.

      The techno-utopianism seems like it also demands a different human person, one who behaves differently than the actual specimens walking around.

      For example, I strongly disagree with the ethnocentrics, the nativists that Trump commands.

      But I grasp what they are feeling, and know how real and powerful it is.

      To feel like an alien in the town where you grew up, to walk down the street and not be able to read the signs, the creeping dread that your work and livelihood are slipping away, the nightmare of being old and poor and cast aside as an unwanted, scorned minority: these are irrational fears, maybe, but I wouldn’t dismiss them.Report

  6. Avatar Francis says:

    oh, not another is/ought dispute!

    Would the Kansas government exist in Rationalia? So the politicians believed the BS that the supply-siders were selling. Or maybe they didn’t really care one way or another, but they wanted to sell their low-tax values to the voters as having magic growth beans inside. By now, it’s pretty clear that what’s important to the elected officials is keeping taxes as low as possible on the middle class and up. That’s what voters want, so that’s what they get. The consequences are what they are. Yes, as the years go by, people will be shocked, just shocked I tell you, about the lies, omissions and general BS that accompanied the hollowing-out of various tax-supported entities, from roads to schools to hospitals. It’s just one of those comfortable lies people tell themselves.

    We elect politicians based largely on tribal alliances and emotion. In return we hope that they don’t screw us up too much and try to make a little progress on hard problems. The idea that Rationalia will be populated by people who won’t lie to themselves is just perfectly absurd.

    (Dealing with climate change means, among other things, Exxon/Mobil selling vastly less product. In Rationalia, are all the people who rely on Exxon for a living just happily going to vote themselves out of a job? 40-year-old petroleum engineers can just decide to retrain as wind turbine installers, at a fraction of the salary? The climate science may tell us to cut CO2 emissions, but the microeconomics tells that engineer to lobby against it with all his might.)Report

    • Avatar j r in reply to Francis says:

      By now, it’s pretty clear that what’s important to the elected officials is keeping taxes as low as possible on the middle class and up.

      Does Kansas excessively tax the poor? I mean, more than any other state?Report

  7. Avatar James K says:

    This seems like a really uncharitable reading of Tyson’s tweet.

    For the record, here is a link to it:

    Now, because this is Twitter, a medium calibrated to deliver about as much information as a bumper sticker (cue my standard rant about how Twitter is the worst thing ever), it’s hard to say precisely what Tyson is proposing, but I didn’t read it the same way the rest of you seemed to.

    The function of a constitution is to constrain the government, not to fully determine all the decisions it makes. So I read Tyson as saying “the ideal state’s decisions would be constrained by the weight of evidence,” rather than “the ideal state’s decisions would be fully determined by the weight of evidence.” This strikes me as a perfectly reasonable thing to aspire to.Report