The Uncanny Valley: It’s a Nice Place to Visit, But You Wouldn’t Want to Live There

David Ryan

David Ryan is a boat builder and USCG-licensed master captain. He is the owner of Sailing Montauk and skipper of Montauk's charter sailing catamaran MON TIKI. You can follow him on Twitter @CaptDavidRyan

43 Responses

  1. wardsmith says:

Whatever Google’s self-driving car does or doesn’t do, it is going to be measurably superior to 99% of all human drivers in the exact same situation. And that assumes the humans are even paying attention and actively engaged; if they are distracted, arguing, or talking or texting on their cell phone, all bets are off.

    I hated anti-lock brakes when they first came out. They were clearly (or so I thought) inferior to what I could do with manual braking and my skill borne of years of accident-free driving. But while ABS definitely increases braking distance, the newest technology allows something that only stuntmen and race car drivers can (occasionally) achieve: controlled steering during an ABS braking event. Old-school drivers like me would hold the car arrow straight (because that was what you were taught to do when the wheels were skidding). However, ABS is analyzing /each/ tire 500 times per second and (at least on the better cars/systems) can individually apply or release braking pressure per tire. So even though you’re sliding, you can easily steer around the obstacle that made you slam on the brakes in the first place. This isn’t even a fair contest; a human would need four brake pedals and four feet to operate them and still couldn’t possibly compete with the info the car is casually using at comparatively lightning speed. The flaw happens when you go old-school and don’t use the aid the car’s tech is delivering to you. I’ve watched bad drivers point their cars straight ahead in abject horror as their ABS carefully pulses the brakes on individual tires to follow the (supposedly intelligent) human’s intended direction of choice: straight ahead into the obstacle.
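Ward’s description of per-wheel modulation can be sketched as a toy control loop. This is illustrative only; the slip target, the pressure step, and the 500 Hz tick rate are assumptions taken from the comment, not any manufacturer’s actual control law.

```python
# Toy model of per-wheel ABS modulation. The 0.2 slip target and 0.05
# pressure step are invented for the sketch.

def wheel_slip(vehicle_speed, wheel_speed):
    """Slip ratio: 0 = rolling freely with the car, 1 = fully locked."""
    if vehicle_speed <= 0:
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def modulate_brakes(vehicle_speed, wheel_speeds, pressures,
                    target_slip=0.2, step=0.05):
    """One control tick (~2 ms at 500 Hz): adjust each wheel's brake
    pressure independently, releasing a locking wheel while the
    gripping wheels keep braking (and steering)."""
    new_pressures = []
    for ws, p in zip(wheel_speeds, pressures):
        if wheel_slip(vehicle_speed, ws) > target_slip:
            p = max(0.0, p - step)   # wheel locking: release pressure
        else:
            p = min(1.0, p + step)   # wheel gripping: reapply pressure
        new_pressures.append(p)
    return new_pressures
```

A human with one brake pedal applies the same pressure everywhere; the loop above is why the computer can keep three tires gripping while releasing the one that locked.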

    • Burt Likko in reply to wardsmith says:

      If the downside to ABS is that braking takes several feet more than with manual brakes, that means the driver must modulate (reduce) her speed a little bit more with ABS than not.

      This is an entirely one-sided bargain: safety occurs more often at slow speeds than fast.

    • Kim in reply to wardsmith says:

      Yeah. You tell me that when you watch Google’s car drive in circles in the parking lot.
      They said it couldn’t be hacked!

  2. Burt Likko says:

    If most drivers don’t understand that Snapshot doesn’t monitor their speed but rather other kinds of driving behaviors, and then drive slower in a mistaken belief that doing so will decrease their insurance rates, I can’t call that a bad thing. As I wrote in reply to Ward above: safety tends to occur at slower speeds. If you drive slower, chances are greater than if you drive fast that the safe behaviors Progressive really is looking for will coincide with reduced-speed driving.

    • David Ryan in reply to Burt Likko says:

      I like this construction “safety tends to occur…” very very much, Burt. Also your writing generally. I am proud to share this space with you!

    • Will Truman in reply to Burt Likko says:

      I think that the potential downside here is that people will decline to adopt this if they think that it’s going to monitor their speed.

      I don’t think that I will be adopting this technology. Even if it might be for The Public Good, I simply don’t like records being kept and have privacy concerns.

      • Dan Miller in reply to Will Truman says:

        The real challenge will come when Progressive turns it on by default for every new policy, or makes it mandatory.

        • Will Truman in reply to Dan Miller says:

          That only works if all of the others do it, too. Otherwise, it will almost assuredly cost them business. I’ve been with the same auto insurer for 15 years, but I’d leave tomorrow if they forced something like this on me. Unless, of course, all of their competitors did the same.

          • DensityDuck in reply to Will Truman says:

            “That only works if all of the others do it, too. Otherwise, it will almost assuredly cost them business.”

            I remember how that was the reason ATM fees would never happen.

          • Roger in reply to Will Truman says:


            As someone who was intimately involved in designing rates and working with these types of devices, I hate to say it, but you are wrong. The dynamics of insurance ensure that rates will go up for those refusing to use the devices. Here is why…

            Companies will be forced to introduce similar devices. This is because the safest drivers will discover they save money, and will flock to the devices.

            As more companies introduce these and more safe drivers use them, the base rates for those not using the device or not earning discounts will go up.

            To avoid rate increases, more average drivers will start to use the devices, now earning a minimal net reduction, if any, compared to the original pre-device case.

            This will further drive up base rates. In effect, safe drivers willing to share info will be self-identifying, freeing themselves from the costs of subsidizing less safe drivers.

            This dynamic is partially offset, though, by a positive influence. The state-of-the-art devices give drivers feedback and recommendations on safer driving and safer routes that will save them money. Thus some drivers will actually drive better, reducing overall insurance costs for themselves and others.

            All things considered, though, failure to adopt a device will increase your premiums long term. Probably considerably.
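Roger’s spiral can be illustrated with a toy simulation. The cost numbers and the adoption rule (a driver opts into the device whenever their true expected cost is below the pooled base rate) are invented for the sketch:

```python
# Toy simulation of the adverse-selection dynamic: drivers whose true
# expected cost is below the pooled base rate opt into the device and
# pay roughly their own cost; everyone left shares a costlier pool.

def pooled_rate(costs):
    """Base premium = average expected cost of everyone still pooled."""
    return sum(costs) / len(costs)

def simulate(costs, rounds=3):
    """Track the base rate as the cheapest risks leave the pool each round."""
    pool = sorted(costs)
    base_rates = [pooled_rate(pool)]
    for _ in range(rounds):
        rate = base_rates[-1]
        pool = [c for c in pool if c >= rate]  # safe drivers defect
        if not pool:
            break
        base_rates.append(pooled_rate(pool))
    return base_rates
```

With expected annual costs of 2, 4, 6, 8, and 10 (arbitrary units), the base rate for non-adopters climbs 6 → 8 → 9 → 10 as each tier of safer drivers defects to the device.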

            • David Ryan in reply to Roger says:

              Thanks for this comment, Roger.

            • Tom Van Dyke in reply to Roger says:

              Not all progress is inevitable. 😉

            • Will Truman in reply to Roger says:

              If adoption becomes really widespread, I may not have a choice (that’s what I was getting at with my last comment). That being said, depending on the circumstances, I’d rather be assumed to be a bad driver than to have a device in my car that keeps track of what I am doing. I’d pay a premium for my privacy, if I can afford it. If the differential becomes too large, I’m not going to cut off my nose to spite my face, but the keeping of, and possible government access to, these records ought to be cause for concern for libertarians.

              • Roger in reply to Will Truman says:

                Ah, but as a libertarian designing these devices there is something I forgot to share.

                It is very possible to build these devices so that the insurance company gets no data. Indeed, this was a concern that we felt needed to be addressed to get more drivers to adopt them.

                Here is how it can work… The insurance company gives you the device, and the device gives you and only you feedback. It then gives you a composite score and tells you that you earned a 14.7% discount. It then asks you if you would like the insurance company to be notified of the discount earned that month. You, the driver, hit the yes tab, and the discount amount, and only the discount amount, is sent to the insurance company.

                The insurance company does want raw data, but it can be contractually required to PAY you for the data. Now the transaction is broken in two: safe drivers get better rates and no privacy violation, and drivers who are willing to share data get data discounts. Furthermore, the insurance company can even establish third parties that remove all personal identifiers from the data. The composite data gets transmitted, those willing to share get paid for it, and no personal data is released to the evil insurance company.
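The consent flow Roger describes might look something like this in outline. The class, method names, and the score-to-discount mapping are all hypothetical:

```python
# Sketch of the consent flow above: raw trip data never leaves the
# device; only a driver-approved discount figure does.

class PrivateTelematicsDevice:
    def __init__(self):
        self.trips = []  # raw per-trip scores: stored locally, never sent

    def record_trip(self, trip_score):
        """trip_score in [0, 1], computed on-device from sensor data."""
        self.trips.append(trip_score)

    def monthly_discount(self):
        """Composite score -> discount percent, computed on-device."""
        if not self.trips:
            return 0.0
        avg = sum(self.trips) / len(self.trips)
        return round(avg * 20, 1)  # e.g. an average of 0.735 -> 14.7%

    def report_to_insurer(self, driver_approves):
        """Transmit the discount amount, and only the discount amount,
        and only if the driver hits the yes tab."""
        if not driver_approves:
            return None
        return {"discount_pct": self.monthly_discount()}
```

The key design point is that the insurer’s view of the driver is reduced to a single consented number; everything else stays in the car.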

              • Tom Van Dyke in reply to Roger says:

                Extraordinary, Roger. The Beast is evil, but fair.

              • Roger in reply to Tom Van Dyke says:

                Actually, the beast could be fair if it chose to be. I would not count on it, though. Most insurance companies are run by capitalist pigs.

              • Tom Van Dyke in reply to Roger says:

                We must make sure The Beast screws everyone equally, that he is fair.

              • DensityDuck in reply to Roger says:

                There are no discounts.

                Everyone’s rates go up. If you allow data collection and location tracking, your rates go back down…to where they were before.

            • Will Truman in reply to Roger says:


              I foresee one of two possibilities:

              (1) The degree of rate-drop remains something like 15%, and I can therefore opt out of sending them my data or collecting it. No harm, no foul. Depending on circumstances, it might be worth it to me.

              (2) The divergence you refer to above occurs and the initial rates are jacked up so high that you’re looking at a 70% decrease from an inflated rate. I don’t mean “artificially inflated” rate, but rather inflated from what the rates are today.

              I am hoping for #1. I find it unlikely that, once all of this data is being collected by our cars, some state legislators won’t start thinking about how helpful it would be if the government had access to this data. And beyond that, I simply don’t want an insurance company knowing the ins and outs of every peculiar thing that I do. They have demonstrated that they are neither trustworthy nor transparent with that data.

              We’ll see what happens.

              I will add to a comment below, though, which is that insurers could easily collect data on how much we drive and choose not to. They simply take our word for it. Depending on the reason why they do this, along with the fact that Progressive doesn’t monitor speed despite it being useful in setting rates, it could indicate that they wouldn’t be all that aggressive in collecting braking data and such.

              • Roger in reply to Will Truman says:

                If they can collect the data and it is relevant to rating efficiency, eventually someone will collect the data, and that will force all other companies to either collect the same data or get adverse selection. A better interpretation is that at the current level of granularity, there is substantially less value in speed data than laypeople think.

                Remember, in the future, these devices will have the ability to know everything: speed, speed limit, weather, status of lights and windshield wipers, volume of the stereo, acceleration forward, to the side, and braking, traffic conditions, whether you are on the phone, whether you are driving to work or shopping, whether you are coming from a bar, a synagogue or a strip joint, blah, blah, blah. The devices of the future will tap into the car and Google Maps, and will use algorithms to predict what you are doing and why.

                Furthermore, if the insurance company ever gets the data, they need to worry about being subpoenaed for it by whomever. That is why I never wanted the data. I just wanted to offer better prices than my competitors. After all, I am a capitalist pig.

  3. North says:

    I can certainly endorse the Snapshot device. It came in the mail, it sat plugged into the car for like a month, and then it went back to the company. Shortly after that our premiums dropped 10%. It’s been a pleasing experience.

    Also, Flo’s ads tickle me considerably.

    • Glyph in reply to North says:

      Yeah, if Geico offered this I would do it (and assuming that it was also for a temporary “test” period). I work from home, so I imagine validating that my car does not get driven all that often, or very far, over the “test” period would probably drop my premiums quite a bit.

      The fact that the device is not permanent (temporary is the only way I would do it) does suggest a way to game it though. Get the device, then bum rides, or have your grandma drive the car so it goes slow all the time for a month (or whatever the test period is).

      Once the device goes back to the ins. co. and your premiums drop, start driving everywhere like a maniac again.

  4. Lyle says:

    Note that State Farm offers a discount for cars subscribed to OnStar that essentially charges you only by the miles you drive, not by when or how hard you brake. Of course, miles driven does have a direct connection to all risks except comprehensive on an auto policy.

  5. Damon says:

    The devices may not report on speed, but they very likely can. Even if not, speed is a major factor, and the devices will one day record it. This is just the camel’s nose under the tent.

    Lots of entities would like this data. It’s much more valuable than speed cameras. Pass a law, make the devices mandatory, make the insurance companies forward the data to gov’ts. Mail out the “speeding fines” for your last month’s lead foot.

    • Kim in reply to Damon says:

      Placement of the car seems like more of a factor, at least in this state. In City == Less Likely to be hit by deer (at least I hope that’s what it means…)

  6. James Hanley says:

    Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?

    This is the kind of “thinking” that drives me nuts. The guy has pinpointed something that is impossibly difficult for almost any human, then says, “see, this thing–this one thing–computers can’t do it.” And from that he assumes an argument against the computer?

    This is classic nirvana thinking. I’ve found a flaw in system X; that means system Y is superior.

    And to think that guy got paid real cash money to write that.

    • Glyph in reply to James Hanley says:

      James and David both: I read both the Nick Carr and Alan Jacobs posts, and I don’t see anywhere that they claim a superiority of the old human systems; they are just pointing out, to people who say “why don’t we have robot cars yet?”, that there is a lot to think about.

      You can say they are oversimplifying the issue, but they are just using terms that people can readily understand to point out that programming computers to have a “conscience,” when we don’t even agree on what such a thing is or what it “should” do in a given situation (the answer being, as always, “it depends”), is a hard task.

      I don’t think they (or I) would disagree that a computer is going to do better, 95% of the time, than a human can. But it does get really sticky. I know the oversimplified re-heated example annoyed David, but I will hit it anyway – even aside from the basic ethical premise of the question (save the car with one occupant, or the one with 4 kids), you have the secondary ethical/privacy issue of the fact that the cars must either “know” and/or communicate this info (# of occupants, and their ages) to each other so as to reach any decision at all – if you are Teddy Kennedy at Chappaquiddick, you may not want this sort of info stored/transmitted.

      And do we all agree that we should save the 4, and not the one? Maybe I am the last of my family line, and a brilliant scientist who is going to solve global warming – what are those 4 snot-noses worth, compared to me? Maybe each car should always attempt to self-preserve, regardless of whether the other car is a schoolbus or not – after all, self-preservation is probably what any human would do instinctively, and we would only rarely fault them for that.

      David, what would the “rules of the sea” say, for a captain? I don’t know enough to make the hypo, but if you had to make a collision-avoidance decision to either scuttle your 6-man boat to save a 20-man boat, or save your 6-man boat while the 20-man boat goes down, what do you do, and what would the law or most people say about your decision after you do it?

      My assumption would be that you always look out for your boat, so if you must choose, you choose your boat and crew/passengers, no matter what. But I could be wrong.

      Anyway, I agree that they are not digging deep. But as brief blog posts designed to spark thought and discussion, I also don’t see what is so egregious about them.

      • James Hanley in reply to Glyph says:


        I get your point, but what those guys are saying isn’t really interesting, except from a purely philosophical level. It’s trivially easy to design the software with a “conscience” in the sense they mean: you just enter the parameters into the software. You can make it wholly deterministic, or you can base it on some level of probability. Either way, you’re not going to set up a program that’s noticeably inferior to human reactions, because we have no agreement on a standard for what counts as a good human reaction. You could probably even program it to be completely random in response and we wouldn’t be able to say it’s inferior.

        But if we can come to an agreement on what’s the best response, we can program it with trivial ease. There’s no new philosophical question here, and the old philosophical question doesn’t really apply to the cars or software engineers in any independent way. It’s not a new moral problem, like the development of the atomic bomb, or the ability to genetically design our children.
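The point that the “conscience” is just entered parameters can be made concrete. Both variants below are toy rules with placeholder weights, not a proposal for a real vehicle policy: the deterministic version minimizes expected casualties, and the probabilistic one simply bases the choice on a weighted random draw.

```python
import random

# Toy rendering of "enter the parameters into the software": once the
# parameters are chosen, the bridge scenario reduces to a rule.

def choose_action(occupants_own, occupants_other,
                  deterministic=True, rng=None):
    """Return 'swerve' (off the bridge) or 'stay' (hold course)."""
    if deterministic:
        # Wholly deterministic rule: minimize expected casualties.
        return "swerve" if occupants_other > occupants_own else "stay"
    # Probabilistic rule: swerve with probability equal to the other
    # vehicle's share of the total people at risk.
    rng = rng or random.Random()
    p_swerve = occupants_other / (occupants_own + occupants_other)
    return "swerve" if rng.random() < p_swerve else "stay"
```

Whether either rule is the *right* one is the old philosophical question; writing the rule down, once chosen, is the easy part.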

      • David Ryan in reply to Glyph says:

        James has mostly covered it, except that it isn’t even interesting at a philosophical level. In an upcoming post entitled “Hard Cases Make Bad Algorithms” I’ll explain why.

  7. Glyph says:

    Hmmm, so at root you are just saying “this doesn’t interest me?”

    Because:

    A.) It’s not “trivially easy,” especially when you consider the various inputs, rankings, contingencies, and all ethical ramifications of each;

    B.) The post authors never claim superiority or inferiority of humans or computers as far as I can see, at all (and again, I believe computers will eventually, once the kinks are worked out, do a better job under most circumstances);

    C.) As for the statement “if we can come to an agreement on what’s the best response” – as you & David point out, this is the oldest philosophical question in the world, so you also just made the world’s biggest IF statement right there;

    D.) You could make a similar argument (though I wouldn’t) that the atomic bomb was no new moral question either, since the first bombs at least were not human-race-enders; they were basically just really really really really big bombs, and we’d seen just plain big bombs before.

    If you are just saying the question isn’t interesting to you, or is being handled shallowly, that’s totally cool. I think they are interesting questions though, precisely because they force us to look at the oldest questions yet again, in detail and in the real world; because for maybe the first time (paradoxically, since we are talking about programs) we have real ethical choices to make in that programming (like I say, most humans would almost always instinctively opt for self-preservation no matter what; but if we program the systems to act even slightly differently than humans would historically react in that split-second, that has huge ramifications – evolutionary ones).

    • Glyph in reply to Glyph says:

      Argh, that was supposed to be to James.

      • James Hanley in reply to Glyph says:

        Programming it is trivially easy. Coming up with “an” algorithm is trivially easy. Coming up with “the right” algorithm: that’s not easy, but that’s because it’s a philosophical question, not a design and programming question. It seems to me the author is either overstating the difficulty of a design and programming question in a way that misstates the fundamental process going on in design and programming, or else he is making a big deal about an old problem just because it can now apply technologically, which misstates the fundamental process going on in philosophy.

        It’s trite. And that, I think, is actually a compliment.

        • James Hanley in reply to James Hanley says:

          I should add this. The very nature of the design and programming of a driverless car means the designers and programmers will come up with an algorithm to create a response to that question. Whereas philosophers can dink around with it as a troublesome scenario for millennia without solving it, the engineers won’t have that option. They will devise and instantiate a solution. It’s not guaranteed to be the best solution, or the most philosophically satisfying (although they may very well ask philosophers their thoughts, as well as asking economists and actuaries). But it will be “a” solution. There’s no way around it. It will be done, and the philosophers may be among the last to notice.

          • Glyph in reply to James Hanley says:

            Of course it will be “solved”, one way or another. And no one is arguing that it won’t, or shouldn’t be.

            But the issues at play are both the oldest philosophical questions, and the sci-fi AI/Laws of Robotics/how-do-we-prevent-Skynet sorts of questions that we have only been seriously grappling with for the last 100 years or so. Humans respond to stimuli the way we do as a result of millions of years of evolution, blindly selecting for survival fitness. Our brains are decision-making machines, making a gazillion calculations related to survival (and maybe morality) every second, and we are only now beginning to understand how those machines work, and what questions they are even asking/answering to arrive at their calculus (42).

            So, we built computers; and we are currently in the process of a massive conscious outsourcing of unconscious decision-making to them (decisions that we make now largely by instinct/emotion, as a result of an evolutionary process we neither controlled nor fully comprehend – that is, we comprehend the basic process, but not why a particular trait was selected for/against).

            You don’t see this as an interesting point in time? As awesome a power as the bomb was, it is still a “dumb” thing – the bomb doesn’t decide anything, the people do. Once a driverless system is instantiated, the system will decide (based on the rules we gave it). And some people will die as a result of the choices that we and the system have made. They won’t always be the “right” people (whoever they are, and depending on who’s asking).

            Sure, we can and will make tweaks to that system, to try to make it conform better to our ideas about morality and best practices; but that just brings us to the next part – what we think humans would or should do, and what humans would actually do once they are in the lifeboat deciding who to eat, are two different things – and one of those things may derive from our philosophical ideas of morality and reason, and one may derive from biological survival instincts/perpetuation of the species. I hope we, and our proxy in the system, choose the right one.

            Your position seems to boil down to “the boffins will figure it out”. And they will, just as they figured out how to make an atomic bomb work.

            Then as now, scale matters, and we will continue to debate the thorny areas, and how to best handle the downsides.

            Let me make a guess – aside from BSG, you are not much of a sci-fi guy, are you? 🙂

            • James Hanley in reply to Glyph says:

              So, we built computers; and we are currently in the process of a massive conscious outsourcing of unconscious decision-making to them (decisions that we make now largely by instinct/emotion, as a result of an evolutionary process we neither controlled nor fully comprehend – that is, we comprehend the basic process, but not why a particular trait was selected for/against).

              You don’t see this as an interesting point in time?

              Only from a technical, engineering, point of view. Because we’re not actually giving them “decision making” authority; we’re just giving them algorithms that lead to deterministic decisions. Give me an explicit set of rules to follow that determine the outcome and I’m no longer a decision maker, just a rule enforcer. If we don’t make it actually deterministic, then we’ve made it probabilistic, which simply means that the set of explicit rules I’ve been given to follow include a requirement to “toss dice here; if the range 1-2 results, go to line 24; if the range 3-4 results, go to line 76…” etc.

              It’s a more high tech version of a thermostat (when the temperature drops below 70, turn “on”), but that’s all.

              Get back to me when we give the program discretion; then I’ll really be interested.
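James’s framing can be rendered literally: a deterministic rule (the thermostat) and a probabilistic rule in which the dice roll is itself written into the rulebook. Branch names and values are illustrative only.

```python
import random

# A fixed deterministic rule, and a probabilistic rule where the
# randomness is simply another clause in the rules: in neither case
# does the "enforcer" exercise any discretion.

def thermostat(temperature, setpoint=70):
    """Deterministic: the outcome follows from the rule alone."""
    return "on" if temperature < setpoint else "off"

def probabilistic_rule(rng):
    """'Toss dice here': the set of explicit rules now includes the
    roll, but the program is still a rule enforcer, not a decider."""
    roll = rng.randint(1, 6)
    if roll <= 2:
        return "branch_a"  # "go to line 24"
    if roll <= 4:
        return "branch_b"  # "go to line 76"
    return "branch_c"
```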

              • Kim in reply to James Hanley says:

                I’m more interested in the self-modifying programs.
                Particularly the ones the government uses. 😉
                (note: this is a joke. the ones I’m aware of the
                government using are exceptionally boring.)

              • Glyph in reply to James Hanley says:

                It’s a more high tech version of a thermostat…that’s all.

                In a way, an atom bomb is a more high-tech version of a firecracker. And a machine gun is a more high-tech version of a kinetic projectile weapon such as a slingshot. And a drone is a more high-tech version of an RC plane.

                Coincidentally, I am no longer allowed to play with any of these things, by court order.

                we’re just giving them algorithms that lead to deterministic decisions. Give me an explicit set of rules to follow that determine the outcome and I’m no longer a decision maker, just a rule enforcer… when we give the program discretion; then I’ll really be interested.

                And how do we know that this is not also the way that we operate? Maybe we are operating under a similar set of rules and have no discretion either. We don’t even know what all our own rules are, much less exactly why each exists – and we are getting ready to try to pass some semblance of one version of them on. We will be instantiating the “predetermined” worldview; hope the “freewill” one wasn’t important.

                I deal in a small simple way with business rules at my job. They have far, far fewer variables and unknowns, and nobody dies if I get one wrong (though it can still be costly monetarily). And even then, there are still many, many situations in which a rule cannot be coherently crafted or consistently applied, so we refer the situation to a human, who luckily has more than fractional seconds to make a decision (the just-awakened-and startled human driver will have no such luxury).

                Anyway, I guess I understand why you don’t find it interesting. But I do (so did Asimov and his descendants). On balance it will be good. But there will be some serious bumps and lawsuits on the way, I’d imagine.