The Uncanny Valley: It’s a Nice Place to Visit, But You Wouldn’t Want to Live There
From How a Creepy Car Insurance Idea Could Save Thousands of Lives (and the Planet) at The Atlantic:
“Lots of markets deal with some kind of market failure, and car insurance is no exception. Insurers have a big problem. They know a lot about us, but they don’t know a lot about how we drive. Sure, they know when we get tickets or when we get into fender benders, but they don’t know how much or how fast or how aggressively or how attentively we drive. They have to use proxies like age, sex, marital status and where we live to price policies. The end result is higher premiums than most of us should be paying.
“But now there’s a solution for those 93 percent of drivers who think they’re above average. As Randall Stross of the New York Times reports, car insurance companies are offering customers a trade: less privacy for lower premiums. Drivers who install a monitoring device that records when, how far, and how fast they drive — but not where — are eligible for discounts of up to 30 percent. The average saving from Progressive, which pioneered the program, is 10 percent. Other insurers are getting in on it too.” (Emphasis added.)
The post is illustrated with the same image of Progressive Insurance’s “Flo” character that I’ve used, and it seems to imply that Progressive’s Snapshot product monitors drivers’ speed.
It doesn’t. Here’s the video:
Of course League readers already know this because I wrote about the Progressive Insurance Snapshot product in the context of the norming power of insurance and the internet in my post from last Spring entitled I Can’t Drive 55:
Progressive Insurance does not care how fast you drive (so long as you don’t get caught!)
Progressive Insurance does care about a few other details of your driving habits: how hard you accelerate and brake, what time of day you drive, and how much you drive.
I know this because they want you to let them monitor your driving habits in exchange for (the possibility of) lower rates. The way this works is that you let them put their Snapshot device in your car.
In their video promoting Snapshot, Progressive Insurance affirmatively states that Snapshot does not monitor where you drive or how fast you drive. It does monitor acceleration, miles driven, and time of day when driving, and presumably there is some sort of actuarial correlation between these metrics and the likelihood of accidents.
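To make that concrete, here is a minimal sketch of how such a score might be computed. Everything in it is invented for illustration; Progressive has never published Snapshot’s actual formula or weights:

```python
# Hypothetical illustration only: Progressive has never published Snapshot's
# actual formula, so the metric names and weights here are invented.

def snapshot_style_score(hard_brakes_per_100mi: float,
                         miles_per_week: float,
                         late_night_fraction: float) -> float:
    """Combine the three monitored behaviors into one risk score.

    Higher means riskier; the weights are arbitrary placeholders,
    not actuarial values.
    """
    return (2.0 * hard_brakes_per_100mi    # abrupt braking suggests tailgating
            + 0.05 * miles_per_week        # more exposure, more accident opportunity
            + 10.0 * late_night_fraction)  # late-night driving is riskier

# A gentle, low-mileage daytime driver vs. a hard-braking rush-hour commuter.
print(snapshot_style_score(0.5, 50, 0.0))   # -> 3.5
print(snapshot_style_score(4.0, 300, 0.1))  # -> 24.0
```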
Here’s the other thing.
Progressive says that data from Snapshot cannot make your rates go up, only down, and down by as much as 30%.
This suggests that because Progressive is unable to distinguish between high-mileage, hard-braking drivers caught in rush-hour traffic and those of us who work from home, brake gently, and make a point of traveling at off-hours, we safer drivers have been subsidizing less safe drivers by paying rates that are not justified by our driving habits. It also suggests that Progressive believes that if it can identify and capture more cautious drivers, it can add to its revenue out of proportion to its increased exposure.
Now I suspect some of you have your arms akimbo after reading the above passage, and I’ll grant you that suspicion of an insurance company’s motives is warranted. But in this case I’m inclined to give Progressive the benefit of the doubt.
Why? Because recreational boat insurance is incredibly cheap. Let me explain:
Our sloop S/V INTEMPERANCE was a 1979 Catalina 38 that we bought in 2007 for $23,500. I thought that was below its market value, so when we got recreational insurance against loss and liability I asked for a stated-value policy of $30K, plus another $1K for personal effects. That put the insurance company’s exposure at $1,000,000 if someone got hurt, if I ran into someone’s boat, or if I had an oil spill, and another $31K if the boat was wrecked.
The annual cost for this coverage was $525, and if you compare that to a commensurate amount of loss and liability coverage for a private automobile, it’s a fantastic deal.
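As a back-of-envelope check (my arithmetic, not the insurer’s actual rate math), that premium is a vanishingly small fraction of the insurer’s maximum exposure:

```python
# Back-of-envelope: annual premium as a share of the insurer's maximum
# exposure. This ignores deductibles and probability weighting, which is
# everything an actuary actually cares about.
liability_limit = 1_000_000
hull_and_effects = 31_000
annual_premium = 525

exposure = liability_limit + hull_and_effects
print(f"Premium is {annual_premium / exposure:.4%} of maximum exposure")
# -> Premium is 0.0509% of maximum exposure
```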
Why? Because unlike most private cars, which are in near-daily use, most private boats are tremendously under-utilized. It’s taken as axiomatic that, on a per-use basis, most people could hire a luxury charter for each outing for what it costs them to keep and operate a 35-foot private boat, and still end up ahead. But the sheer (mere?) joy of ownership is something people are willing to pay for, and the relatively low risk posed by all those idle boats subsidizes the rate that I pay, even though I use my boat a lot.
Now here’s the interesting thing.
During my guest-posting stint for Megan McArdle I actually had “I Can’t Drive 55” cued up as a post for The Atlantic. I set it up at the close of Making a Living in the Wake of the Pelican Disaster, which was supposed to be a lead-in to the 55 post:
Of course not all economic actors are as embedded in the markets and communities in which they conduct commerce. In my next post I’m going to talk about Progressive Auto Insurance’s Snapshot product, Montauk’s pirate sailing operations, and why I think Jay Rosen is wrong when he says that the internet weakens the authority of The Press. Thanks for reading!
Now first of all, yes. I’m settling a score.
I thought 55 was a really strong post. It was the culmination of an ongoing correspondence with James Fallows and James Poulos about social norms and driving habits that had been going on for nearly two years; it took me about 20 hours to write; I got really good feedback on it from my co-guest bloggers who “cheated” and read it in edit mode; and I was pretty disappointed when it didn’t run. So I’ll say now, with as glad a heart as I can muster, “Welcome to the party, Atlantic. The Snapshot device is an interesting development in the insurance business, and a good starting point for a provocative discussion. Glad you could join us.”
Secondly, a corrective. I’ve already said it once, and I will say it again now: the Snapshot device does not monitor speed.
Thirdly, the fact that Snapshot does not monitor speed is the most provocative aspect of the story, at least once you start to explore the reasons why it doesn’t.
Fourthly, I’ll close this post with a bit from Nick Carr that I picked up off Alan Jacobs’s blog at The American Conservative:
“So you’re happily tweeting away as your Google self-driving car crosses a bridge, its speed precisely synced to the 50 m.p.h. limit. A group of frisky schoolchildren is also heading across the bridge, on the pedestrian walkway. Suddenly, there’s a tussle, and three of the kids are pushed into the road, right in your vehicle’s path. Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?… We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”
This bit popped up on Twitter and elsewhere, and I will say, again with as glad a heart as I can muster: 1) It depresses me that so many people who are generally thought of as Important Thinkers don’t seem to realize that we already live in a world of Algorithmic Ethics and Mechanical Morality, where these mechanisms make decisions that can profoundly affect people’s lives; and 2) that these same Important Thinkers use a retreaded Ethics 101 puzzle, one that describes the problem with roughly the same acuity as Zeno describing the arrow in flight, to ask “provocative” questions about issues that have already been decided.
—
“You can’t throw bull with the ocean, she won’t listen.” I won’t give you the citation, because if you’ve read this far you must be one of my dedicated readers, so I don’t have to. You’ve read it before, you know who said it, and you know you’ll read it again.
On the other hand, you can carry a tide table. And you should.
Whatever Google’s self-driving car does or doesn’t do, it is going to be measurably superior to 99% of all human drivers in the exact same situation. And that assumes the humans are even paying attention and actively engaged; if they are distracted, arguing, or talking or texting on their cell phones, all bets are off.
I hated anti-lock brakes when they first came out. They were clearly (or so I thought) inferior to what I could do with manual braking and my skill borne of years of accident-free driving. But while ABS can lengthen braking distance on some surfaces, the newest technology allows something that only stuntmen and race car drivers can (occasionally) achieve, which is controlled steering during an ABS braking event.

Old-school drivers like me would hold the car arrow-straight (because that was what you were taught to do when the wheels were skidding). However, ABS is analyzing each tire hundreds of times per second and (at least on the better cars/systems) can individually apply or release braking pressure per tire. So even though you’re sliding, you can easily steer around the obstacle that made you slam on the brakes in the first place. This isn’t even a fair contest: a human would need four brake pedals and four feet to operate them, and still couldn’t possibly compete with the information the car is casually using at comparatively lightning speed.

The flaw happens when you go old-school and don’t use the aid the car’s tech is delivering to you. I’ve watched bad drivers point their cars straight ahead in abject horror as their ABS carefully pulses the brakes on individual tires to follow the (supposedly intelligent) human’s intended direction of choice, straight ahead into the obstacle.Report
If the downside to ABS is that braking takes several feet more than with manual brakes, that means the driver must modulate (reduce) her speed a little bit more with ABS than not.
This is an entirely one-sided bargain: safety occurs more often at slow speeds than fast.Report
Ward,
Yeah. You tell me that when you watch Google’s car drive in circles in the parking lot.
😉
they said it couldn’t be hacked!Report
If most drivers don’t understand that Snapshot monitors not their speed but other kinds of driving behavior, and they therefore drive slower in the mistaken belief that doing so will decrease their insurance rates, I can’t call that a bad thing. As I wrote in reply to Ward above: safety tends to occur at slower speeds. If you drive slower, the safe behaviors Progressive really is looking for are more likely to coincide with your driving than if you drive fast.Report
I like this construction “safety tends to occur…” very very much, Burt. Also your writing generally. I am proud to share this space with you!Report
The feeling is mutual!Report
I think that the potential downside here is that people will decline to adopt this if they think that it’s going to monitor their speed.
I don’t think that I will be adopting this technology. Even if it might be for The Public Good, I simply don’t like records being kept and have privacy concerns.Report
The real challenge will come when Progressive turns it on by default for every new policy, or makes it mandatory.Report
That only works if all of the others do it, too. Otherwise, it will almost assuredly cost them business. I’ve been with the same auto insurer for 15 years, but I’d leave tomorrow if they forced something like this on me. Unless, of course, all of their competitors did the same.Report
“That only works if all of the others do it, too. Otherwise, it will almost assuredly cost them business.”
I remember how that was the reason ATM fees would never happen.Report
I still don’t pay ATM fees.Report
Nor do I.Report
I don’t either, so long as I go to the right ATM.Report
Will,
As someone who was intimately involved in designing rates and working with these types of devices, I hate to say it, but you are wrong. The dynamics of insurance ensure that rates will go up for those who refuse to use the devices. Here is why…
Companies will be forced to introduce similar devices. This is because the safest drivers will discover they save money, and will flock to the devices.
As more companies introduce these and more safe drivers use them, the base rates for those not using the device, or not earning discounts, will go up.
To avoid rate increases, more average drivers will start to use the devices, now earning a minimal net reduction, if any, compared to the original, pre-device rates.
This will further drive up base rates. In effect, safe drivers willing to share info will be identifying themselves, and freeing themselves from the cost of subsidizing less safe drivers.
This dynamic is partially offset, though, by a positive influence. State-of-the-art devices give drivers feedback and recommendations on safer driving and safer routes that will save them money. Thus some drivers will actually drive better, reducing overall insurance costs for themselves and others.
All things considered, though, failure to adopt a device will increase your premiums long term. Probably considerably.Report
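Roger’s spiral is easy to sketch numerically. Here is a toy simulation with invented loss figures and a naive opt-in rule; nothing in it reflects real actuarial data:

```python
# Toy adverse-selection loop: safe drivers opt into monitoring and get
# priced on their own risk; the unmonitored pool's average loss (and thus
# its base rate) rises, pushing the next tier of drivers to opt in.
# All figures are invented for illustration.

drivers = [200 + 10 * i for i in range(100)]  # expected annual loss, $200 to $1,190
opted_in = set()

for year in range(1, 6):
    pool = [loss for i, loss in enumerate(drivers) if i not in opted_in]
    base_rate = sum(pool) / len(pool)  # unmonitored drivers pay the pool average
    # Anyone whose true cost is below the base rate saves money by opting in.
    for i, loss in enumerate(drivers):
        if loss < base_rate:
            opted_in.add(i)
    print(f"year {year}: base rate ${base_rate:,.0f}, {len(opted_in)} of 100 monitored")

# year 1: base rate $695, 50 of 100 monitored
# ...
# year 5: base rate $1,160, 96 of 100 monitored
```

Each year the unmonitored pool gets riskier on average, so its base rate climbs, which is exactly the dynamic Roger describes.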
Thanks for this comment, Roger.Report
Not all progress is inevitable. 😉Report
If adoption becomes really widespread, I may not have a choice (that’s what I was getting at with my last comment). That being said, depending on the circumstances, I’d rather be assumed to be a bad driver than have a device in my car that keeps track of what I am doing. I’d pay a premium for my privacy, if I can afford it. If the differential becomes too large, I’m not going to cut off my nose to spite my face, but the keeping of these records, and possible government access to them, ought to be cause for concern for libertarians.Report
Ah, but as a libertarian designing these devices there is something I forgot to share.
It is very possible to build these devices so that the insurance company gets no data. Indeed, this was a concern that we felt needed to be addressed to get more drivers to adopt them.
Here is how it can work… The insurance company gives you the device, and the device gives feedback to you and only you. It then gives you a composite score and tells you that you earned, say, a 14.7% discount. It then asks whether you would like the insurance company to be notified of the discount earned that month. You, the driver, hit the yes tab, and the discount amount, and only the discount amount, is sent to the insurance company.
The insurance company does want raw data, but the insurance company can be contractually required to PAY you for it. Now the transaction is broken in two: safe drivers get better rates and no privacy violation, and drivers who are willing to share data get data discounts. Furthermore, the insurance company can even establish third parties that remove all personal identifiers from the data. The composite data gets transmitted, those willing to share get paid for it, and no personal data is released to the evil insurance company.Report
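A sketch of the split Roger describes, with invented class and method names and the scoring stubbed out; the point is only where the data does, and does not, flow:

```python
# Sketch of Roger's privacy-preserving design: the device scores trips
# locally and, only with the driver's consent, transmits the earned
# discount and nothing else. Names here are invented for illustration.

class OnboardScorer:
    def __init__(self):
        self._trips = []  # raw telemetry never leaves this object

    def record_trip(self, telemetry: dict) -> None:
        self._trips.append(telemetry)

    def earned_discount(self) -> float:
        """Compute the composite discount locally (scoring model stubbed out)."""
        return 14.7  # placeholder for the proprietary scoring logic

def monthly_report(device: OnboardScorer, driver_consents: bool):
    """Send the insurer the discount figure, and nothing else, on consent."""
    if not driver_consents:
        return None  # nothing at all is transmitted
    return {"discount_pct": device.earned_discount()}  # no raw data included

device = OnboardScorer()
device.record_trip({"hard_brakes": 0, "miles": 12.3, "hour": 14})
print(monthly_report(device, driver_consents=True))   # {'discount_pct': 14.7}
print(monthly_report(device, driver_consents=False))  # None
```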
Extraordinary, Roger. The Beast is evil, but fair.Report
Actually the beast could be fair if it chose to be. I would not count on it though. Most insurance companies are run by capitalist pigs.Report
We must make sure The Beast screws everyone equally, that he is fair.Report
There are no discounts.
Everyone’s rates go up. If you allow data collection and location tracking, your rates go back down…to where they were before.Report
Roger,
I foresee one of two possibilities:
(1) The degree of rate-drop remains something like 15%, and I can therefore opt out of sending them my data or having it collected. No harm, no foul. Depending on circumstances, it might be worth it to me.
(2) The divergence you refer to above occurs and the initial rates are jacked up so high that you’re looking at a 70% decrease from an inflated rate. I don’t mean “artificially inflated” rate, but rather inflated from what the rates are today.
I am hoping for #1. I find it unlikely that, once all of this data is being collected by our cars, some state legislators won’t start thinking about all of the help it would be if the government had access to this data. And, beyond that, I simply don’t want an insurance company knowing the ins and outs of every peculiar thing that I do. They have demonstrated that they are neither trustworthy nor transparent with that data.Report
We’ll see what happens.
I will add to a comment below, though, which is that insurers could easily collect data on how much we drive and choose not to. They simply take our word for it. Depending on the reason why they do this, along with the fact that Progressive doesn’t monitor speed despite it being useful in setting rates, it could indicate that they wouldn’t be all that aggressive in collecting braking data and such.Report
If they can collect the data and it is relevant to rating efficiency, eventually someone will collect the data, and that will force all other companies to either collect the same data or get adverse selection. A better interpretation is that at the current level of granularity, there is substantially less value in speed data than laypeople think.
Remember, in the future, these devices have the ability to know everything. Speed, speed limit, weather, status of lights and windshield wipers, volume of stereo, acceleration forward, to the side, and braking, traffic conditions, whether you are on the phone, whether you are driving to work or shopping, whether you are coming from a bar, a synagogue or a strip joint, blah, blah blah. The devices of the future will tap into the car, Google Maps, and will use algorithms to predict what you are doing and why.
Furthermore, if the insurance company ever gets the data, they need to worry about being subpoenaed for it by whomever. That is why I never wanted the data. I just wanted to offer better prices than my competitors. After all, I am a capitalist pig.Report
I can certainly endorse the snapshot device. It came in the mail, it sat plugged into the car for like a month and then it went back to the company. Shortly after that our premiums dropped 10%. It’s been a pleasing experience.
Also Flo’s ads tickle me considerably.Report
Yeah, if Geico offered this I would do it (and assuming that it was also for a temporary “test” period). I work from home, so I imagine validating that my car does not get driven all that often, or very far, over the “test” period would probably drop my premiums quite a bit.
The fact that the device is not permanent (temporary is the only way I would do it) does suggest a way to game it though. Get the device, then bum rides, or have your grandma drive the car so it goes slow all the time for a month (or whatever the test period is).
Once the device goes back to the ins. co. and your premiums drop, start driving everywhere like a maniac again.Report
Note that State Farm offers a discount for cars subscribed to OnStar that essentially charges you only by the miles you drive, not by when or how hard you brake. Of course miles driven does have a direct connection to all risks except comprehensive on an auto policy.Report
Yet, despite this, auto insurers are historically very uninterested in verifying mileage. It wouldn’t be hard. Instead, they take our word for it. At least mine does.Report
The devices may not report on speed, but they very likely can. Even if not, speed is a major factor, and the devices will one day record it. This is just the camel’s nose under the tent.
Lots of entities would like this data. It’s much more valuable than speed cameras. Pass a law, make them mandatory, make the insurance company forward the data to gov’ts. Mail out the “speeding fines” for your last month’s lead foot.Report
Placement of car seems like more of a factor, at least in this state. In City == Less Likely to be hit by deer (at least I hope that’s what it means…)Report
Your self-driving car has a fraction of a second to make a choice: Either it swerves off the bridge, possibly killing you, or it runs over the children. What does the Google algorithm tell it to do?
This is the kind of “thinking” that drives me nuts. The guy has pinpointed something that is impossibly difficult for almost any human, then says, “see, this thing–this one thing–computers can’t do it.” And from that he assumes an argument against the computer?
This is classic nirvana thinking. I’ve found a flaw in system X; that means system Y is superior.
And to think that guy got paid real cash money to write that.Report
James and David both, I read both the Nick Carr and Alan Jacobs posts, and I don’t see anywhere that they claim superiority for the old human systems; they are just pointing out, to people who say “why don’t we have robot cars yet?”, that there is a lot to think about.
You can say they are oversimplifying the issue, but they are just using terms that people can readily understand to point out that programming computers to have a “conscience,” when we don’t even agree on what such a thing is, or what it “should” do in a given situation (the answer being, as always, “it depends”), is a hard task.
I don’t think they (or I) would disagree that a computer is going to do better, 95% of the time, than a human can. But it does get really sticky. I know the oversimplified, re-heated example annoyed David, but I will hit it anyway – even aside from the basic ethical premise of the question (save the car with one occupant, or the one with 4 kids), you have the secondary ethical/privacy issue that the cars must either “know” and/or communicate this info (# of occupants, and their ages) to each other so as to reach any decision at all – if you are Teddy Kennedy at Chappaquiddick, you may not want this sort of info stored/transmitted.
And do we all agree that we should save the 4, and not the one? Maybe I am the last of my family line, and a brilliant scientist who is going to solve global warming – what are those 4 snot-noses worth, compared to me? Maybe each car should always attempt to self-preserve, regardless of whether the other car is a schoolbus or not – after all, self-preservation is probably what any human would do instinctively, and we would only rarely fault them for that.
David, what would the “rules of the sea” say, for a captain? I don’t know enough to make the hypo, but if you had to make a collision-avoidance decision to either scuttle your 6-man boat to save a 20-man boat, or save your 6-man boat while the 20-man boat goes down, what do you do, and what would the law or most people say about your decision after you do it?
My assumption would be that you always look out for your boat, so if you must choose, you choose your boat and crew/passengers, no matter what. But I could be wrong.
Anyway, I agree that they are not digging deep. But as brief blog posts designed to spark thought and discussion, I also don’t see what is so egregious about them.Report
Glyph,
I get your point, but what those guys are saying isn’t really interesting, except at a purely philosophical level. It’s trivially easy to design the software with a “conscience” in the sense they mean: you just enter the parameters into the software. You can make it wholly deterministic, or you can base it on some level of probability. Either way, you’re not going to set up a program that’s noticeably inferior to human reactions, because we have no agreement on a standard for what counts as a good human reaction. You could probably even program it to be completely random in response and we wouldn’t be able to say it’s inferior.
But if we can come to an agreement on what’s the best response, we can program it with trivial ease. There’s no new philosophical question here, and the old philosophical question doesn’t really apply to the cars or software engineers in any independent way. It’s not a new moral problem, like the development of the atomic bomb, or the ability to genetically design our children.Report
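For what it’s worth, James’s “trivially easy” claim can be made concrete. A minimal sketch, in which OCCUPANT_WEIGHT is an entirely made-up placeholder rather than a proposal for the right value:

```python
import random

# James's point made concrete: the "conscience" is just a parameter the
# designers choose. OCCUPANT_WEIGHT is an invented placeholder, not a
# proposal for what the right value is.

OCCUPANT_WEIGHT = 1.0  # how many pedestrian lives one occupant "counts" as

def deterministic_choice(occupants: int, pedestrians: int) -> str:
    """Swerve iff the weighted occupant cost is below the pedestrian cost."""
    return "swerve" if occupants * OCCUPANT_WEIGHT < pedestrians else "hold"

def probabilistic_choice(occupants: int, pedestrians: int) -> str:
    """The same comparison, resolved by a weighted coin flip instead."""
    p_swerve = pedestrians / (pedestrians + occupants * OCCUPANT_WEIGHT)
    return "swerve" if random.random() < p_swerve else "hold"

print(deterministic_choice(occupants=1, pedestrians=3))  # -> swerve
```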
James has mostly covered it, except that it isn’t even interesting at a philosophical level. In an upcoming post entitled “Hard Cases Make Bad Algorithms” I’ll explain why.Report
it isn’t even interesting at a philosophical level
I’ll have to take your word on it. That question’s above my pay grade.Report
Hmmm, so at root you are just saying “this doesn’t interest me?”
Because A.) It’s not “trivially easy,” especially when you consider the various inputs, rankings, contingencies, and all ethical ramifications of each; B.) The post authors never claim superiority or inferiority of humans or computers as far as I can see, at all (and again, I believe computers will eventually, once the kinks are worked out, do a better job under most circumstances); C.) the statement “if we can come to an agreement on what’s the best response” – as you & David point out, this is the oldest philosophical question in the world, so you also just made the world’s biggest IF statement right there; D.) You could make a similar argument (though I wouldn’t) that the atomic bomb was no new moral question either, since the first bombs at least were not human-race-enders; they were basically just really really really really big bombs, and we’d seen just plain big bombs before.
If you are just saying the question isn’t interesting to you, or is being handled shallowly, that’s totally cool. I think they are interesting questions though, precisely because they force us to look at the oldest questions yet again, in detail and in the real world; because for maybe the first time (paradoxically, since we are talking about programs) we have real ethical choices to make in that programming (like I say, most humans would almost always instinctively opt for self-preservation no matter what; but if we program the systems to act even slightly differently than humans would historically react in that split-second, that has huge ramifications – evolutionary ones).Report
Argh, that was supposed to be to James.Report
Programming it is trivially easy. Coming up with “an” algorithm is trivially easy. Coming up with “the right” algorithm is not easy, but that’s because it’s a philosophical question, not a design and programming question. It seems to me the author is either overstating the difficulty of a design and programming question in a way that misstates the fundamental process going on in design and programming, or else he is making a big deal about an old problem just because it can now apply technologically, which misstates the fundamental process going on in philosophy.
It’s trite. And that, I think, is actually a compliment.Report
I should add this. The very nature of the design and programming of a driverless car means the designers and programmers will come up with an algorithm to create a response to that question. Whereas philosophers can dink around with it as a troublesome scenario for millennia without solving it, the engineers won’t have that option. They will devise and instantiate a solution. It’s not guaranteed to be the best solution, or the most philosophically satisfying (although they may very well ask philosophers their thoughts, as well as asking economists and actuaries). But it will be “a” solution. There’s no way around it. It will be done, and the philosophers may be among the last to notice.Report
Of course it will be “solved”, one way or another. And no one is arguing that it won’t, or shouldn’t be.
But the issues at play are both the oldest philosophical questions, and the sci-fi AI/Laws of Robotics/how-do-we-prevent-Skynet sorts of questions that we have only been seriously grappling with for the last 100 years or so. Humans respond to stimuli the way we do as a result of millions of years of evolution, blindly selecting for survival fitness. Our brains are decision-making machines, making a gazillion calculations related to survival (and maybe morality) every second, and we are only now beginning to understand how those machines work, and what questions they are even asking/answering to arrive at their calculus (42).
So, we built computers; and we are currently in the process of a massive conscious outsourcing of unconscious decision-making to them (decisions that we make now largely by instinct/emotion, as a result of an evolutionary process we neither controlled nor fully comprehend – that is, we comprehend the basic process, but not why a particular trait was selected for/against).
You don’t see this as an interesting point in time? As awesome a power as the bomb was, it is still a “dumb” thing – the bomb doesn’t decide anything, the people do. Once a driverless system is instantiated, the system will decide (based on the rules we gave it). And some people will die as a result of the choices that we and the system have made. They won’t always be the “right” people (whoever they are, and depending on who’s asking).
Sure, we can and will make tweaks to that system, to try to make it conform better to our ideas about morality and best practices; but that just brings us to the next part – what we think humans would or should do, and what humans would actually do once they are in the lifeboat deciding who to eat, are two different things – and one of those things may derive from our philosophical ideas of morality and reason, and one may derive from biological survival instincts/perpetuation of the species. I hope we, and our proxy in the system, choose the right one.
Your position seems to boil down to “the boffins will figure it out”. And they will, just as they figured out how to make an atomic bomb work.
Then as now, scale matters, and we will continue to debate the thorny areas, and how to best handle the downsides.
Let me make a guess – aside from BSG, you are not much of a sci-fi guy, are you? 🙂Report
So, we built computers; and we are currently in the process of a massive conscious outsourcing of unconscious decision-making to them (decisions that we make now largely by instinct/emotion, as a result of an evolutionary process we neither controlled nor fully comprehend – that is, we comprehend the basic process, but not why a particular trait was selected for/against).
You don’t see this as an interesting point in time?
Only from a technical, engineering point of view. Because we’re not actually giving them “decision making” authority; we’re just giving them algorithms that lead to deterministic decisions. Give me an explicit set of rules to follow that determine the outcome and I’m no longer a decision maker, just a rule enforcer. If we don’t make it actually deterministic, then we’ve made it probabilistic, which simply means that the set of explicit rules I’ve been given to follow includes a requirement to “toss dice here; if the roll comes up 1-2, go to line 24; if it comes up 3-4, go to line 76…” etc.
It’s a more high tech version of a thermostat (when the temperature drops below 70, turn “on”), but that’s all.
Get back to me when we give the program discretion; then I’ll really be interested.Report
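Taken literally, James’s two rule-followers might look like this; it’s a toy, and the final dice branch is invented since his example trails off:

```python
import random

# James's two rule-followers, taken literally. Neither one "decides"
# anything: the first applies a fixed threshold, the second rolls the
# dice its rules tell it to roll. The final branch is invented, since
# the example in the comment trails off.

def thermostat(temperature_f: float) -> str:
    return "on" if temperature_f < 70 else "off"

def dice_rule() -> int:
    """Return the line number to 'go to', exactly as the explicit rules dictate."""
    roll = random.randint(1, 6)
    if roll <= 2:
        return 24
    if roll <= 4:
        return 76
    return 1  # invented: restart from the top on a 5 or 6

print(thermostat(68.5))  # -> on
```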
I’m more interested in the self-modifying programs.
Particularly the ones the government uses. 😉
(note: this is a joke. the ones I’m aware of the government using are exceptionally boring.)Report
It’s a more high tech version of a thermostat…that’s all.
In a way, an atom bomb is a more high-tech version of a firecracker. And a machine gun is a more high-tech version of a kinetic projectile weapon such as a slingshot. And a drone is a more high-tech version of an RC plane.
Coincidentally, I am no longer allowed to play with any of these things, by court order.
we’re just giving them algorithms that lead to deterministic decisions. Give me an explicit set of rules to follow that determine the outcome and I’m no longer a decision maker, just a rule enforcer… when we give the program discretion; then I’ll really be interested.
And how do we know that this is not also the way that we operate? Maybe we are operating under a similar set of rules and have no discretion either. We don’t even know what all our own rules are, much less exactly why each exists – and we are getting ready to try to pass some semblance of one version of them on. We will be instantiating the “predetermined” worldview; hope the “free will” one wasn’t important.
I deal in a small, simple way with business rules at my job. They have far, far fewer variables and unknowns, and nobody dies if I get one wrong (though it can still be costly monetarily). And even then, there are still many, many situations in which a rule cannot be coherently crafted or consistently applied, so we refer the situation to a human, who luckily has more than fractional seconds to make a decision (the just-awakened and startled human driver will have no such luxury).
Anyway, I guess I understand why you don’t find it interesting. But I do (so did Asimov and his descendants). On balance it will be good. But there will be some serious bumps and lawsuits on the way, I’d imagine.Report