The Armchair CEO
Someone asks you “heads or tails?” You say heads; it’s tails. You are forever the dunce, mocked mercilessly in vulgar drinking songs and cutesy nursery rhymes alike.
It’s all obvious once you know the answer.
From the comfort of our armchairs it’s easy to make claims about what we would have done had we been CEO or what we would have done had we been that patient’s doctor. Our brains see effects and attribute causes, and when we see a bad effect, our vision seeks idiots and not human beings trying to navigate uncertainty as best they can with whatever limited information they had available at the time.
It’s an unfortunate fact that once an individual makes a decision, the context of that decision is forever lost. You can never capture that moment of uncertainty again. Documenting your thought process would help, but no one actually does this.
Well, that’s not exactly true. The opening anecdote of the coin flip rings false because we know exactly what it is like to predict a coin flip without prior knowledge. Yes, maybe that other guy would have said “tails”, but you are not an idiot for picking heads.
Once the situation gets more complicated though, our ability to empathize evaporates like gasoline, and all the heads are wearing dunce caps. The doctor sees a patient with a particular condition that could have been identified with a simple test up front, but he does not see the myriad other possibilities that had to be eliminated first*.
Sure, the Russians warned the US government about the Tsarnaev brothers, but we don’t hear about the other 1000 leads received in that particular minute of that particular day, each of which was similarly ominous.
The Atlantic provides a perfectly adequate explanation of why Ron Johnson failed as JC Penney’s CEO. Do you doubt, however, that its author could have penned a perfectly adequate explanation of Ron Johnson’s success had things turned out differently? Of course not. Because once you have the result, it is a simple matter to hunt down the explanatory facts. (“The poor and middle class are most attracted to discounts, but their purchasing power is vanishing, so it was obvious from the beginning that Johnson’s strategy of centering on higher-end brands would work.”) The results tell you what you can safely ignore and allow you to hunt solely for confirming evidence. It isn’t that you ignore contrary evidence but that you were never aware of it in the first place, since you never sought it.
This is a real danger. It allows you to convince yourself of your brilliance. To avoid this, you really need to make your predictions in advance, on paper, and in a single repository. If you fail to document your predictions, you are likely to lie to yourself retrospectively about how strongly you felt about the result. If your prediction was wrong, you can tell yourself you weren’t that confident in the first place anyway. If it was right, you’ve reinforced your own hubris. If you don’t document all your predictions in a single place, you can pick and choose which predictions count. This is good for your self-esteem, but bad for your development as a thinker.
Before doing this exercise, I genuinely thought that I was a better play caller watching from home than multi-million dollar professional football coaches. Once I sat down for a game and wrote predictions for each play and how things actually turned out, I realized that my intuition was not better than a team of professionals who study their subject all day**. It was only in retrospect that I could see my hubris as hubris. Now, I can’t believe I was ever so stupidly arrogant.
Of course, it helps that everyone else seems to have the same issue.
* I would be interested in seeing what doctors thought of the treatment they had provided their own patients if the details were disguised so that they didn’t know the patients were their own. I wouldn’t be surprised if they found an idiot or two.
** I still think that teams punt too often. I’ve seen academic research that supports me on that though.
Photo credit: Vishal Patel of Flickr
Less seriously, you see a similar phenomenon when it comes to the picking of All-Stars in the various sports leagues. Before the ink is dry on the initial announcement, you have scribes penning pieces about all the “snubs”… all the guys who should have been selected but weren’t. But what the vast majority of them fail to do is state who among the selectees should be removed in order to make room for their snubs. See, the rosters are finite. Much like a top-10 list can have 10 and only 10 entries, the National League All-Star team can only have so many players (the number keeps growing, in part to mitigate the “snub” issue). So, if you really, really think so-and-so was deserving of a spot, you must necessarily think that someone who was selected was NOT deserving of a spot. I attribute part of that to the writers wanting to avoid calling out players, players they might one day rely upon for a story. But I think part of it is due to a similar phenomenon, namely the ease of criticizing others without knowing the context of their decision and/or the limitations on their decision making.
This is, of course, not to say that anyone is above criticism. If a CEO or anyone else genuinely demonstrates poor judgement, not just based on outcome but on process, and a case can be made to that effect, it absolutely should be made.
Fundamental attribution error.
More related reading.
My dad once sat in my Federal Courts class at law school.
We were reading a Supreme Court case about preemption and whether a state court could hear certain cases that would involve questions of federal law.
http://en.wikipedia.org/wiki/Merrell_Dow_Pharmaceuticals_Inc._v._Thompson
The professor criticized the tactics used by the plaintiff’s lawyers in this case, especially because they delayed justice for their clients: the case had to go all the way to the Supreme Court just to determine whether it could be heard in state court.
My dad said that at the time this kind of litigation was new and lawyers were still figuring out how to do it. A lot of classmates seemed to like the workaday lawyer criticizing the abstractions of a professor.
That is all.
Good CEOs are like good generals: even the best ones have practical limits on what’s possible in a given situation. And like good generals, they’re often taking orders from faithless idiots.
Look, this is Leadership 101: follow the money. First question every consultant ought to ask is “How does your firm make money?” If you don’t get a good answer to that question, go down the hall and ask the financial people. They know. They also know where a firm is burning money.
Second question: “Who is your target customer?” That’s where most new firms are screwing up. Like bad novelists, they aren’t writing for anyone in particular and therefore for nobody.
JCPenney has a big problem. Its customer base is Not Buying Things. Sacking Ron Johnson will not change that fact. All the old big box retailers have the same problem, changing demographics and economic conditions. Two possible solutions: chase the old customer or attract new customers. A retail store is no different than a restaurant or a theatre. It needs an audience.
The military has tried all these Network Centric Paradigms for years and they can never, ever get it right. Technology is not a solution, it is a problem. Want to win wars? Want to make money? Provide your own people with meaningful objectives. Know your customer — by name. Do not pretend information is a substitute for sound judgement: you will never, ever have enough information in real time to make a decision: by the time the solution becomes obvious, you’re being overrun and you’ll be routed.
At least as important as having information is knowing what to do with that information. Some information is superficial and unimportant. Some information is deceptive and conceals more than it illuminates. Information is not a substitute for thought.
“Data” is worse than information in that respect.
This. Moah data, moah problems. The only useful information is directly related to the objective at hand — and such information is hard to come by and it goes stale with amazing speed. Look at all this data NSA is collecting. Who could possibly consume it in a meaningful way?
Years ago, the military was working on a specification for a standard radio called JTRS. Seems all the different branches of service were on their own standard, to the point where ground troops weren’t able to coordinate with their close air support — that’s just one case in point. The US Marine Corps, which had a wonderfully effective structure in the MEU, already had a working solution. It should have been adopted as-is.
JTRS failed horribly. These one size fits all programs create nothing but problems. Look at the Joint Strike Fighter, miserable failure, years behind schedule. Should have never been allowed to start, by my estimation of things.
But some programs manage to solve their problems. Again, I return to the US Marine Corps, whose leadership I view as the best exemplar. The USMC stuck with the V-22 Osprey when everyone else had largely given up on it. They’ve worked with it, solved most of its problems: they made it work because they needed the Osprey to accomplish their mission. Sometimes, all you need to succeed in a difficult situation is a customer who believes you care about his problem.
http://www.dailydot.com/lol/dog-butthole-jesus-origins/
There are multiple points proved by said link.
Once I sat down for a game and wrote predictions for each play and how things actually turned out,
Out of curiosity, how did you do this without knowing what play had been called ahead of time?
To avoid this, you really need to make your predictions in advance, on paper, and in a single repository. If you fail to document your predictions, you are likely to lie to yourself retrospectively about how strongly you felt about the result. If your prediction was wrong, you can tell yourself you weren’t that confident in the first place anyway. If it was right, you’ve reinforced your own hubris. If you don’t document all your predictions in a single place, you can pick and choose which predictions count. This is good for your self-esteem, but bad for your development as a thinker.
This is one of the less stupid ideas over on Less Wrong. (I.e., it’s one of the few ideas not related to an AI apocalypse and/or immortality.)
They assert you should use probabilities, too. Instead of just saying ‘I predict X.’, you should say ‘I think X is 75% likely’.
This is more complicated to score, but what you can do is gamble against yourself: assign the ‘won’t happen’ side a negative value and the ‘will happen’ side a positive one, and try to average to zero over a set of predictions. (A ‘set’ being predictions you make all at once; otherwise you’ll try to err back toward zero when making new predictions.)
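A standard alternative to the zero-averaging scheme above is the Brier score: the mean squared difference between the probability you stated and what actually happened. Here’s a minimal sketch in Python — the function and variable names are my own, not anything from Less Wrong:

```python
def brier_score(predictions):
    """Score a list of (probability, outcome) pairs.

    probability: your stated confidence that the event would happen.
    outcome: True if it did happen.
    0.0 is a perfect score; always hedging at 50% scores 0.25.
    """
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

# Example: three predictions, each made at 75% confidence,
# two of which came true.
log = [(0.75, True), (0.75, True), (0.75, False)]
score = brier_score(log)
```

The score rewards calibration: if your “75% likely” predictions come true about 75% of the time, your average penalty is lower than if you had overclaimed at 95% or hedged at 50%.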
I am a huge Yudkowsky fan. I don’t read anything on Less Wrong that isn’t from him though. Perhaps that’s why I don’t remember anything about an AI apocalypse.
>They assert you should use probabilities, too
This is preferable. I would say my list is a minimal one. If you can document probabilities, that would be better, I suppose, but if it gets in the way of your following the process in the first place, then it can be skipped.
I am a huge Yudkowsky fan. I don’t read anything on Less Wrong that isn’t from him though. Perhaps that’s why I don’t remember anything about an AI apocalypse.
You must be in a different simulated post-singularity universe than I am. (Another particularly odd idea over at Less Wrong.)
Yudkowsky is the guy who founded and runs the Machine Intelligence Research Institute, a place dedicated to making sure the AIs that (they assure us) will very soon be taking over the world are _good_ AIs.
Hell, he’s got some sort of idiotic idea that even AIs in a completely closed environment aren’t safe, that if they are allowed to communicate at all, they can convince people to let them out.
That’s Yudkowsky himself.
Less Wrong is a great place to learn about probability and the stupid logical fallacies and biases that everyone uses, and how to avoid them. It also has some interesting ideas on moral philosophy and calculations, and even game theory.
And those things, incidentally, are what Less Wrong is _supposed_ to be about.
It gets rather stupid when it strays away from those things. And it does stray away from those things. All the time. (Although I haven’t been reading it for a year or two because, well, the basics are easy to understand, and the complicated math stuff is not actually needed by anyone, and the rest is gibberish. I probably should figure out if there’s a tag for ‘interesting philosophical stuff’ or something.)
Actually, Less Wrong gets it laughably wrong when they talk about moral philosophy. Unlike in science, where you can bootstrap your way to correct or nearly correct credences no matter your starting point, you can’t do the same with moral philosophy.
In fact, once I’ve submitted my thesis, I will write a post or a series of posts that explains why, if you’re a Bayesian, you shouldn’t be any kind of utilitarian at all.
Really what I’ve read of his was when he was blogging with Robin Hanson on Overcoming Bias. Everything I read of his on Less Wrong dates from that era. It’s mostly what exists here, though I haven’t gotten through all of it: http://wiki.lesswrong.com/wiki/Sequences
Perhaps if you stick to that page you’d be restricted to interesting philosophical stuff?
I will write a post or a series of posts that explains why if you’re a Bayesian, you shouldn’t be any kind of utilitarian at all.
I’ll look forward to that!
I’ll do it as a series of posts. Each one will be a response to what Yudkowsky writes in his metaethics sequence.