The Armchair CEO

Vikram Bath

Vikram Bath is the pseudonym of a former business school professor living in the United States with his wife, daughter, and dog. (Dog pictured.) His current interests include amateur philosophy of science, business, and economics. Tweet at him at @vikrambath1.

16 Responses

  1. Kazzy says:

    Less seriously, you see a similar phenomenon when it comes to the picking of All-Stars in the various sports leagues. Before the ink is dry on the initial announcement, you have scribes penning pieces about all the “snubs”… all the guys who should have been selected but weren’t. But what the vast majority of them fail to do is state who among the selectees should be removed in order to make room for their snubs. See, the rosters are finite. Much like a top-10 list can have 10 and only 10 entries, the National League All-Star team can only have so many players (the number keeps growing, in part to mitigate the “snub” issue). So, if you really, really think so-and-so was deserving of a spot, you must necessarily think that someone who was selected was NOT deserving of a spot. I attribute part of that to the writers wanting to avoid calling out players, players they might one day rely upon for a story. But I think part of it is due to a similar phenomenon, namely the ease of criticizing others when not knowing the context of their decision and/or the limitations on their decision making.

    This is, of course, not to say that anyone is above criticism. If a CEO or anyone else genuinely demonstrates poor judgement, not just based on outcome but on process, and a case can be made to that effect, it absolutely should be made.

  2. Chris says:

    Fundamental attribution error.

  3. NewDealer says:

    My dad once sat in on my Federal Courts class at law school.

    We were reading a Supreme Court case about preemption and whether a state court could hear certain cases that would involve questions of Federal Law.

    http://en.wikipedia.org/wiki/Merrell_Dow_Pharmaceuticals_Inc._v._Thompson

    The professor criticized the tactics used by the plaintiff’s lawyers in this case, particularly because the case had to go all the way to the Supreme Court just to determine whether it could be heard in state court, delaying justice for their clients.

    My dad said that at the time this kind of litigation was new and lawyers were still figuring out how to do it. A lot of my classmates seemed to enjoy seeing the workaday lawyer challenge the abstractions of a professor.

    That is all.

  4. BlaiseP says:

    Good CEOs are like good generals: even the best ones have practical limits on what’s possible in a given situation. And like good generals, they’re often taking orders from faithless idiots.

    Look, this is Leadership 101: follow the money. First question every consultant ought to ask is “How does your firm make money?” If you don’t get a good answer to that question, go down the hall and ask the financial people. They know. They also know where a firm is burning money.

    Second question: “Who is your target customer?” That’s where most new firms are screwing up. Like bad novelists, they aren’t writing for anyone in particular and therefore for nobody.

    JCPenney has a big problem. Its customer base is Not Buying Things. Sacking Ron Johnson will not change that fact. All the old big box retailers have the same problem, changing demographics and economic conditions. Two possible solutions: chase the old customer or attract new customers. A retail store is no different than a restaurant or a theatre. It needs an audience.

    The military has tried all these Network Centric Paradigms for years and they can never, ever get it right. Technology is not a solution, it is a problem. Want to win wars? Want to make money? Provide your own people with meaningful objectives. Know your customer — by name. Do not pretend information is a substitute for sound judgement: you will never, ever have enough information in real time to make a decision: by the time the solution becomes obvious, you’re being overrun and you’ll be routed.

    • Burt Likko in reply to BlaiseP says:

      Do not pretend information is a substitute for sound judgement: you will never, ever have enough information in real time to make a decision: by the time the solution becomes obvious, you’re being overrun and you’ll be routed.

      At least as important as having information is knowing what to do with that information. Some information is superficial and unimportant. Some information is deceptive and conceals more than it illuminates. Information is not a substitute for thought.

      “Data” is worse than information in that respect.

      • BlaiseP in reply to Burt Likko says:

        This. Moah data, moah problems. The only useful information is directly related to the objective at hand — and such information is hard to come by and it goes stale with amazing speed. Look at all this data the NSA is collecting. Who could possibly consume it in a meaningful way?

        Years ago, the military was working on a specification for a standard radio called JTRS. Seems all the different branches of service were on their own standard, to the point where ground troops weren’t able to coordinate with their close air support — that’s just one case in point. The US Marine Corps, which had a wonderfully effective structure in the MEU, already had a working solution. It should have been adopted as-is.

        JTRS failed horribly. These one size fits all programs create nothing but problems. Look at the Joint Strike Fighter, miserable failure, years behind schedule. Should have never been allowed to start, by my estimation of things.

        But some programs manage to solve their problems. Again, I return to the US Marine Corps, whose leadership I view as the best exemplar. The USMC stuck with the V-22 Osprey when everyone else had largely given up on it. They’ve worked with it and solved most of its problems: they made it work because they needed the Osprey to accomplish their mission. Sometimes, all you need to succeed in a difficult situation is a customer who believes you care about his problem.

  5. kenB says:

    Once I sat down for a game and wrote predictions for each play and how things actually turned out

    Out of curiosity, how did you do this without knowing what play had been called ahead of time?

  6. DavidTC says:

    To avoid this, you really need to make your predictions in advance, on paper, and in a single repository. If you fail to document your predictions, you are likely to lie to yourself retrospectively about how strongly you felt about the result. If your prediction was wrong, you can tell yourself you weren’t that confident in the first place anyway. If it was right, you’ve reinforced your own hubris. If you don’t document all your predictions in a single place, you can pick which predictions you count. This is good for your self-esteem, but bad for your development as a thinker.

    This is one of the less stupid ideas over on Less Wrong. (I.e., it’s one of the few ideas not related to an AI apocalypse and/or immortality.)

    They assert you should use probabilities, too. Instead of just saying ‘I predict X’, you should say ‘I think X is 75% likely’.

    This is more complicated to score, but what you can do is gamble against yourself, assign the ‘won’t happen’ side negative and the ‘will happen’ side positive, and try to average to zero over a set of predictions. (A ‘set’ being predictions you make all at once, otherwise you’ll try to err to get back to zero when making new predictions.)
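For readers who want to try this, a minimal sketch of scoring documented probabilistic predictions. It uses the standard Brier score rather than the exact zero-averaging gamble described above; the function name and sample data are hypothetical, made up for illustration:

```python
def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes.

    predictions: list of (probability, happened) pairs, where
    `happened` is True if the predicted event occurred.
    0.0 is perfect; always guessing 50% scores 0.25.
    """
    return sum((p - (1.0 if happened else 0.0)) ** 2
               for p, happened in predictions) / len(predictions)

# Three predictions documented in advance, scored after the fact.
preds = [(0.75, True), (0.60, False), (0.90, True)]
print(brier_score(preds))  # ≈ 0.144
```

A lower score means better calibration, and because every documented prediction enters the average, there is no cherry-picking which ones to count.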

    • Vikram Bath in reply to DavidTC says:

      I am a huge Yudkowsky fan. I don’t read anything on Less Wrong that isn’t from him though. Perhaps that’s why I don’t remember anything about an AI apocalypse.

      >They assert you should use probabilities, too

      This is preferable. I would say my list is a minimal one. If you can document probabilities, that would be better, I suppose, but if it gets in the way of your following the process in the first place, then it can be skipped.

      • DavidTC in reply to Vikram Bath says:

        I am a huge Yudkowsky fan. I don’t read anything on Less Wrong that isn’t from him though. Perhaps that’s why I don’t remember anything about an AI apocalypse.

        You must be in a different simulated post-singularity universe than I am. (Another particularly odd idea over at Less Wrong.)

        Yudkowsky is the guy who founded and runs the Machine Intelligence Research Institute, a place dedicated to making sure the AIs that (they assure us) are very soon going to take over the world are _good_ AIs.

        Hell, he’s got some sort of idiotic idea that even AIs in a completely closed environment aren’t safe, that if they are allowed to communicate at all, they can convince people to let them out.

        That’s Yudkowsky himself.

        Less Wrong is a great place to learn about probability and the stupid logical fallacies and biases that everyone uses, and how to avoid them. It also has some interesting ideas on moral philosophy and calculations, and even game theory.

        And those things, incidentally, are what Less Wrong is _supposed_ to be about.

        It gets rather stupid when it strays away from those things. And it does stray away from those things. All the time. (Although I haven’t been reading it for a year or two because, well, the basics are easy to understand, and the complicated math stuff is not actually needed by anyone, and the rest is gibberish. I probably should figure out if there’s a tag for ‘interesting philosophical stuff’ or something.)

      • Murali in reply to Vikram Bath says:

        Actually Less Wrong gets it laughably wrong when they talk about moral philosophy. Unlike in science, where you can bootstrap your way to correct or nearly correct credences no matter your starting point, you can’t do the same with moral philosophy.

      • Murali in reply to Vikram Bath says:

        In fact, once I’ve submitted my thesis, I will write a post or a series of posts that explains why if you’re a Bayesian, you shouldn’t be any kind of utilitarian at all.

      • Really what I’ve read of his was when he was blogging with Robin Hanson on Overcoming Bias. Everything I read of his on Less Wrong dates from that era. It’s mostly what exists here, though I haven’t gotten through all of it: http://wiki.lesswrong.com/wiki/Sequences

        Perhaps if you stick to that page you’d be restricted to interesting philosophical stuff?

        I will write a post or a series of posts that explains why if you’re a Bayesian, you shouldn’t be any kind of utilitarian at all.

        I’ll look forward to that!

      • Murali in reply to Vikram Bath says:

        I’ll do it as a series of posts. Each one will be a response to what Yudkowsky writes in his metaethics sequence.
