Drinking Cheap Vodka Will Kill You: A Twitter Parable

John McCumber

John McCumber is a cybersecurity executive, retired US Air Force officer, and former Cryptologic Fellow of the National Security Agency. In addition to his professional activities, John is a former Professorial Lecturer in Information Security at The George Washington University in Washington, DC, and is currently a technical editor and columnist for Security Technology Executive magazine. John is the author of the textbook Assessing and Managing Security Risk in IT Systems: A Structured Methodology.


23 Responses

  1. Damon says:

    Well, if you’re left of center, I’m sure this “minor error in our safety protocols” will be corrected forthwith. If you’re not, well, maybe someone will get around to it next century. 🙂

I’ve always wondered why anyone would want to put their “data” in the cloud, as you said, on someone else’s computer. Never struck me as smart or secure. But taking a physical backup is just so 20th century!

    • Kazzy in reply to Damon says:

A comedian was recently perma-banned from Twitter for wishing death on Lindsay Graham. Can we put aside the notion that Twitter only bans conservatives?

      • CJColucci in reply to Kazzy says:

For certain values of “we,” the answer is “No.”

      • Damon in reply to Kazzy says:

That was for an actual death threat, not a machine reading error, which is what the author experienced.

        • Kazzy in reply to Damon says:

You claimed the response would be based on his politics. Did you forget?

          • Damon in reply to Kazzy says:

No, I didn’t forget. You just changed the scenario: you’re comparing an actual death threat to a machine reading error. Not exactly a similar situation. In the case I commented on, he was “auto”-suspended due to a code error, and the suspension is going to be reviewed by a human at some point. My comment was directed at the likelihood that the wait time to review his case would be affected by his political affiliation or what he’s said on the platform, with the employee in no rush to clear him given the suspendee’s politics and the employee’s politics. The permaban was an action taken by an employee, and was “post review.”

I’d think you’d have figured that out.

    • Philip H in reply to Damon says:

The AI and ML folks actually do “train” their algorithms all the time, and the algorithms are fairly capable of learning differences in phrasing and the like. It’s nowhere near as good as actual humans yet (!), but the AI community prides itself on how quickly its AI can “learn.”

As to the cloud: by paying a provider, you get rid of the physical infrastructure cost. AWS or Google (who also do cloud hosting) make money by spreading the very real costs of their server farms over multiple clients (including a surprising number of non-classified federal, state, and local government systems). It’s mostly secure, but do note the DoD built its own cloud for just this reason.

      • Damon in reply to Philip H says:

My comment was more focused on the individual consumer. I don’t see the value of getting rid of my hard drive and saving my vacation photos to the cloud.

  2. Slade the Leveller says:

    I had something similar happen on Facebook, but without a timeout. I uploaded a picture of my Thanksgiving dinner and it got flagged for drug content. It was a pretty good prime rib, but not that good!

I don’t know where any of these social media sites are in terms of machine learning as far as parsing the English language goes. Maybe it’s early in the game and they don’t get context yet, but these seem like harmless errors to me. Maybe they’re just overwhelmed with the amount of stuff that gets posted and they’re taking the easy way out for now. Besides, as you pointed out, humans are just as capable of misunderstanding each other without the filter of a Tweet.

Stick to gin, it attracts a better class of drinker anyway. 😉

  3. DensityDuck says:

Well, that’s just a private company saying that it didn’t care to continue to associate with someone like you. I’m sure that if you’re that salty about it you can go make your own Twitter, where you can be just as racist as you like. After all, we all knew what you really meant.

  4. Marchmaine says:

The area of tech I work in is adjacent to “sentiment analysis” … I hear it’s come a long way in the past decade, but from what I hear, sarcasm’s still a bitch. Should be using SarCaptcha.

  5. Brandon Berg says:

I read about how Facebook has all my data and knows everything about me and uses this information to keep me trapped in an ideological bubble where everyone agrees with me and to feed me ads that I’m powerless to resist, and then I go on Facebook and I see a bunch of posts about how great it would be if Bernie Sanders and Alexandria Ocasio-Cortez could both be President at the same time, and ads for stuff I either just bought or have absolutely no desire for.

  6. Jaybird says:

    When I was young, I thought the Turing Test was about the sophistication of the machine.

    When I was older, I realized that the Turing Test was about the sophistication of the human.

Now that I am in my dotage, I realize that my pocket was being picked by the guy who said “talk to this monitor and tell me whether or not you think that someone else is typing at the other end.”

  7. Michael Cain says:

Cloud computing, loosely, is when you don’t own or maintain the hardware and system software. Your Air Force hardware served Air Force clients. No one called up to say, “I need 100 virtual processors with 16G, Linux, and Perl 5.24 this month, and 24 terabytes of storage with near real-time synchronization across North America and Europe.” That’s what Rackspace or AWS sell. And they’re fine if the next month you call up and tell them you only need 50 processors, or 500, and want to add storage in Asia.
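The rent-what-you-need model described above can be sketched in a few lines. This is purely illustrative: the `CloudProvider` class and its billing rates are invented for the example, not any real provider’s API.

```python
# Hypothetical sketch of elastic cloud provisioning: you rent capacity
# month to month instead of owning hardware, and scaling up or down
# is just a new request to the provider.

class CloudProvider:
    VCPU_RATE = 25.0     # dollars per virtual processor per month (made up)
    STORAGE_RATE = 20.0  # dollars per terabyte per month (made up)

    def __init__(self):
        self.vcpus = 0
        self.storage_tb = 0

    def provision(self, vcpus, storage_tb):
        """Set this month's allocation; resizing replaces the old request."""
        self.vcpus = vcpus
        self.storage_tb = storage_tb

    def monthly_bill(self):
        return self.vcpus * self.VCPU_RATE + self.storage_tb * self.STORAGE_RATE


account = CloudProvider()
account.provision(vcpus=100, storage_tb=24)  # this month: 100 processors, 24 TB
bill_this_month = account.monthly_bill()

account.provision(vcpus=50, storage_tb=48)   # next month: half the compute, double the storage
bill_next_month = account.monthly_bill()
```

The point of the sketch is the second `provision` call: the client never buys or decommissions a machine, it just changes the number in the request, and the bill follows.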

When I was in graduate school getting an applied math degree one of the professors used to complain, when he was handing back homework, “Mr. Cain, of course, simply beat the problem to death with a computer.” The current AI and ML fall in the category of beating the problem to death with a computer. With varying degrees of success. I read the other day that the people at Google Translate had trained a neural network that had a trillion coefficients. Now that’s a club suitable for beating a problem. Twitter and Facebook just need bigger clubs.

  8. Dr X says:

I had a Twitter account for ten years when Twitter abruptly suspended me permanently. I appealed the suspension. A few weeks later, they reinstated me with an apology indicating that an algorithm caused the erroneous ban. A few months after that, they banned me permanently again. This time they suspended the account because, they said, I created the account to evade a permanent suspension. It was the only Twitter account I ever had. I appealed again, but they told me the suspension stands. I created a new account to evade the ban. I’ve had it for months. So far, no problem.

  9. Burt Likko says:

    The Optimistic Technologist: “Yes, these are certainly technical problems but more and more sophisticated AI will eventually get those solved.”

    The Skeptic: “Really? If driving is tough to teach an AI how to do, how much harder is literally the entire rest of human existence?”

    The Constitutional Scholar: “What’s needed is a system of checks and balances, by which a scraper identifies problematic tweets which are then reviewed by a human being. That way we can get a good overall result quickly without needing to develop a super-sophisticated AI able to understand enough nuance so as to pass a Turing Test; we will instead have actual people involved.”

    The Venture Capitalist: “And who is going to pay for all those people that ‘we’ are supposed to employ in this scheme? Twitter’s stockholders? Twitter is profitable, but add the cost of thousands of censors (and the bad P.R. of hiring people who are, after all, censors) and maybe it stops being so. If this is a necessary part of what it takes to have Twitter or something like it, that makes overhead jump up significantly. Sounds bad for the bottom line.”

    The Socialist: “So let’s nationalize Twitter, make it a publicly-available utility rather than a for-profit company!”

    The Trumpist Republican: “Hey, you may be on to something there.”

    The Mainline Democrat: “Get real. We’re not going to nationalize Twitter. That’s one of the silliest things I’ve ever heard, and I’ve heard a lot of silly things from the likes of both of you two over the past few years.”

    The Ombudsman: “Twitter is already under fire for alleged ideological bias in the way it enforces its TOS and disciplines its users for violating them. Adding human censors, who will inevitably have actual biases themselves, will only aggravate that bad publicity. And we don’t need to address whether Twitter really is biased in its enforcement or not; what matters here is how it’s perceived.”

The Jaded Blog Editor: “More importantly, what standard will those human reviewers apply? There is no standard, articulated or articulable, under which a clever enough troll cannot find a way to convey a toxic message without violating the black-letter rule. It can’t be done.”

    The Law Student: “Sure it can! Here, let me try. [Standard X]”

    The Troll: “[Toxic message Y, which adheres to Standard X]”

    The Law Student: “Damnit. Let me try again.”

    The Jaded Blog Editor: “Don’t bother. If you’re going to have a human censor, that censor has to make a judgment call. Every time. Sometimes the call will be easy but sometimes it’ll be hard. But it takes a human to do it, and that’s why you can’t come up with a black-and-white standard at all.”

    The Idealist: “But surely there can be an appropriate level of judgment about when, how, and what to censor while still permitting free-flowing and flourishing debate!”

The Human Resources Manager: “Let’s assume that there is, at least as a platonic ideal. To get there, we’d have to train an army of censors in the institutional culture we wish for them to police, against the kinds of toxic statements we can imagine, with the certain knowledge that they will almost immediately encounter toxicity of a kind we never imagined. And the Venture Capitalist isn’t going to like paying for that, especially if it’s doomed to failure from the get-go.”

    The Frustrated User: “So what can Twitter do?”

    The Stoic: “Twitter can do its best. It may never be good enough, but it’s all Twitter can do. It won’t be perfect but we want Twitter to try to get it as right as it can anyway. And we, the people affected, need to temper our expectations so as to not let the perfect become the enemy of the good.”

— Sometimes I really miss having @JasonKuznicki around these parts.

    • Chip Daniels in reply to Burt Likko says:

The Chip: “Twitters, that’s the thing all the kids on Myspace are talking about, right?”

    • Jaybird in reply to Burt Likko says:

A fitting tribute.

    • North in reply to Burt Likko says:

      Well done, he’d probably be flattered!

And there are so, so, so many people I miss. Damn social media to heck. It was way better when it was just the blogosphere.

    • Oscar Gordon in reply to Burt Likko says:

Epic!

    • The Skeptic: “Really? If driving is tough to teach an AI how to do, how much harder is literally the entire rest of human existence?”

I’m not nearly as optimistic about AI as the real optimists are, but there needs to be a response. Driving requires sub-second response, feedback loops to be sure the car is doing what it’s been told, and continuous recognition of an entire dynamic environment. Moderation? The entire rest of human existence as expressed through typed text, 99% of which doesn’t have to be understood, only recognized as speech within the terms of service. One-second response is fine. Do you remember Google Translate from ten years ago? When you had to specify the source language, and what you got back was often really bad? Today it auto-recognizes >100 languages, deals with a lot of misspellings, and almost never produces gibberish. If I were in charge at Twitter, I’d put my humans to work purely on meta-moderation (a la Slashdot for the last 20 years): finding bad decisions. Then add those to the training database.
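The meta-moderation loop proposed above can be sketched simply: the machine makes every first-pass call, humans audit only the machine’s decisions, and any decision they overturn becomes labeled training data for the next model. Everything here is invented for illustration; the phrase list stands in for a real trained classifier.

```python
# Minimal sketch of meta-moderation: humans don't moderate tweets,
# they audit the machine's moderation decisions, and overturned
# decisions are collected as new training examples.

BANNED_PHRASES = {"kill you"}  # toy stand-in for a trained model


def auto_moderate(tweet):
    """First-pass machine decision: True means flagged as a violation."""
    return any(phrase in tweet.lower() for phrase in BANNED_PHRASES)


def meta_moderate(tweets, human_review):
    """Humans check the machine's calls; mismatches become training data."""
    training_additions = []
    for tweet in tweets:
        machine_says = auto_moderate(tweet)
        human_says = human_review(tweet)  # ground-truth label from a person
        if machine_says != human_says:
            training_additions.append((tweet, human_says))
    return training_additions


tweets = [
    "drinking cheap vodka will kill you",  # sarcasm the machine misreads
    "i will kill you",                     # genuine threat
]
# The human reviewer judges only the second tweet a real violation, so the
# sarcastic one comes back labeled "not a violation" for retraining.
corrections = meta_moderate(tweets, human_review=lambda t: t == "i will kill you")
```

The economics of the comment’s proposal live in `meta_moderate`: human time is spent only where the machine was wrong, and each correction shrinks the set of cases the machine gets wrong next time.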

Sometime, maybe already done but not published, and no more than three years out, this will be a solved problem. Someone will do a clever thing on top of a brute force neural network. Twitter will probably be able to afford it. The little guys, maybe not.

With my futurist hat on, driving will be a solved problem within ten years, at least sufficiently to meet what I think will be the first killer app for it: keeping the boomers in their suburban houses for another decade rather than dumping them into much more expensive institutional arrangements.