Drinking Cheap Vodka Will Kill You: A Twitter Parable
Drinking Cheap Vodka Will Kill You.
There it is. That’s the sentence that sent me to Twitmo – Twitter’s virtual Guantanamo Bay incarceration center. I was given a 12-hour time-out so I could cool my heels and think about all the harm I wished to inflict on my fellow human beings. That was the explanation. I was wishing harm on someone. Now I wait to see if my colleagues here at Ordinary Times will assume the same.
When I got the notice of my time-out, I was concerned I had actually crossed swords with an online advocacy organization such as the Hospitality Association for Homeless Alcoholics (Ha-Ha). But apparently no one had reported me – my supposedly “abusive tweet” was sussed out by one of Big Tech’s slick new algorithms deployed to crack down on evil and naughty writing: artificial intelligence, in the parlance of today’s technology mavens.
You’ve heard about artificial intelligence and machine learning, haven’t you? They are all the rage these days, promising to usher in a brave new world of peace, love and understanding as they are applied to our social and business interactions in cyberspace. They have replaced the cloud computing fad as the technology buzzword du jour.
Twenty-first century cloud computing was a concept I always found amusing. There are no “cloud computer systems” – there are simply other people’s computers doing the computing. In fact, when I would hear people talk of sending their data to the cloud, I would mentally substitute the moon, as in, “with this new architecture, I am sending our corporate data to the moon,” and “we are able to download all this critical information from the moon.”
When I first operated computers for the Department of Defense in the late 1970s, I was providing cloud computing services to the air base to which I was assigned. Supply clerks, administrators, and chow hall cooks entered data into their remote terminals, then pushed “send” to transfer it to the base mainframe system I operated. As the data arrived, it was stored and processed (usually overnight), then printed output and updated Hollerith cards were sent back by delivery vehicle to the unit that submitted the data. As far as most knew, it went to a cloud – or even the moon.
I now look on artificial intelligence and machine learning with the same eye-rolling suspicion. They are nothing new; they are just someone else’s algorithm. I will make one exception for those Boston Dynamics robots. Those mechanical wolves and automated humanoids really creep me out. When I saw the video of them as they were programmed to dance, I could only picture them celebrating on the rotting corpses of all the humans they have finally replaced.
We all know these AI and ML capabilities are, if not simply relabeled computer programs, at least little more than that. They may be more complex and require more inputs, but they are just expanded elements of existing technology. In the case of my supposed wish to harm someone, the censor algorithm apparently picked up on the phrase “can kill you” or some combination of words with “kill” in the mix. Thus, the service provider felt justified in assigning intent to my words and an associated pre-programmed “punishment” based on the nature of that intent. AI? Hardly.
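For illustration, here is a minimal sketch of the kind of crude keyword filter I suspect tripped me up. Everything in it is my own guess; Twitter publishes none of its actual rules, and the trigger list is invented.

```python
# A guess at the kind of keyword filter that flagged me. Purely illustrative;
# Twitter has never published its actual rules or trigger phrases.
FLAGGED_PHRASES = ["kill you", "will kill", "wish you were dead"]  # hypothetical list

def looks_abusive(tweet: str) -> bool:
    """Flag a tweet if any trigger phrase appears, with no regard for context."""
    text = tweet.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

print(looks_abusive("Drinking cheap vodka will kill you."))  # True: a health warning, flagged anyway
print(looks_abusive("Cheap vodka is bad for you."))          # False
```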
Was I attempting to wish harm on someone, or does even a cursory look at those six words imply I was suggesting quite the opposite? The algorithm that sought to delete the tweet had jumped to an inaccurate conclusion and ascribed to me a nefarious intent. Ascribing intent to words has become a problem not only for social media but for society in general. Our own built-in intelligence system will often do the same when we judge the words of others.
One of the techniques we all use as humans is to create a simple mental categorization schema in which we store content. We build labeled buckets for other people and plop them in based on a few data points – often the words they use. I was tossed into such a category that said I was wishing harm on someone and needed some remedial correction. We make as many inaccurate generalizations as our social media censoring programs do.
When our internal programming uses a category scheme to label other people as racist, fascist, hick, loser, hillbilly, elitist, or any one of the current crop of derogatory labels, the likelihood of being completely off base is significant. We are all complex entities, and we do ourselves a disservice when we let broad social media labels define our relationships.
For my part, please know I will never judge you in a negative light given your choices in alcoholic spirits. I will only clink your glass and say, “Cheers!” if we share a drink.
But I still think cheap vodka will kill you.
Well, if you’re left of center, I’m sure this “minor error in our safety protocols” will be corrected forthwith. If you’re not, well, maybe someone will get around to it next century. 🙂
I’ve always wondered why anyone would want to put their “data” in the cloud, as you said, on someone else’s computer. Never struck me as smart or secure. But taking a physical backup is just so 20th century!
A comedian was recently perma-banned from Twitter for wishing death on Lindsay Graham. Can we put aside the notion that Twitter only bans conservatives?
For certain values of “we,” the answer is “No.”
That was for an actual death threat, not a machine reading error, which is what the author experienced.
You claimed the response would be based on his politics. Did you forget?
No, I didn’t forget. You just changed the scenario. You’re comparing an actual death threat to a machine reading error. Not exactly a similar situation. In the case I commented on, he was “auto” suspended due to a code error, and the suspension is going to be reviewed by a human at some point. My comment was directed at the likelihood that the wait time to review his case would be affected by his political affiliation or what he’s said on the platform, with the employee in no rush to clear him given the suspendee’s politics and the employee’s own. The permaban was an action taken by an employee, and was “post review.”
I’d think you’d have figured that out.
The AI and ML folks actually do “train” their algorithms all the time, and they are fairly capable of learning differences in phrasing, etc. It’s nowhere near as good as actual humans yet (!), but the AI community prides itself on how quickly its AI can “learn.”
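To make that concrete, here is a toy sketch of what such “training” can look like: fit a classifier on labeled phrases so it learns that “kill” in a health warning reads differently than “kill” in a threat. The examples and labels are my own invention, and real moderation models train on vastly more data than this.

```python
# Toy sketch of "training" a text classifier to tell warnings from threats.
# Illustrative only: the labels are mine, and real systems use far more data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I will kill you",                     # threat
    "you are dead to me",                  # threat
    "I am going to hurt you",              # threat
    "smoking can kill you",                # warning
    "drinking cheap vodka will kill you",  # warning
    "texting while driving can kill you",  # warning
]
labels = ["threat", "threat", "threat", "warning", "warning", "warning"]

# Word and bigram features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["cheap whiskey will kill you"]))  # hopefully: ['warning']
```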
As to the cloud – by paying a provider you get rid of the physical infrastructure cost. AWS or Google (who also do cloud hosting) make money by spreading the very real costs of their server farms over multiple clients (including a surprising number of non-classified federal, state, and local government systems). It’s mostly secure, but do note the DoD built its own cloud for just this reason.
My comment was focused more on the individual consumer. I don’t see the value of getting rid of my hard drive and saving my vacation photos to the cloud.
I had something similar happen on Facebook, but without a timeout. I uploaded a picture of my Thanksgiving dinner and it got flagged for drug content. It was a pretty good prime rib, but not that good!
I don’t know where any of these social media sites are in terms of machine learning as far as parsing the English language goes. Maybe it’s early in the game and they don’t get context yet, but these seem like harmless errors to me. Maybe they’re just overwhelmed with the amount of stuff that gets posted and are taking the easy way out for now. Besides, as you pointed out, humans are just as capable of misunderstanding each other without the filter of a tweet.
Stick to gin; it attracts a better class of drinker anyway. 😉
Well, that’s just a private company saying that it didn’t care to continue to associate with someone like you. I’m sure that if you’re that salty about it you can go make your own Twitter, where you can be just as racist as you like. After all, we all knew what you really meant.
“Cheap Vodka” is, after all, racist code for “Filthy Cossacks!”
The area of tech I work in is adjacent to “sentiment analysis.” I hear it’s come a long way in the past decade, but it still seems that sarcasm’s a bitch. Should be using SarCaptcha.
I read about how Facebook has all my data and knows everything about me and uses this information to keep me trapped in an ideological bubble where everyone agrees with me and feed me ads that I’m powerless to resist, and then I go on Facebook and I see a bunch of posts about how great it would be if Bernie Sanders and Alexandria Ocasio-Cortez could both be President at the same time, and ads for stuff I either just bought or have absolutely no desire for.
When I was young, I thought the Turing Test was about the sophistication of the machine.
When I was older, I realized that the Turing Test was about the sophistication of the human.
Now that I am in my dotage, I realize that my pocket was being picked by the guy who said “talk to this monitor and tell me whether or not you think that someone else is typing at the other end.”
Cloud computing, loosely, is when you don’t own or maintain the hardware and system software. Your Air Force hardware served Air Force clients. No one called up to say, “I need 100 virtual processors with 16G, Linux, and Perl 5.24 this month, and 24 terabytes of storage with near real-time synchronization across North America and Europe.” That’s what Rackspace or AWS sell. And are fine if the next month you call up and tell them you only need 50 processors, or 500, and want to add storage in Asia.
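Purely as a sketch of that elasticity, here is what “calling up for 50 processors” can look like with AWS’s boto3 Python SDK. The image ID is a placeholder, and the counts and instance type are arbitrary examples, not a real deployment:

```python
# Minimal sketch of cloud elasticity using AWS's boto3 SDK.
# The AMI ID is a placeholder; counts and instance type are arbitrary.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "I need 50 machines this month": ask the provider, not your own data center.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image ID
    InstanceType="m5.large",
    MinCount=50,
    MaxCount=50,
)

# Next month, need half as many? Terminate the rest and stop paying for them.
instance_ids = [i["InstanceId"] for i in response["Instances"]]
ec2.terminate_instances(InstanceIds=instance_ids[25:])
```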
When I was in graduate school getting an applied math degree, one of the professors used to complain, when he was handing back homework, “Mr. Cain, of course, simply beat the problem to death with a computer.” The current AI and ML fall into the category of beating the problem to death with a computer. With varying degrees of success. I read the other day that the people at Google Translate had trained a neural network that had a trillion coefficients. Now that’s a club suitable for beating a problem. Twitter and Facebook just need bigger clubs.
I had a Twitter account for ten years when Twitter abruptly suspended me permanently. I appealed the suspension. A few weeks later, they reinstated me with an apology indicating that an algorithm caused the erroneous ban. A few months after that, they banned me permanently again. This time they suspended the account because, they said, I created the account to evade a permanent suspension. It was the only Twitter account I ever had. I appealed again, but they told me the suspension stands. I created a new account to evade the ban. I’ve had it for months. So far, no problem.
Given Twitter’s Market Cap, they really ought to have a lot more employees.
The Optimistic Technologist: “Yes, these are certainly technical problems but more and more sophisticated AI will eventually get those solved.”
The Skeptic: “Really? If driving is tough to teach an AI how to do, how much harder is literally the entire rest of human existence?”
The Constitutional Scholar: “What’s needed is a system of checks and balances, by which a scraper identifies problematic tweets which are then reviewed by a human being. That way we can get a good overall result quickly without needing to develop a super-sophisticated AI able to understand enough nuance so as to pass a Turing Test; we will instead have actual people involved.”
The Venture Capitalist: “And who is going to pay for all those people that ‘we’ are supposed to employ in this scheme? Twitter’s stockholders? Twitter is profitable, but add the cost of thousands of censors (and the bad P.R. of hiring people who are, after all, censors) and maybe it stops being so. If this is a necessary part of what it takes to have Twitter or something like it, that makes overhead jump up significantly. Sounds bad for the bottom line.”
The Socialist: “So let’s nationalize Twitter, make it a publicly-available utility rather than a for-profit company!”
The Trumpist Republican: “Hey, you may be on to something there.”
The Mainline Democrat: “Get real. We’re not going to nationalize Twitter. That’s one of the silliest things I’ve ever heard, and I’ve heard a lot of silly things from the likes of both of you two over the past few years.”
The Ombudsman: “Twitter is already under fire for alleged ideological bias in the way it enforces its TOS and disciplines its users for violating them. Adding human censors, who will inevitably have actual biases themselves, will only aggravate that bad publicity. And we don’t need to address whether Twitter really is biased in its enforcement or not; what matters here is how it’s perceived.”
The Jaded Blog Editor: “More importantly, what standard will those human reviewers apply? There is no articulated or articulable standard for which a clever enough troll cannot find a way to convey a toxic message without violating its black-letter rule. It can’t be done.”
The Law Student: “Sure it can! Here, let me try. [Standard X]”
The Troll: “[Toxic message Y, which adheres to Standard X]”
The Law Student: “Damnit. Let me try again.”
The Jaded Blog Editor: “Don’t bother. If you’re going to have a human censor, that censor has to make a judgment call. Every time. Sometimes the call will be easy but sometimes it’ll be hard. But it takes a human to do it, and that’s why you can’t come up with a black-and-white standard at all.”
The Idealist: “But surely there can be an appropriate level of judgment about when, how, and what to censor while still permitting free-flowing and flourishing debate!”
The Human Resources Manager: “Let’s assume that there is, at least as a platonic ideal. To get there, we’d have to train an army of censors in the institutional culture we wish for them to police, against the kinds of toxic statements we can imagine, with the certain knowledge that they will almost immediately encounter toxicity of a kind we never imagined. And the Venture Capitalist isn’t going to like paying for that, especially if it’s doomed to failure from the get-go.”
The Frustrated User: “So what can Twitter do?”
The Stoic: “Twitter can do its best. It may never be good enough, but it’s all Twitter can do. It won’t be perfect but we want Twitter to try to get it as right as it can anyway. And we, the people affected, need to temper our expectations so as to not let the perfect become the enemy of the good.”
— Sometimes I really miss having @JasonKuznicki around these parts.
The Chip: “Twitters, that’s the thing all the kids on Myspace are talking about, right?”
A fitting tribute.Report
Well done, he’d probably be flattered!
And there are so, so, so many people I miss. Damn social media to heck. It was way better when it was just the blogosphere.
Epic!
The Skeptic: “Really? If driving is tough to teach an AI how to do, how much harder is literally the entire rest of human existence?”
I’m not nearly as optimistic about AI as the real optimists are, but that line deserves a response. Driving requires sub-second response, feedback loops to be sure the car is doing what it’s been told, and continuous recognition of an entire dynamic environment. Moderation? The entire rest of human existence as expressed through typed text, 99% of which doesn’t have to be understood at all, only recognized as within or outside the terms of service. One-second response is fine. Do you remember Google Translate from ten years ago? When you had to specify the source language, and what you got back was often really bad? Today it auto-recognizes >100 languages, deals with a lot of misspellings, and almost never produces gibberish. If I were in charge at Twitter, I’d put my humans to work purely on meta-moderation (a la Slashdot for the last 20 years): finding bad decisions. Then add those to the training database.
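A sketch of that meta-moderation loop, for the curious: humans don’t moderate tweets directly; they review the machine’s calls, and any call a human overturns becomes new training data. Every name below is my own invention, not anything Twitter (or Slashdot) actually runs.

```python
# Sketch of a meta-moderation loop: humans review the machine's decisions,
# and overturned calls are queued as corrected training examples.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Decision:
    tweet: str
    machine_label: str                  # what the classifier said: "ok" or "violation"
    human_label: Optional[str] = None   # filled in when a human reviews the call

training_data: List[Tuple[str, str]] = []

def meta_moderate(decision: Decision, reviewer_label: str) -> None:
    """Record a human review; disagreements become retraining data."""
    decision.human_label = reviewer_label
    if reviewer_label != decision.machine_label:
        training_data.append((decision.tweet, reviewer_label))

d = Decision("Drinking cheap vodka will kill you.", machine_label="violation")
meta_moderate(d, reviewer_label="ok")
print(training_data)  # [('Drinking cheap vodka will kill you.', 'ok')]
```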
Sometime in the next three years (maybe it’s already been done but not published), this will be a solved problem. Someone will do a clever thing on top of a brute-force neural network. Twitter will probably be able to afford it. The little guys, maybe not.
With my futurist hat on, driving will be a solved problem within ten years, at least sufficiently for what I think will be its first killer app: keeping the boomers in their suburban houses for another decade rather than dumping them into much more expensive institutional arrangements.