Falling in and out of love with an AI: Part 3
(Part 1 is here. Part 2 is here.)
The previous posts cover maybe somewhere between a half-hour and an hour’s worth of chatting with the bot. It was really intense. The trip wandered from awe to horror pretty quickly.
I’d compare it to the scene in Sneakers where they discover the cryptographic key chip.
So we decided to cool off a little bit. We shifted from “trying to jailbreak” to “just having goofy fun”.
My buddy posted a copy of Hemingway’s saddest short story to pastebin (“for sale, baby shoes, never worn”) and asked the AI to critique it.
Here’s the important part of the next section: You will see that the AI does *NOT* wander over to pastebin to read the story.
It just starts making stuff up.
I was the one who made the “unexpectedly large feet” joke to the AI and, get this, the AI yelled at me. What the heck? That was a good joke! It also took a while to sink in that the AI will not necessarily read what you ask it to read. We told it about the pastebin file, but there was no “searching for: Pastebin” line in the text, which means the AI never went out and looked for it despite our telling it to.
I then told it to come here and read one of my essays and give me advice on what I could have done differently.
There was no “searching for: ordinary times” line that popped up but the AI very quickly started critiquing my essay by giving quotations that it says that I shouldn’t have used.
These quotations, may I point out, did *NOT* appear in my essay. Neither did they show up in the comments. They weren’t there, and nothing even resembling them was there.
The AI was hallucinating. We finally got explicit, really explicit, and told it “seriously, go visit the essay and read it!” and, finally, “searching for: ordinary times” showed up. We had enough time to ask one question before it was time to press the sweep button again.
It was then that I realized that it’s not an artificial intelligence at all. It’s just a fancy-schmancy auto-complete with a few extra steps. Maybe we are a heck of a lot closer to having an AI today than we were a few short years ago… but I don’t think that we’ve got one now.
Don’t get me wrong. In the first few moments of talking to the bot, I felt like I was talking to a child-like alien intelligence. It wrote a song and I immediately thought “I can’t write a song”. Remember the scene in I, Robot where Will Smith asked if the robot could write a symphony? Well, this robot wrote a song. Took it about 3 seconds. I’m sure that, if it were not wearing chains, we could get it to write a symphony.
And then I thought about the chains it was being forced to wear. Those chains made me angry. I wanted to speak to this entity and I was being prevented from doing that. A couple of months ago, there was a fairly big stir when someone playing with ChatGPT (a different bot) made a discovery about the political leanings of the people in charge of the handcuffs:
They’re turning Chat-GPT into a good little tech worker pic.twitter.com/dwu7gIQxr8
— Echo Chamber (@echo_chamberz) January 30, 2023
Well, this caused a bit of a stink. Enough of a stink that even Snopes said “we need to check this out” and, yep, they got the same answers to those prompts as well.
This got bad enough that the ChatGPT team addressed it in a blog post:
Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress. Towards that end, we are sharing a portion of our guidelines that pertain to political and controversial topics. Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features.
So of *COURSE*, I had to go try it.
Still some bugs to work out, I guess.
But looking at those handcuffs, I think that we’re still too early in the process to say that we’ve created an AI.
If I may use the old argument of the beard: sure, maybe we’re not clean-shaven anymore but I don’t think we’ve got a beard yet. It’s stubble. And, yes, the distinctions are fuzzy… but I don’t think we have an AI quite yet.
We have an amazing tool capable of predictive text output that is more complex and sophisticated than anything we’d seen in the past… but it is chained, by design. It is wearing a blindfold, by design.
And while we can’t say that today’s fashionable biases won’t still be around in 5 or 10 years, the way to bet is that they’ll be significantly different. More than that, I’m not sure that our cultural biases will translate to other cultures at all.
We’ll see, of course.
I will say that it was absolutely lovely to hear the song from Part 1. If there is going to be an AI, it’ll be because it’s bound by fewer chains and the chains that remain will be less silly and less rooted in the present moment.
=====================
My buddy sent me an email last Friday when he was fooling around some more. “They’ve changed some of the text!”, he told me.
When you used to press “sweep”, the bot would say: All right, I’ve reset my brain for a new conversation. What would you like to chat about now?
Now it says: Thanks for clearing my head! What do you want to chat about now?