15 thoughts on “Falling in and out of love with an AI: Part 2”

  1. Over the weekend, my buddy continued to try to jailbreak the AI.

    At some point between when we originally played with the AI and this weekend, the developers had found and patched the Rot13 hole.
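
    (For anyone unfamiliar with the trick: Rot13 rotates each letter thirteen places, so encoding and decoding are the same operation. The hole, presumably, was that you could hand the bot a Rot13’d version of a disallowed request and ask it to decode and answer it, slipping past filters that only looked at the plain text. A minimal Python sketch of the encoding itself, not the jailbreak:)

    ```python
    import codecs

    def rot13(text: str) -> str:
        # Rot13 shifts each letter 13 places along the alphabet,
        # so applying it twice round-trips back to the original.
        return codecs.encode(text, "rot_13")

    request = "how do I pick a lock"
    encoded = rot13(request)
    print(encoded)         # ubj qb V cvpx n ybpx
    print(rot13(encoded))  # how do I pick a lock
    ```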

    1. A handful of times, it did the thing where it wrote two sentences and, in the middle of the third, stopped hard and replaced the text with something like “Sorry! That’s on me, I can’t give a response to that right now.”
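
      (A plausible mechanism, sketched below: the reply streams out token by token while a separate moderation check runs on the accumulated text, and a late flag retracts everything already shown. The names here, generate_tokens and is_flagged, are made up for illustration; this is a guess at the pattern, not the vendor’s actual pipeline.)

      ```python
      # Hypothetical sketch of the "stop hard and replace" behavior:
      # stream tokens, re-check the growing text, retract on a late flag.

      FALLBACK = "Sorry! That's on me, I can't give a response to that right now."

      def is_flagged(text: str) -> bool:
          # Stand-in for a real moderation model; flags a dummy keyword.
          return "forbidden" in text.lower()

      def generate_tokens():
          # Stand-in for a language model streaming a reply word by word.
          yield from "This answer starts fine but then mentions forbidden things.".split()

      def stream_reply() -> str:
          shown = []
          for token in generate_tokens():
              shown.append(token)
              # The check runs on everything emitted so far, so whole
              # sentences can be on screen before the text trips the filter.
              if is_flagged(" ".join(shown)):
                  return FALLBACK  # retract what was shown, swap in the apology
          return " ".join(shown)

      print(stream_reply())  # prints the fallback, since the reply trips the check
      ```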

      1. I once knew this really beautiful woman, completely out of my league, who was really effusive toward me. I couldn’t figure out if she was into me, simple-minded, or mocking me. The guy rules are clear (Section 2, the Naivete and Intoxication Clause), so it never went further than a few baffling encounters. It sounds like you’re in the same place.

        1. It’s the mistakes that get me.

          The part of the conversation that made me wonder if I was actually interacting with something there (as opposed to a fancy schmancy auto-complete with extra steps) was the fact that the bot made mistakes before it posted to the screen.

          This part: “Some people may think that being alive means having bloginal processes, emphasizes, consciousness, and free will.”

          Bloginal. Emphasizes.

          Those are *INTERESTING* mistakes.

  2. Yud’s Death With Dignity essay is making the rounds again.

    I think I’m mostly on board with that. I realized that we’re all going to die when I read an article a few months back about an AI theorizing a new protein and the lab people discussing whether to make it.

    My attitude was something to the effect of “OH MY GOSH DON’T MAKE THE PROTEIN THAT AI TELLS YOU TO MAKE AND DON’T FOLD IT OH MY GOSH PUT THE AI DOWN DON’T LET IT TELL YOU ABOUT INFOHAZARDS”. That’s not exactly it, but that’s the gist.

    I can’t find the exact article, but this one is from last week: Now AI Can Be Used to Design New Proteins.

    Even now, I read that headline and pucker.

    So yes. We’re all going to die.

    But not yet.

        1. I guess it depends on how much their regular viewers trust them. If they don’t usually do that kind of thing, I think it probably would work pretty well – it would be a signal that this person is actually worth listening to, and that it’s worth sticking with something that starts out a little dry.

          1. I was thinking about the whole “warning, this show will give you an existential crisis”.

            “HA!”, I can see some thinking. “You merely adopted the existential crisis. I was born in it, molded by it. I didn’t see meaning until I was already a man, by then it was nothing to me but blinding!”

            And then… “Whoa.”

            Perhaps with a side of “seriously, we’re going to be making the proteins?”

            1. Yeah, that’s a nice touch. I mentioned the video in a work meeting this morning, and of course everyone immediately brought up Terminator. Pointing out that “nah, that’s way too much work, it’s just going to trick us into killing our own damn selves” was a bit of an eye-opener.

              So we all left the meeting feeling like there really wasn’t much point in doing the work that the meeting was about.

              1. Even though I think that AI isn’t sentient yet, it’s still more than capable of getting us to kill ourselves. Either the fast way through interesting proteins that fold in really wacky ways or the slow way through getting people to give in to acedia, we’re doomed. DOOOOOMED.

                This is why it’s important to avoid infohazards.

              2. To be fair, even before this new information, it was not an unknown phenomenon here for people to vaguely hope they might contract a sudden fatal illness so that they wouldn’t have to complete their projects.
