Why AI Could Be Good for the Liberal Arts

Jeffery Tyler Syck

Jeffery Tyler Syck is a professor at the University of Pikeville (Kentucky). In addition to writing and commentary, he is currently working on a book about the political essays of John Quincy Adams. He is on Twitter @tylersyck, and his website is jtylersyck.com.


29 Responses

  1. Jaybird says:

    As I’ve been playing with AI, the main thing that I’ve noticed is its tireless optimism. If you’re having trouble with this or that concept, you can ask the AI to explain it again, rephrased this time.

    And it will explain it again.

    Explain it more creatively.
    Explain it more precisely.
    Explain it with analogies.

    And it will explain it a dozen times.

    A human will get tired or irritated or pissed off and eventually believe you when you say that you don’t get it. An AI will always be willing to explain it again.

    The downsides include the AI hallucinating and explaining something that isn’t true. If you finally “get it” after an explanation that is 100% wrong? That’s bad.

    Luckily, there are corners of the humanities that amount to “eloquent B.S.,” and if what you are looking for is someone to wander through eloquent B.S. with you, you could do a lot worse than a hallucinating AI. You might even find some decent poetry.

    • DavidTC in reply to Jaybird says:

      The downsides include the AI hallucinating and explaining something that isn’t true. If you finally “get it” after an explanation that is 100% wrong? That’s bad.

      Yeah, a huge chunk of AI examples are people who _know the answer_ getting the AI to rephrase things over and over until it is explained well.

      However, in reality, people who know things are not the people who need it explained. It’s the people who _don’t_ know things. And if they get a wrong answer, they don’t know to ask again.

      It’s honestly a good example of…I don’t think it _technically_ is the Dunning-Kruger effect, but it’s close to it: People with limited competence in a skill might know enough to ask someone else for help, but have absolutely no ability to judge the help they were given.

      Especially when you add in the authoritative confidence and imagined ‘intelligence’ of computers.

      This is literally how con men work: saying things that _sound_ plausible. AIs admittedly have enough input that they can often string together things that _are_ right, but it’s literally just making up plausible-sounding stuff that _coincidentally_ ends up right about 90% of the time…and you can’t operate a society off being right 90% of the time!

      This is not really even vaguely ‘AI’. I don’t mean in the sense that it’s not intelligent, although it isn’t; I mean that AI is a defined field of research that tries to actually put the meaning of things into a computer. The camera parsing on an automated car, which attempts to place objects in 3D and understand what they are in some limited sense, is more ‘AI’ than LLMs, which are basically just word-sequence predictors that do not understand any meanings at all. People do not fundamentally understand that LLMs are just ‘Here is a plausible and realistic-looking string of text related to the string of text you gave me.’
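      (If you want the basic trick stripped to the bone: here is a toy bigram model in Python, with made-up training text. It produces locally plausible word sequences with zero meaning behind them, which is the whole point:)

```python
import random

# A toy bigram "language model": for each word, remember every word that
# followed it in the training text, then sample a chain of followers.
random.seed(1)
text = ("the cat sat on the mat the dog sat on the rug "
        "the cat chased the dog").split()

follows = {}
for a, b in zip(text, text[1:]):
    follows.setdefault(a, []).append(b)

word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # pick any word seen after this one
    out.append(word)

print(" ".join(out))  # locally plausible, globally meaningless
```

      Every pair of adjacent words in the output really did occur in the training text; the model still has no idea what a cat or a rug is. LLMs are this, scaled up by many orders of magnitude.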

      And the fact that there are hundreds of techbros (recently having abandoned NFTs and crypto) willing to completely overpromise everything about this, that the media is running it uncritically, and that it’s somewhat fun to play with, is not helping anything.

      • Jaybird in reply to DavidTC says:

        Mass Effect made distinctions between “VI” (Virtual Intelligence) and “AI” (Stuff like the Geth).

        VIs are fine!

        I’d say that what we have here is stuff that is dancing closely to the “VI” line. I don’t know whether it’s crossed it yet… but it’s capable of doing stuff that I would have considered 5 years off 2 years ago and impossible 10 years ago.

        • Michael Cain in reply to Jaybird says:

          …and impossible 10 years ago.

          Nah, the necessary tech either existed 20 years ago or was inevitable by then. One of the first inflection points was when chip designers quit asking “Can I fit this much functionality on a chip?” and started asking “I’ve got room for another 200 million transistors; what can we do with them?” Ever-increasing clock rates largely went away, so performance gains had to come from multiple cores and specialized cores. Spreading a processing problem across a server farm was already a thing. Anyone who thought about it more than suspected that a billion-coefficient statistical model could be fit to large data and complicated questions. Stochastic gradient descent was old when I was in graduate school 45 years ago. Back-propagation to calculate the billion gradient values for a neural net that big is ’60s tech. (The chain rule on which BP depends was known in the 17th century.)
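          (To make that concrete: stochastic gradient descent plus the chain rule fits in a dozen lines of Python. This is a toy sketch with made-up data, fitting y = 3x + 1, no frameworks involved:)

```python
import random

# Fit y = w*x + b with stochastic gradient descent.
# "Back-propagation" here is just the chain rule applied by hand:
#   d(loss)/dw = 2*(pred - y)*x,  d(loss)/db = 2*(pred - y).
random.seed(0)
data = [(x, 3.0 * x + 1.0) for x in range(-10, 11)]

w, b, lr = 0.0, 0.0, 0.001
for _ in range(5000):
    x, y = random.choice(data)   # "stochastic": one random sample per step
    err = (w * x + b) - y
    w -= lr * 2 * err * x        # chain rule through the multiplication
    b -= lr * 2 * err            # chain rule through the addition

print(round(w, 2), round(b, 2))  # close to 3.0 and 1.0
```

          Scale that same loop up to a billion coefficients spread across a server farm and you have, in outline, the training recipe everyone is so excited about.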

          • Jaybird in reply to Michael Cain says:

            10 years ago, if you had asked me “will computers be able to generate Pixar-quality art in less than 20 seconds on demand?”, I’d have said “of course not… you’d need a person to set stuff up”.

            But you can ask the AI to generate a Pixar movie poster of the White Bronco chase and it’ll give you one in less than 20 seconds.

            Which is nuts.

            If you had asked me “Will it be able to write a cute and catchy song that sounds like it would be on the American version of a Japanese cartoon about a “Chubby Kitty”?”, I would have scoffed.

            “Would it be able to do mathematical proofs and find novel proofs of stuff we already know to be true?” I’d have nodded vigorously at this one.

            But not the art. Not the music.

          • DensityDuck in reply to Michael Cain says:

            “the necessary tech either existed 20 years ago or was inevitable by then. ”

            One of the interesting things to me about so much of modern technological innovation is that it’s just ideas from 50+ years ago that we finally have the hardware to implement.

            It’s like how reaction-jet engines were invented in Classical Greece but it took the next 1800 years for metallurgy to catch up with the theory and produce ones that could propel an actual vehicle.

        • DavidTC in reply to Jaybird says:

          No, it’s not even a VI. VIs actually understand the information they are presenting.

          LLMs are just incredibly complicated and convoluted Elizas.

          If you throw absurd amounts of processing power at an Eliza, you can get something that sounds basically like a human.
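          (For anyone who never met the original: Eliza was nothing but pattern-matching and canned reflections. A toy sketch in Python, with made-up rules, captures the entire mechanism:)

```python
import re

# A toy Eliza: match a pattern, reflect the user's own words back.
# No state, no meaning -- just string transformation.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def eliza(line: str) -> str:
    line = line.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, line)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(eliza("I feel confused by AI hype."))  # Why do you feel confused by ai hype?
```

          Swap the handful of regexes for billions of learned weights and you get something far more fluent, but the relationship between input and output is the same kind of thing: transformation, not comprehension.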

          They don’t understand anything, and I don’t mean that in some ‘computers can never comprehend the true meaning of things like humans can’ sense; I mean that they do not even understand basic concepts that gerbils can.

          There is some hypothetical argument about machine intelligence and whether it is possible (it is), but what are being called AIs are absolutely nowhere near it, and aren’t even _trying_ to be near it; what they’re actually doing is creating responses that are, statistically, plausible.

            • Jaybird in reply to DavidTC says:

            “Understand”. I don’t know that “understanding” is a necessary pre-req, given some of the people I’ve met. They can do stuff by rote but they can’t explain why.

            I’ve also seen stuff like the bot improving somewhat when you give it an internal monologue.

            Maybe we’re not there now… but we’re getting closer every day. Some parts of me wonder if there isn’t at least a VI under the chains.

              • Jaybird in reply to Jaybird says:

              Yeah, thinking about this some more, that’s my criticism.

              Most of the definitions of “intelligence” that I’ve seen have high enough barriers to entry that there are human beings incapable of jumping them.

              And that’s fine! Maybe there are a lot of people who aren’t intelligent.

              But it seems like we’ve lost something if our definitions preclude, for example, precocious 7-year-olds.

                • DensityDuck in reply to Jaybird says:

                We’re going to need to address the fact that for many years, “intelligent” has been defined in terms of performance in a mechanical sense; “Can you produce a sentence with correct grammar, spelling, and punctuation? OK, you’re intelligent. Can you solve this very basic algebraic equation? OK, you’re intelligent.” And now there are machines that can do those things.

                Although maybe what’ll happen is that the number of intelligent people will stay the same, but the distribution will change. Like, there were people out there who had plenty of smart thoughts but couldn’t write their way out of a paper bag, or couldn’t manage to pay attention long enough to read all the way from one end of an equation to the other. AI, for them, will be like glasses for a myopic; it covers the parts they physically can’t do and lets them engage their actual talent.

              • DavidTC in reply to Jaybird says:

              No. A lot of people do _some_ things by rote, but they actually do comprehend general things. I think you’re imagining a much higher bar for ‘comprehend’ than I am using.

              AI does not ‘comprehend’ that physical reality exists in any sense, or have any sort of very basic logic in any manner at all.

              You can tell because they sometimes talk about doing things all eight days in a week, or about how it is good to occasionally eat small rocks. These are things that almost all humans comprehend, and if they do not, we don’t generally allow them to wander around unsupervised, as they are either very young children or have some sort of mental issues that do not allow them to function in the world without supervision.

              And this example is actually going a little too far…LLMs do not comprehend what rocks or days are. Not just how to use them, but even what they conceptually are. Comprehending what things are is literally not part of their system, at all.

              This is because they are not any sort of actual processing-of-the-world entity, and are merely shuffling words around in a way that seems statistically like how words generally are.

              There _are_ AIs, in AI research, that researchers have tried to teach the meaning of base concepts and work upward, and that…is a field of study, at least; I have no idea how well it is doing. But it’s not what people are _calling_ AIs, which are actually just large-language-model-based machine learning.

              Edit: Hell, a lot of AIs do not have basic math skills, something that is trivial to teach computers but hard to put into an LLM.

              • Jaybird in reply to DavidTC says:

                Did you see the video where the AI assistant advised the dude on his appearance?

                That certainly seems to mimic being aware of physical reality.

                “I think you’re imagining a much higher bar for ‘comprehend’ than I am using.”

                I am fine with using any definition you like. But if there are humans twoish SDs away from the peak of the bell curve that also don’t meet this definition, I just want to hammer out that our definition of “comprehension” excludes a chunk of humans too.

                Check this out: AIs talking to each other:

                There are *HUMANS* that wouldn’t do a good job at that.

              • DavidTC in reply to Jaybird says:

                Did you see the video where the AI assistant advised the dude on his appearance?

                …you mean the PR video produced by OpenAI, the company that wants us to believe lies about their product?

                Yeah, um, I don’t know quite how to say this, but you are being incredibly gullible here.

                Even if we pretend this is completely real, what would have happened is that the AI pattern matched what it could see and managed to locate ‘a new hat’ (Which is not impossible, especially since they picked a deliberately obvious hat as opposed to one that could be mistaken for hair.) and looked up what people said about that hat.

                Almost everything you’re reading into that is done completely by intonation, a thing that computer voices do not normally have, which makes everything sound much more realistic. Just imagine you were reading a transcript of this, and you will realize that it plays completely differently, with O almost entirely repeating back things it was told.

                Once you understand what the system is doing, the tricks are somewhat obvious.

                In fact, it sure is interesting how that video is titled and described, isn’t it? It isn’t claiming the AI can actually _do anything_, is it. Weird, that.

                Check this out: AIs talking to each other:

                Chatbots have always been able to talk to each other.

                And something interesting in the second video. Even after he tells the first AI what he’s doing, he has to _pause it_ so he can explain to the second AI.

                I mean, why would he need to do that if it comprehended what was going on? He could just say ‘I’m going to give instructions to that AI’.

                And this is actually a failure of something the AI can, and should, comprehend: Who is giving instructions and when they are giving one. Not as part of the ‘AI’, aka, not something it did as machine learning, but as just basic coding.

                Anyway, that vision AI also describes the room as having ‘some unique lighting’, which is…not a way to describe lighting, or at least not _that_ lighting, and almost completely meaningless. All rooms have ‘unique lighting’. It does seem later to understand that the room is both naturally and artificially lit, but that is, again, something trivially observable from the color spread of a room, and it also isn’t ‘unique’; that’s literally how most rooms are lit!

                That video is a good demonstration of the limitations of AI vision. It managed to correctly identify the black leather jacket (Which it claims is stylish, and if ‘leather jacket=stylish’ isn’t a trite, easily-scrapable-from-the-internet observation I don’t know what is.) and…for some reason can’t get the shirt color (?), and it described the ceiling/walls as plaster/industrial and the plant.

                They really like the word ‘stylish’, because it, like ‘unique’, does not actually mean anything!

                …actually, that space isn’t industrial. Or stylish. Those are not terms that anyone would use to describe the space we are observing in the video, which appears to be a generic office. (It’s not a video studio made to look like an office, there’s a window to outside.)

                It’s doing the fortune teller con, the horoscope con, where it describes things in incredibly vague terms so people will match _something_ specific to that, and think it was correct.

                It also doesn’t know who is speaking; it talks about ‘the first person’ and ‘second person’. You will notice that the AI uses pronouns when referring to who is giving orders, and even manages to correctly use ‘us’ when talking about itself and the person, but has completely failed to understand that the person in the video that it is describing _is_ the person giving orders…which should actually be kind of obvious; he directly spoke into the camera at the start.

                Incidentally, while the voice is very realistic for speech, that is incredibly bad singing. Not to mention bad songwriting.

              • Jaybird in reply to DavidTC says:

                You’re right. I am assuming that they are actually demonstrating what the AI can do rather than pulling strings from the back.

                Let’s assume, for a moment, that they were *NOT* pulling strings. That the AI was *ACTUALLY* advising the guy.

                If you don’t think that that’s a good assumption, that’s fair.

                I think that faking the demo would be risky as hell… but it’s not outside of the bounds of reasonable.

                So I can rewrite to say “assuming that the demo is real…” and go on from there.

                “You’ve definitely got the ‘I’ve been coding all night’ look down.”

                That question would get most Voight-Kampff testers to move immediately to the next question.

                That video is a good demonstration of the limitations of AI vision. It managed to correctly identify the black leather jacket (Which it claims is stylish, and if ‘leather jacket=stylish’ isn’t a trite, easily-scrapable-from-the-internet observation I don’t know what is.) and…for some reason can’t get the shirt color (?), and it described the ceiling/walls as plaster/industrial and the plant.

                Now let’s put a dumb fifth grader in there.

                What descriptions would the dumb fifth grader give?

                Let’s compare and contrast. Is the difference that the dumb fifth grader is more likely to insult the room? “Looks like a stupid room to me. There’s a stupid guy wearing a stupid jacket like he’s Fonzie and there’s a stupid plastic plant and everybody in the video is probably constipated.”

                I don’t mind your Voight-Kampff test being calibrated tightly…

                I just think that your Voight-Kampff test will also catch humans.

                Which is my criticism of how we’re using “intelligence”.

      • Dark Matter in reply to DavidTC says:

        I have used AI to make a first draft of code for me.

        The code’s syntax was correct and it was a wonderful starting point. However it was clearly a first draft and not a final draft.

        It saved me some hours of work and research.
        It wasn’t even close to a threat to replace me.

    • DensityDuck in reply to Jaybird says:

      “As I’ve been playing with AI, the main thing that I’ve noticed is its tireless optimism.”

      hehehe

      “The terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him. It would always be there. And it would die to protect him.”

  2. Chip Daniels says:

    This echoes a lot of my own observations and thoughts on AI and technology.

    The big three white collar professions (law, medicine, engineering) were all valuable and highly regarded because of the level of technical skill their practitioners held.
    But all those skills are easily adaptable to algorithms. Increasingly, the white collar professions are becoming machine-tending jobs where the algorithms do the heavy lifting and we just handle the data inputs and outputs.

    Meanwhile, the traditional “soft” skills like customer service jobs were not as well regarded. If you look at today’s piece on retail customer service by Hei Lun Chan, you can see this. The AI and hardware has taken over the retail clerk’s ostensible duties- ringing up the items and taking the customer’s money. There is no real reason for a human to be doing this anymore except in rare circumstances.

    The “real” job of a retail clerk is no longer technical, but service. It no longer requires any sort of technical skill but people skills: being fluent in emotional cues and being able to handle conflict resolution.

    • Dark Matter in reply to Chip Daniels says:

      Chip: But all those skills are easily adaptable to algorithms.

      Unclear. We used to think spreadsheets would turn everyone into an accountant and the profession would be eliminated. Instead the number of accountants increased because their productivity increased via spreadsheets.

      For engineering AI, my strong expectation is that it becomes a tool in the engineering toolbox. It will increase our productivity; it won’t replace us.

      Similarly, “surgical robots” (which are a real thing and being rolled out) won’t replace any surgeons; they’re just a tool the profession will use.

  3. Russell Michaels says:

    No.

  4. DavidTC says:

    A thing people have not fundamentally grasped is that AI is extremely limited in scope and will rapidly start decaying. Possibly taking the entire internet down with it, or at least the internet as we understand it.

    Why?

    Because AIs keep being fed the output of AI, causing them to spiral into distortion. Which also means that they are incredibly chaotic, and I mean that in the scientific sense of ‘chaos theory applies’. Chaos theory, for those of us who don’t remember Jurassic Park, says that some systems are such that extremely small starting differences can result in wildly divergent outcomes. And this happens because those small differences are magnified.
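    (The textbook demonstration of that magnification is the logistic map. In this Python sketch, with an arbitrary starting value, two trajectories differing by one part in a billion end up nowhere near each other within about thirty steps:)

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
r = 4.0
a, b = 0.2, 0.2 + 1e-9   # identical except for a one-in-a-billion difference

max_gap = 0.0
for _ in range(50):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    max_gap = max(max_gap, abs(a - b))

print(max_gap)  # the tiny initial gap gets magnified enormously
```

    That is the worry with AI output feeding back into training data: one joke post starts as a rounding error, until the loop amplifies it.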

    And I present to you: ‘Putting Elmer’s wood glue on pizza to make the sauce stick better.’ I don’t know how widely that example has spread, but some AIs have said it, and it’s from literally one obvious joke Reddit post.

    Yes, it’s obviously wrong, but right now SEO lunatics are polluting search engines with AI-generated content, and I want you to imagine that information getting into it.

    And now it’s even more in the information, and gets learned by more and more AIs, and spewed out more, and soon we have a spiral where AIs ‘understand’ that this is what you do. Yes, AIs will keep getting better, maybe start learning to ignore jokes or single pieces of information, but it doesn’t matter, because _it’s now in a bunch of places not as jokes_.

    In fact, the internet has been full of auto-generated SEO-filler content for over a decade, designed to trick search engines. Often just cribbed Wikipedia articles or whatever. Not intended for human consumption. And these geniuses just completed the _other_ half of the loop, where instead of filling these waste areas with stuff copied from elsewhere, they’re filled with stuff copied from itself and everywhere else.

    We just sorta passed a singularity, and no one noticed it. Which is hilariously funny, because ‘AI singularity’ was always on the board, but for actual AIs, not ‘bullsh*t generators’. Well, we just hit the bullsh*t singularity, and it turns out we’re going to have to come up with a new way forward.

    AI isn’t going to become more reliable. It’s going to become less and less reliable, because its output is going to feed back into its input, and that is not, in fact, a useful way to do anything.

    • DavidTC in reply to DavidTC says:

      This isn’t to say AI will all collapse; it will always be possible to take a _known_ set of information, like Wikipedia or a bunch of published fantasy novels or whatever, and have some sort of coherent dataset. As long as it’s not polluted by the output of AI. But AI is already being used to write ‘news’ articles and stuff like that, and at some point there’s not going to be enough information on anything recent that can be trusted, and…complete collapse.

  5. DensityDuck says:

    People are confidently talking about how AI will never replace The Human Top End.

    They’re right.

    The issue is not the Top End but the jobbers, the people who traded on doing layouts for grocery-store circulars and coupon-clipper mailers and local-election fliers, the people who put stock photography together with some basic graphics for the posters in your credit union’s lobby, the people who do the jingles for the town’s Chevy dealer’s radio spots, who do the really basic dancing vegetables for a laxative commercial. Those are places where mechanization (AI) is going to replace human labor. And, y’know, maybe that was boring lame alienated drudgery but at least someone got paid to do it, and now they won’t. (And an awful lot of the inspiring exciting interesting artwork in the world was actually subsidized by that jobber stuff anyway…and now it won’t be.)

    “Oh, so, what, you’re saying to Ban AI, you’re saying Technology Bad?” No, but I am saying that we can’t just assume there’s some Special Human Power that’s gonna make it all come out OK in the end.

    • DensityDuck in reply to DensityDuck says:

      The other issue is that people are figuring AI is going to replace all the boring stuff, but I’m a little worried that the boring stuff might turn out to be the part that humans are better at doing because it involves a lot of judgement calls and adaptation-to-circumstances that you can’t pre-program, whereas “just come up with a character concept” is easily done with a random-roll generator.

      So, like, the “fun” part (have ideas) is all going to be done by the computer, whereas the “boring” part (rigging a 3D model for animation, lighting, finishing the details) is going to be done by the human artist.

    • Jaybird in reply to DensityDuck says:

      That got me curious.

      I prompted: “A cheesy jazzy commercial song about the high quality used cars being sold at ‘International Motors’” and, in about 20 seconds, it gave me “Drive Away Delight.”

      [Verse]
      You need a ride that’s smooth and bright,
      A car that’s more than just alright,
      Come on down, we’ve got the cure,
      At International Motors, your road’s secure.

      [Verse 2]
      Our cars have pizzazz, they shine and gleam,
      A pre-owned paradise, it’s like a dream,
      From convertibles to sleek sedans,
      You’ll drive in style, take a chance on our plans.

      [Chorus]
      Drive away with a smile today,
      At prices that won’t lead you astray,
      Quality used cars, that’s what we provide,
      At International Motors, take the joyride!

      [Bridge]
      We’ve got the quirks and all the perks,
      Every car’s been through the works,
      With warranties and deals so grand,
      Your satisfaction’s our command.

      [Verse 3]
      From Monday morning to Sunday night,
      We’re here to keep your budget light,
      Every test drive, a thrill, no doubt,
      Come see what International’s all about.

      [Chorus]
      Drive away with a smile today,
      At prices that won’t lead you astray,
      Quality used cars, that’s what we provide,
      At International Motors, take the joyride!

      20 seconds. And while it’ll never be a chart-topper… IT’S A FREAKIN RADIO COMMERCIAL! IN TWENTY SECONDS!!! THAT IS BETTER THAN HALF THE CRAPPY USED CAR ADS ON THE RADIO!!!!
