Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves

  • fiasco · 1 year ago

    I guess the important thing to understand about spurious output (what gets called “hallucinations”) is that it’s neither a bug nor a feature, it’s just the nature of the program. Deep learning language models are just probabilities of co-occurrence of words; there’s no meaning in that. Deep learning can’t be said to generate “true” or “false” information, or rather, it can’t be meaningfully said to generate information at all.
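
    To make that concrete, here's a minimal toy sketch in Python of "probabilities of co-occurrence": a bigram model that picks each next word purely from counted frequencies. (Real LLMs learn weights over much longer contexts, but the sketch shows why "true" and "false" never enter into it.)

    ```python
    # Toy bigram model: the next word is sampled from how often words
    # co-occurred in the training text. No meaning, just counts.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(word):
        candidates = follows[word]
        if not candidates:  # dead end: word never seen with a successor
            return random.choice(corpus)
        # Sample in proportion to observed co-occurrence counts.
        return random.choices(list(candidates), weights=candidates.values())[0]

    word = "the"
    out = [word]
    for _ in range(5):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))  # e.g. "the cat sat on the mat", or fluent nonsense
    ```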

    So then people say that deep learning is helping out in this or that industry. I can tell you that it's pretty useless in my industry, though people are trying. Knowing a lot about the algorithms behind deep learning, and also knowing how fucking gullible people are, I assume that if someone tells me deep learning has ended up being useful in some field, they're either buying the hype or witnessing an odd series of coincidences.

    • flatbield@beehaw.org · 1 year ago

      The thing is, this is not "intelligence", so "AI" and "hallucinations" humanize something that is not human. These are really just huge table lookups with some sort of fancy interpolation/extrapolation logic. So a lot of the copyright people are correct: you should not be able to take their works and then just regurgitate them out. I have problems with copyright and patents myself, because frankly a lot of that isn't very creative either, so one can look at it from both ends. If "AI" can get close to what we do without really being intelligent at all, what does that say about us? We may learn a lot about ourselves in the process.
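
      The "table lookup with fancy interpolation" picture maps roughly onto nearest-neighbor methods; a minimal sketch of that caricature (not how deep nets actually store things, as the reply below points out):

      ```python
      # Toy "lookup table plus interpolation": k-nearest-neighbor regression.
      # Store training examples verbatim, then answer a query by averaging
      # the stored outputs of the k closest inputs.
      def knn_predict(table, x, k=2):
          # table: list of (input, output) pairs -- the "lookup table"
          nearest = sorted(table, key=lambda row: abs(row[0] - x))[:k]
          return sum(y for _, y in nearest) / k  # the "fancy interpolation"

      table = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
      print(knn_predict(table, 1.4))  # 3.0: interpolated between stored rows
      ```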

      • fiasco · 1 year ago

        I guess there’s a sense in which all computer science is table lookups, but if you want a nauseatingly technical summary of deep learning—it’s high-dimensional nonlinear regression with all the methodological seatbelts left unfastened.

        The only thing this says about us is that philosophical illiteracy is a big problem in the sciences, and that computer science is the most embarrassing field in all STEM. Otherwise, you know, people find beauty in randomness (or in stochasticity, if you prefer) all the time. This is no different.

      • hglman@lemmy.ml · 1 year ago

        I would agree: either you have to start saying the AI is smart, or admit that we are not.

    • the_wise_man@kbin.social · 1 year ago

      Deep learning can be, and is, useful today; it's just that the useful applications are things like classifiers and computer vision models. Lots of commercial products are already using those kinds of models to great effect, some for years already.
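
      For instance, a minimal scikit-learn sketch of the bread-and-butter case, a digit classifier trained on the library's bundled 8x8 digit images (assumes scikit-learn is installed):

      ```python
      # A mundane but genuinely useful task: image classification.
      # Train a small network on scikit-learn's digits dataset and
      # measure accuracy on held-out data.
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      X, y = load_digits(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=0)

      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))  # typically ~0.97
      ```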

      • exohuman@kbin.social (OP) · 1 year ago

        What do you think of the AI firms who are saying it could help with making policy decisions, addressing climate change, and leading people to easier lives?

        • GizmoLion@kbin.social · 1 year ago

          Absolutely. Computers are great at picking out patterns across enormous troves of data, and those trends and patterns can help guide policymaking decisions the same way they can help guide medical diagnostic decisions.

          • exohuman@kbin.social (OP) · 1 year ago

            The article was skeptical about this. It said that the problem with expecting AI to revolutionize policy isn't that we don't know what to do; it's that we don't want to do it. For example, we already know how to address climate change, and the smartest people on the planet in the relevant fields have already told us what needs to be done. We just don't want to make the changes necessary.

            • GizmoLion@kbin.social · 1 year ago

              I mean… no argument there. Politicians are famous for needing to be dragged, kicking and screaming, to do the right thing.
              Just in case one decides to, however, I’m all for having the most powerful tools and complete information possible.

            • manitcor@lemmy.intai.tech · 1 year ago

              That's been the case time and again. How many disruptions from the tech bros came to industries that had been stagnant, or moving at a snail's pace in adopting new technology (especially when locked into more expensive legacy systems)?

              Most of the industries that got disrupted could have been secured by the players in those markets; instead they allowed a disruptor to appear unchallenged.

              Remember that the market is not as rational as some might think. You start filling gaps and people often won't ask about the fallout, and many of these services did have people warning against these things.

              We are, for the most part, in a nation that lets you do whatever you want until the effects have hit people, even more so if you are a business. I don't know an easy answer. In some of these cases the old guard needed a smack; in others a more controlled entry may have been better. As of now, "controlled" is just about the size of one's cash pile.

              Cue the ethical corporations discussion…

    • Turkey_Titty_city@kbin.social · 1 year ago

      I mean, AI is already generating lots of bullshit "reports"; you know, stuff that reports "news" with zero skill. It's glorified copy-pasting, really.

      If you think about how much language is rote, in law and elsewhere, it makes a lot of sense to use AI to auto-generate it. But it's not intelligence; it's just a linguistic assembly line. And just like in a factory, it will require human review for quality control.

      • 🐝bownage [they/he]@beehaw.org · 1 year ago

        The thing is, and this is also what annoys me about the article, AI experts and computational linguists know this. It's just the laypeople now using (or promoting) these tools, now that they're public, who don't know what they're talking about and project an intelligence onto AI that isn't there. The real hallucination problem isn't with deep learning; it's with the users.

        • exohuman@kbin.social (OP) · 1 year ago

          The article really isn't about the hallucinations, though. It's about the impact of AI; that's in the second half of the article.

        • mrnotoriousman@kbin.social · 1 year ago

          Spot on. I work on AI and just tell people, "Don't worry, we're not anywhere close to Terminator or Skynet or anything remotely like that yet." I don't know anyone I work with who wouldn't roll their eyes at most of these "articles" you're talking about. It's frustrating reading some of that crap lol.

      • fiasco · 1 year ago

        This is the curation effect: generate lots of chaff, and have humans search for the wheat. Thing is, someone’s already gotten in deep shit for trying to use deep learning for legal filings.

      • shoelace@kbin.social · 1 year ago

        It drives me nuts how often I see one smartass in the comments section of an article pasting the GPT summary of that very article. The quality of that content is comparable to the "reply girl" shit from ten years ago.

    • Arnerob@kbin.social · 1 year ago

      I think it can be useful. I have used it myself, even before ChatGPT existed, when it was just GPT-3. For example, I take a picture, OCR it, and then look for mistakes with GPT, because it's better than a spell checker. I've used it to write code in a language I wasn't familiar with, and once I saw the names of the commands needed, I could fix the code to do what I wanted. I've also used it for inspiration, which I could also have done with an online search. The concept just blew up and people were overstating what it can do, but I think a lot of people now know the limitations.
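
      Roughly, that OCR-then-proofread workflow looks like this minimal sketch (assuming the pytesseract and openai packages and an API key; the model and file names are illustrative):

      ```python
      # Sketch of the workflow described above: OCR a photo, then ask a
      # language model to flag likely OCR mistakes and typos.
      import pytesseract
      from PIL import Image
      from openai import OpenAI

      text = pytesseract.image_to_string(Image.open("page.jpg"))

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative model name
          messages=[{
              "role": "user",
              "content": "List likely OCR errors or typos in this text, "
                         "with suggested corrections:\n\n" + text,
          }],
      )
      print(reply.choices[0].message.content)
      ```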

    • Niello@kbin.social · 1 year ago

      What I hate most about it is that people are already doing very poorly at checking their own information intake for accuracy and misinformation. This comes at one of the worst possible times for things to go south. It's going to challenge the stability of society in a lot of ways, and after how crypto went, I have zero trust that techbros and corporations won't sabotage efforts to get things right for the sake of their own profit.

    • MagicShel@programming.dev · 1 year ago

      In a way, NLP is just sort of an exercise in mental muscle memory. The AI can't do the math that 1+1=2, but if you ask it what 1+1 equals, it will give you a two. Pretty much like any human would: we don't hold up one finger and another finger and count them.

      So in a way, AI embodies a sort of “fuzzy common sense” knowledge. You can ask it questions it hasn’t seen before and it can give answers that haven’t been given before, but conceptually it will spit out “basically the answer” to “basically that question”. For a lot of things that don’t require truly novel thinking, it does sort of know things.

      Of course, just as we can misunderstand a question, phrase an answer badly, or simply misremember, the AI can be wrong. I'd say it can help out quite a bit, but I think it works best as a sort of brainstorming partner to bounce ideas off of. As a software developer, I find it a useful coding partner. It definitely doesn't have all the answers, but you can ask it something like, "why the hell doesn't this code work?" and it might give you a useful answer. It might not, of course, but nothing ventured, nothing gained.
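
      That use looks something like this minimal sketch (assuming the openai package and an API key; the model name is illustrative), treating the reply as a fallible guess:

      ```python
      # Sketch of the "coding partner" use: hand the model a failing
      # snippet plus its error and treat the reply as brainstorming,
      # not as an oracle.
      from openai import OpenAI

      def ask_why_broken(code: str, error: str) -> str:
          client = OpenAI()  # reads OPENAI_API_KEY from the environment
          reply = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[{
                  "role": "user",
                  "content": f"Why doesn't this code work?\n\n{code}\n\n"
                             f"Error:\n{error}\n\nBest guesses welcome.",
              }],
          )
          return reply.choices[0].message.content

      print(ask_why_broken("print(1/0)", "ZeroDivisionError: division by zero"))
      ```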

      It’s best to not think of it or use it like a database, but more like a conversational partner who is fallible like any other, but can respond at your level on just about any subject. Any job that cannot benefit from discussing ideas and issues is probably not a good fit for AI assistants.