“AI” Hurts Consumers and Workers -- and Isn’t Intelligent

cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors. "

  • rockSlayer@lemmy.world · 1 year ago

    At least people are coming around to why it’s called AI. Artificial Intelligence is called that because it’s a facsimile of intelligence: it acts intelligent, but it has no intellect. It’s an algorithm, usually one built as a black box, so no one can analyze exactly how a given output was produced.

    • FaceDeer@kbin.social · 1 year ago

      The human brain is itself still largely a black box as far as our reasoning capabilities are concerned.

      • rockSlayer@lemmy.world · 1 year ago

        We don’t need to develop tech that can’t be analyzed directly. AI can be, and has been, developed in ways that are straightforward to analyze, so you can trace why a given output was produced.
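
        For instance, a decision tree is interpretable by design: you can read off the exact rules behind any prediction. A minimal sketch using scikit-learn (the dataset and model choice here are just illustrative):

        ```python
        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Train a small, fully inspectable model.
        iris = load_iris()
        clf = DecisionTreeClassifier(max_depth=3, random_state=0)
        clf.fit(iris.data, iris.target)

        # Print the learned rules: every prediction traces back to explicit
        # threshold comparisons, unlike the opaque weights of a large neural net.
        print(export_text(clf, feature_names=iris.feature_names))
        ```

        Running it prints the whole tree as nested if/else rules, which answers “why was this output given” directly.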

        • FaceDeer@kbin.social · 1 year ago

          We’ve been trying that approach for decades, and progress has been slow and disappointing.

          When we finally decided “screw it, just build a giant black box and throw terabytes of text at it to see what happens,” we got GPT-3, and now the world is about to be revolutionized.
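
          And the recipe really is that blunt at its core: predict the next token, minimize cross-entropy, repeat at enormous scale. A toy sketch of the objective in PyTorch (the embedding-plus-linear model and random tokens are stand-ins for a real transformer and real text):

          ```python
          import torch
          import torch.nn as nn

          # Stand-in "language model": embed tokens, project back to the vocabulary.
          # A real GPT puts many transformer layers between these two steps.
          vocab_size, dim = 1000, 64
          model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
          opt = torch.optim.Adam(model.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss()

          tokens = torch.randint(0, vocab_size, (8, 33))   # fake corpus batch
          inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each token predicts the next
          logits = model(inputs)                           # (batch, seq, vocab)
          loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))

          loss.backward()  # training is just this step, repeated over terabytes of text
          opt.step()
          ```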

          • rockSlayer@lemmy.world · 1 year ago

            The black-box approach isn’t being used because it’s the new idea; it’s actually the other way around. The newer idea is the set of methods that make analysis easier. There are a few reasons companies aren’t going that route, though:

            1. It’s the newer idea, so not everything has been studied yet and methods are still experimental.
            2. It’s in a company’s interest to make its AI harder to analyze, because it doesn’t want to open the door to a better algorithm from a different company/government/group.
            3. It’s cheaper up front to build a black box and then do statistical analysis the hard, expensive way. Companies would much rather spend money doing things the wrong way than save money long term by doing things the right way.

            • FaceDeer@kbin.social · 1 year ago

              If doing it the “wrong way” is cheap and works well, then perhaps it’s not the “wrong way.”

              At this point there are many companies (and researchers and hobbyists) besides OpenAI doing this stuff. OpenAI just broke the ice and showed what was possible.

              • rockSlayer@lemmy.world · 1 year ago

                I just explained that it’s not cheap. It costs far more to buy a cheap car and do constant maintenance than to buy a mid-tier car that doesn’t need much maintenance. That’s what’s happening with AI right now: we’re buying the cheap car and paying for it in labor and development costs. I’m saying the right way is to buy the more expensive one, which will be cheaper in the long run.

            • Kogasa@programming.dev · 1 year ago

              There is no agent on the planet who is intentionally choosing to make their models harder to analyze. That’s a ridiculous idea you could only believe if you didn’t understand where the complexity comes from in the first place. Creating ML models that can be efficiently and effectively trained and interpreted is an extremely hard and unsolved problem, and whoever could solve it would be rolling in cash.

              • kitonthenet@kbin.social · 1 year ago

                If it’s supposed to be the labor extinguisher of the future, then yes, I expect something on the order of months.

                • FaceDeer@kbin.social · 1 year ago

                  Your expectations are unrealistic. I am a programmer and I find tools like ChatGPT and Copilot to be fantastic, but the company I work for has banned their use until the legal department figures out what the heck (and they won’t figure out what the heck until the judicial system figures out what the heck, and the legislative layer above that). It takes time for these sorts of massive shifts in well-established systems to happen.

                  • kitonthenet@kbin.social · 1 year ago

                    I am too, and it can write boilerplate. It can’t do anything at a systems level, and I can’t even trust it to write something that handles edge cases. I still have to do all the real work; it just writes the boilerplate, which is something I almost never do anyway. The legal side of it is almost exclusively IP rights: I can’t risk putting GPL3 code in my project, and I certainly can’t risk putting our IP into a tool that will regurgitate it somewhere else.

    • Kogasa@programming.dev · 1 year ago

      What do you think “it’s an algorithm” is supposed to imply? Can nothing deterministic be considered intelligent?

      Also, “designed in a black box” is misleading. It’s opaque because the behavior is emergent, not because it was obfuscated or designed in secrecy. The algorithm itself is simple; all the interesting information is encoded in the billions to trillions of parameters. Those parameters aren’t designed at all, they are learned.
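
      To make that concrete, here is a toy sketch in PyTorch (trivially small on purpose): the entire “design” is a couple of lines of gradient descent, yet the final weights are whatever training made them.

      ```python
      import torch

      # The algorithm: one line of arithmetic plus gradient descent.
      w = torch.randn(3, requires_grad=True)  # parameters start random, not designed
      x = torch.tensor([1.0, 2.0, 3.0])
      y = torch.tensor(10.0)

      for _ in range(200):
          loss = (w @ x - y) ** 2             # simple, fully transparent rule
          loss.backward()
          with torch.no_grad():
              w -= 0.01 * w.grad              # the whole "design" of the learning step
              w.grad.zero_()

      print(w)  # nobody chose these values; they were learned
      ```

      Scale that from three weights to hundreds of billions and you get why the result is opaque even though nothing about it was hidden on purpose.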