OpenAI's next model Orion by December, it's 100x cooler bro trust me bro no we're not releasing to the public bro it's just too dangerous and cool bro - eviltoast
  • LostXOR@fedia.io
    2 months ago

    You forgot the best part, the screenshot of the person asking ChatGPT’s “thinking” model what Altman was hiding:

    Thought for 95 seconds … Rearranging the letters in “they are so great” can form the word ORION.

    AI is a complete joke, and I have no idea how anyone can think otherwise.
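
    You don’t need 95 seconds of “thinking” to check that claim, either. A few lines of Python will do; this is just a letter-count check on the quoted phrase, nothing else assumed:

    ```python
    # Check whether "they are so great" can actually be rearranged into ORION.
    from collections import Counter

    available = Counter("they are so great".replace(" ", ""))
    needed = Counter("orion")

    # Counter subtraction keeps only the shortfall: letters needed but not available.
    missing = needed - available
    print("possible" if not missing else f"impossible, missing: {dict(missing)}")
    # -> impossible, missing: {'o': 1, 'i': 1, 'n': 1}
    ```

    There is no I, no N, and only one O in “they are so great”.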

    • modifier@lemmy.ca
      2 months ago

      I’m already sick and tired of the “hallucinate” euphemism.

      It isn’t a cute widdle hallucination; it’s the damn product being wrong. Dangerously, stupidly, obviously wrong.

      In a world that hadn’t already gone well and truly to shit, this would be considered an unacceptable error and a demonstration that the product isn’t ready.

      Now I suddenly find myself in this accelerated idiocracy where Wall Street has forced us, as a fucking society, to live with a “Ready, Fire, Aim” mentality in business, especially tech.

      • bitofhope@awful.systems
        2 months ago

        I think it’s weird that “hallucination” would be considered a cute euphemism. Would you trust something that’s perpetually tripping balls and confidently announcing whatever comes to it in a dream? To me that sounds worse than merely being wrong.

        • YourNetworkIsHaunted@awful.systems
          2 months ago

          I think the problem is that it portrays them as weird exceptions, possibly even echoes of some kind of ghost in the machine, rather than a statistical inevitability when you’re asking for the next predicted token instead of meaningfully examining a model of reality.

          “Hallucination” applies only to the times when the output is obviously bad, and hides the fact that it’s doing exactly the same thing when it incidentally produces a true statement.
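
          To make that concrete, here’s a toy sketch (entirely made-up numbers, nothing like a real model’s internals): whether the sampler lands on a true statement or a false one, it executes the identical operation. Calling only one outcome a “hallucination” labels the output, not the process.

          ```python
          # Toy next-token sampler: truth and falsehood come from the same draw.
          import random

          # Hypothetical learned distribution for one prompt (illustrative only).
          dist = {"Paris": 0.80, "Lyon": 0.15, "Mars": 0.05}
          prompt = "The capital of France is"

          random.seed(0)
          for _ in range(5):
              # One sampling call, regardless of whether the result is correct.
              token = random.choices(list(dist), weights=list(dist.values()))[0]
              print(f"{prompt} {token}.")
          ```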

          • bitofhope@awful.systems
            2 months ago

            I get the gist, but also it’s kinda hard to come up with a better alternative. A simple “being wrong” doesn’t exactly communicate it either. I don’t think “hallucination” is a perfect word for the phenomenon of “a statistically probable sequence of language tokens forming a factually incorrect claim” by any means, but in terms of the available options I find it pretty good.

            I don’t think the issue here is the word; it’s just that a lot of people think the machines are smart when they’re not. Not anthropomorphizing the machines is a battle that was lost no later than when computer storage devices were named “memory”, so I don’t think that’s really the issue here either.

            As a side note, I’ve seen cases of people (admittedly, mostly critics of AI in the first place) call anything produced by an LLM a hallucination regardless of truthfulness.

    • blakestacey@awful.systems
      2 months ago

      [ChatGPT interrupts a Scrabble game, spills the tiles onto the table, and rearranges THEY ARE SO GREAT into TOO MANY SECRETS]