"I lost trust": Why the OpenAI team in charge of safeguarding humanity imploded - eviltoast
  • Veedem@lemmy.world · 6 months ago

    I mean, is this stuff even really AI? It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out. I’m not sure this is the tech that will decide humanity is unnecessary.

    It’s just rebranded machine learning, IMO.
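
    (A minimal toy sketch of that “most probable next word” picture, with a made-up vocabulary and hand-set probabilities rather than any real model:)

    ```python
    import random

    # Toy next-token "model": hand-set probabilities for what follows "the cat".
    # A real LLM derives these from billions of learned parameters.
    next_token_probs = {"sat": 0.5, "ran": 0.3, "meowed": 0.2}

    # Greedy decoding: always take the single most probable token.
    greedy = max(next_token_probs, key=next_token_probs.get)

    # Sampled decoding: draw in proportion to probability, which is closer
    # to what chat models actually do than pure "most probable".
    sampled = random.choices(list(next_token_probs),
                             weights=list(next_token_probs.values()))[0]

    print("the cat", greedy)   # always "the cat sat"
    print("the cat", sampled)  # varies from run to run
    ```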

    • kromem@lemmy.world · 6 months ago

      It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.

      Neither of these things is true.

      It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).

      And while it is trained on predicting the next token, that doesn’t mean it generates text purely from surface statistics or by always picking the single “most probable” word, as your sentence suggests.

      Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

      And that was a toy model.
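
      (A minimal sketch of the probing technique behind those findings, assuming activations have already been extracted; the data below is a synthetic stand-in, where the papers use real Othello-GPT layers:)

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      # Stand-ins for what the papers extract from the network:
      # X = hidden activations at each move, y = the state of one board
      # square (0 = empty, 1 = mine, 2 = opponent's). Synthetic here.
      n_moves, d_model = 2000, 512
      X = rng.normal(size=(n_moves, d_model))
      y = rng.integers(0, 3, size=n_moves)

      # A linear probe: if a plain linear classifier can read the square's
      # state out of the activations, the board state is encoded inside
      # the model, i.e. a world model rather than surface statistics.
      probe = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
      print("probe accuracy:", probe.score(X[1500:], y[1500:]))
      # ~0.33 (chance) on this random data; the papers report high
      # accuracy on real activations, one square probed at a time.
      ```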

      • technocrit@lemmy.dbzer0.com · 6 months ago

        Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

        AKA Othello-GPT chooses moves based on statistics.

        Ofc it’s going to use a virtual board in this process. Why would a computer ever use a real one?

        There’s zero awareness here.

        • xthexder@l.sw0.com · 6 months ago

          Let me try putting this a different way: the machine is picking the next best word / action / chess move to output based on its past experience of the world (i.e. its training data). It’s not just statistics; it’s making millions of learned connections between words, and through association they start to have meaning.
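
          (A toy sketch of that “meaning through association” idea: hand-made vectors, with cosine similarity standing in for the learned connections; real embeddings are learned from co-occurrence in text, not set by hand.)

          ```python
          import numpy as np

          # Hand-made 3-d "embeddings" (features: feline, canine, royalty).
          # Real models learn hundreds of dimensions from text alone.
          vec = {
              "cat":    np.array([0.9, 0.1, 0.0]),
              "kitten": np.array([0.8, 0.1, 0.1]),
              "dog":    np.array([0.1, 0.9, 0.0]),
              "king":   np.array([0.0, 0.1, 0.9]),
          }

          def cosine(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

          # Words used in similar contexts end up close together:
          print(cosine(vec["cat"], vec["kitten"]))  # high: associated
          print(cosine(vec["cat"], vec["king"]))    # low: unrelated
          ```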

          Is this not exactly what the human brain does itself? Humans just have the advantage of multiple senses and having a physical agent (a body) to interact with the world.

          The problem AI has is that it has no grounding in reality. It’s like a human talking about fantasy creatures like unicorns: we’ve only ever experienced them through descriptions and art created from those descriptions, never directly.

    • Pilferjinx@lemmy.world · 6 months ago

      The definitions and semantics are getting stressed to their breaking points. We don’t have a clear philosophy of mind for humans, let alone one that extends to other, non-human agents.

      • dustyData@lemmy.world · 6 months ago

        We have three thousand years of tradition in philosophy of mind; we do have a clear idea. It’s just somewhat complex and difficult to grapple with, and there is still room for development and understanding. But this is like saying we don’t have a clear philosophy of physics just because quantum physics is hard and there are things we don’t fully understand yet.

        As for non-human agents, what even is that? Are dogs non-human agents? Fish? Viruses? Computers are just the newest addition to the list of non-human agents we have philosophized about, and we probably understand the minds of other, relatively simple life forms better than our own.

        Definitions and semantics are always being stressed and are always breaking; that’s what symbols are for, it’s one of their main defining use cases. Go talk to a north-east African about rizz and tell me how that goes.

    • redcalcium@lemmy.institute · 6 months ago

      Supposedly they found a new method (Q*) that significantly improved their models, enough that some key people revolted, trying to keep the company from monetizing it out of ethical concern. Those people have been pushed out, ofc.

    • erwan@lemmy.ml · 6 months ago

      OK, so generative AI is just machine learning, rebranded.

      But to get back to what AI is: the definition has been moving forever, as AI becomes “just software” once it becomes ubiquitous. People were shocked that machines could calculate, then that they could play chess better than humans, then that they could read handwriting…

      The first mistake was inventing the term to start with, as it implies a thinking machine, which they’re not.

      Or as Dijkstra put it: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”

      • blurg@lemmy.world · 6 months ago

        Or as Dijkstra put it: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”

        Alan Turing put it similarly: the question as posed is nonsense. However, if you define “machine” and “thinking” and recast the question as whether machine thinking can be made indistinguishable from human thinking, you can, in theory, answer yes (rough paraphrase). Though the current evidence suggests otherwise (e.g. AI learning from other AI’s output drifts toward nonsense).

        For more, see Turing’s original paper, “Computing Machinery and Intelligence” (1950), which lays out the Imitation Game.
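
        (That drift, often called “model collapse,” is easy to reproduce in miniature. A sketch under the usual toy assumptions: a 1-d Gaussian stands in for the model, and each generation is fitted only to the previous generation’s output.)

        ```python
        import numpy as np

        rng = np.random.default_rng(42)

        # Generation 0: "real data" from the true distribution.
        data = rng.normal(loc=0.0, scale=1.0, size=100)

        for gen in range(500):
            # Fit a tiny "model" (just a mean and std) to the current data...
            mu, sigma = data.mean(), data.std()
            # ...then train the next generation only on that model's output.
            data = rng.normal(loc=mu, scale=sigma, size=100)

        # The fitted std tends toward 0 over generations: each refit loses a
        # little variance, the toy analogue of AI-on-AI output degenerating.
        print("std after 500 generations:", data.std())
        ```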

    • Possibly linux@lemmy.zip · 6 months ago

      The problem is that it’s capable of doing things that historically weren’t possible with a machine. It can “act natural” in a sense.

      There are so many cans of worms