Top clowns all agree their balloon animals are slightly sentient

Then: Google fired Blake Lemoine for saying AIs are sentient

Now: Geoffrey Hinton, the #1 most cited AI scientist, quits Google & says AIs are sentient

That makes 2 of the 3 most cited AI scientists:

  • Ilya Sutskever (#3) said they may be (Andrej Karpathy agreed)
  • Yoshua Bengio (#2) has not opined on this to my knowledge? Anyone know?

Also, ALL 3 of the most cited AI scientists are very concerned about AI extinction risk.

ALL 3 switched from working on AI capabilities to AI safety.

Anyone who still dismisses this as “silly sci-fi” is insulting the most eminent scientists of this field.

Anyway, brace yourselves… the Overton Window on AI sentience/consciousness/self-awareness is about to blow open

  • BigMuffin69@awful.systems (OP) · 17 upvotes · 7 months ago
    • Barges in
    • Insists that somewhere between randomly initializing the model weights and finishing training, sentience magically emerges
    • Refuses to elaborate
    • Leaves Google

    • froztbyte@awful.systems · 12 upvotes · 7 months ago

      Ah but we all know that Plato’s cave is an allegory about the shadows cast by the basilisk upon all our mental theaters

      (That Twitter clip was amazingly unhinged; I wonder what the full context was)

      • zogwarg@awful.systems · 7 upvotes · 7 months ago

        And those shadows are just as sentient as we are; even if they don’t depict the world, they convey a perception of a hypothetical world in which they are accurate!

        Trying to grapple with the meaning of consciousness through input/output is so close to being philosophical-zombie-type interesting, and yet what he actually says is so far off and vacuous that it could apply to dice picking which color the sky is today. Also pretty hilarious that we would choose being WRONG as a baseline for outrospection (because LLMs are so bad), instead of using the more natural cooperative nature of language. (Which machines fail at, which is maybe also why.)