LessWrong: but what about some eugenics, tho?
  • gerikson@awful.systems · 1 year ago

    OK, so obviously “alignment” means “teach AI not to kill all humans”, but now I figure they also want to prevent AI from using all that computing power to endlessly masturbate, or compose hippie poems, or figure out Communism is the answer to humanity’s problems.

    • locallynonlinear@awful.systems · 1 year ago

      In practice, alignment means “control”.

      And the existential panic comes from realizing that control doesn’t scale. So rather than admit that “alignment” doesn’t mean what they think it means, rather than admit that Darwinian evolution is useful but incomplete and cannot sufficiently explain all phenomena at both the macro and micro levels, rather than possibly consider that intelligence is abundant in the systems all around us and that we’re constantly in tenuous relationships at the edge of uncertainty with all of it,

      it’s the end of all meaning, aka the robot overlord.