Andrew Molitor on "AI safety". "people are gonna hook these dumb things up to stuff they should not, and people will get killed. Hopefully the same people, but probably other people." - eviltoast
  • scruiser@awful.systems · 13 days ago

    This is a good summary of half of the motive to ignore the real AI safety problems in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn’t considered the extent to which Altman’s actual plan is “hey morons, hook my shit up to fucking everything and try to stumble across a use case that’s good for something” (as opposed to the “we’re building a genie, and when we’re done we’re going to ask it for three wishes” line he hypes up). That makes more sense as a long-term plan…