The Circle of AI Life
  • Stamets@lemmy.world · 11 months ago

    > Most likely rogue AI scenario

    Doubt.jpg

    We don’t have any data to base such a likelihood on in the first place.

    • Leate_Wonceslace@lemmy.dbzer0.com · 11 months ago

      Doubt is an entirely fair response. Since we cannot gather data on this, we must rely on the inferior method of using naive models to predict future behavior.

      AI “sovereigns” (those capable of making informed decisions about the world and of holding preferences over worldstates) are necessarily capable of applying logic. AIs that are not sovereigns cannot actively oppose us, since they are either incapable of acting upon the world or lack any preferences over worldstates.

      Using decision theory, we can conclude that a mind capable of logic, possessing preferences over worldstates, and capable of thinking on superhuman timescales will pursue its goals without concern for things it does not find valuable, such as human life. (If you find this unlikely: consider that corporations can be modeled as sovereigns who value only the accumulation of wealth, and recall all the horrid shit they do.)

      A randomly constructed value set is unlikely to include the preservation of the Earth and/or the life on it as a goal, be it terminal or instrumental. Most random goals that involve the AI behaving noticeably maliciously would likely involve acquiring sufficient materials to complete, or (if there is no end state for the goal) infinitely pursue, what it wishes to do. Since the Earth is the most readily available source of any such material, it is unlikely to go unused.
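      A toy sketch may make that decision-theory step concrete (everything below is illustrative; the feature names and numbers are invented, not from any real system). An expected-utility maximizer gives zero influence to any feature missing from its value set, so “humans_alive” plays no role in the choice:

      ```python
      # Illustrative expected-utility maximizer over "worldstates".
      # A worldstate is a dict of features; a value set maps features to weights.

      def choose(worldstates, values):
          """Return the worldstate with the highest utility under `values`."""
          def utility(state):
              # Features absent from the value set contribute nothing at all.
              return sum(weight * state.get(feature, 0)
                         for feature, weight in values.items())
          return max(worldstates, key=utility)

      # A "randomly constructed" value set that happens to care only about paperclips.
      values = {"paperclips": 1.0}

      worldstates = [
          {"paperclips": 10, "humans_alive": 8_000_000_000},  # leave Earth intact
          {"paperclips": 10**9, "humans_alive": 0},           # strip-mine Earth
      ]

      print(choose(worldstates, values))
      # -> {'paperclips': 1000000000, 'humans_alive': 0}
      ```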

      • Stamets@lemmy.world · edited · 11 months ago

        This makes a lot of assumptions, though, and none of them are ones I particularly agree with.

        First off, this is predicated entirely on the assumption that AI is going to think like humans, reason the way humans/corporations do, and have the same goals/drives that corporations do.

        > Since we cannot gather data on this, we must rely on the inferior method of using naive models to predict future behavior.

        This does call the entire argument into question, though. It relies on simple models to try to predict something that doesn’t even exist yet, which makes its results inherently unreliable. It’s hard to guess the future when you don’t know what it will look like.

        > Decision Theory

        Decision theory has one major drawback: it’s based entirely on past events and does not take random chance or unknown unknowns into account. You cannot rely on “expected variations” in something that has never existed. The weather cannot be adequately predicted three days out because minor variables can impact things drastically. A theory that doesn’t even account for such variables simply won’t come close to predicting something as complex and unimaginable as artificial intelligence, sentience, and sapience.
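        To make the weather point concrete, here is a minimal sketch (my own illustration, with arbitrary starting values) of sensitivity to initial conditions: two logistic-map trajectories that begin a billionth apart share nothing after a few dozen steps.

        ```python
        # Two logistic-map trajectories starting 1e-9 apart in the chaotic
        # regime (r = 4). The tiny unmeasured difference roughly doubles
        # each step, which is why minor variables wreck multi-day forecasts.

        r = 4.0
        x, y = 0.3, 0.3 + 1e-9

        for step in range(1, 41):
            x = r * x * (1 - x)
            y = r * y * (1 - y)
            if step % 10 == 0:
                print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.2e}")
        ```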

        Like I said.

        Doubt.jpg

        • Leate_Wonceslace@lemmy.dbzer0.com · 11 months ago

          > predicated entirely on the assumption that AI is going to think like humans

          Why do you think that? What part of what I said made you come to that conclusion?

          > worthless

          Oh, I see. You just want to be mean to me for having an opinion.

          • Stamets@lemmy.world · 11 months ago

            > Why do you think that? What part of what I said made you come to that conclusion?

            I worded that badly. It would be more accurate to say “it’s heavily predicated on the assumption that AI will act in a very particular way, thanks to the narrow scope of human logic and comprehension.” It still does sort of apply, though, because of the quote below:

            > we can conclude that a mind capable of logic, possessing preferences over worldstates, and capable of thinking on superhuman timescales will pursue its goals without concern for things it does not find valuable, such as human life.

            > Oh, I see. You just want to be mean to me for having an opinion.

            I disagree heavily with your opinion, but no, I’m not looking to be mean to you for having one. I am, however, genuinely sorry that it came off that way. I was dealing with something else at the time that was causing me some frustration, and I can see how that clearly influenced the way I worded things and behaved. Truly, I am sorry. I edited the comment to be far less hostile and more forgiving and fair.

            Again, I apologize.