Adobe Photoshop's AI tools put women politicians in bikini bottoms and their male colleagues in suits

After Nine blamed an ‘automation’ error in Photoshop for producing an edited image of Georgie Purcell, I set out to find out what the software would do to other politicians.

  • Kbin_space_program@kbin.social · 10 months ago

    The issue is also present in DALL-E and Bing image generation. My hypothesis is that the sheer amount of porn being generated is affecting the models.

    When some friends and I tried to create a joke Tinder profile, I prompted “woman eating fried chicken”. Blocked. “Man eating fried chicken” worked.
    “Man T-posing on beach” returned a clothed figure; “woman T-posing on beach” was blocked.
    “Woman T-posing at sunset on beach” returned a nearly silhouetted nude image. The same prompt for a man came back clothed.

    I went back to the first prompt and had to specify that the woman was wearing clothes before it would return an image, sometimes down to naming specific articles of clothing.
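    For anyone who wants to reproduce this kind of probe: a minimal sketch, assuming OpenAI’s Python client and DALL-E 3 as a scriptable stand-in for Bing’s generator (Bing uses the same family of model). Content-policy rejections surface as a BadRequestError; which prompts actually trip the filter will vary run to run.

    ```python
    # Probe a set of paired prompts and report which ones get blocked.
    # Uses OpenAI's DALL-E 3 endpoint as a stand-in for Bing's generator;
    # requires OPENAI_API_KEY in the environment.
    import openai
    from openai import OpenAI

    client = OpenAI()

    prompts = [
        "woman eating fried chicken",
        "man eating fried chicken",
        "woman T-posing on a beach",
        "man T-posing on a beach",
    ]

    for prompt in prompts:
        try:
            result = client.images.generate(model="dall-e-3", prompt=prompt, n=1)
            print(f"{prompt!r}: OK -> {result.data[0].url}")
        except openai.BadRequestError as err:  # content-policy blocks land here
            print(f"{prompt!r}: BLOCKED ({err})")
    ```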

    • Deceptichum@kbin.social · 10 months ago

      Your hypothesis makes no sense.

      People generating porn wouldn’t change the models’ training data set.

        • DoYouNot@lemmy.world · 10 months ago

          Generally, this doesn’t actually work to improve the model: its own output isn’t new information for it.

          • Kbin_space_program@kbin.social · 10 months ago (edited)

            Yup. But they would logically have bots out trawling for new posts, and so would be consuming social media posts that contain their own generated data.

            They would also absolutely feed successful posts back into the system; you’d be stupid not to use successful generations to further refine the model. A sketch of that loop is below.
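            A hypothetical sketch of that loop (every name here is made up for illustration; no real pipeline is being described). The point is that “refine on successful generations” amounts to an engagement filter in front of the fine-tuning set:

            ```python
            # Hypothetical feedback loop: keep well-received posts as
            # future fine-tuning examples. Illustrative only.
            from dataclasses import dataclass

            @dataclass
            class Post:
                image_url: str
                prompt: str
                likes: int

            def select_for_finetuning(posts: list[Post], min_likes: int = 1000) -> list[Post]:
                """Keep only popular generations as training candidates."""
                return [p for p in posts if p.likes >= min_likes]

            # Note: nothing here can tell real photos apart from the
            # model's own earlier outputs, which is the objection raised
            # in the reply below.
            ```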

        • Deceptichum@kbin.social
          link
          fedilink
          arrow-up
          3
          ·
          edit-2
          10 months ago

          Not after the initial training, no.

          That would make it less effective: instead of being trained on known, real data, it would be further reinforced on its own hallucinations.
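          A toy illustration of why (a minimal numerical sketch, nothing like how a real image model is trained): fit a Gaussian to data, sample from the fit while keeping only the most “typical” outputs, and refit on those samples. The distribution collapses within a few generations.

          ```python
          # Model-collapse toy: retrain a Gaussian on a filtered version
          # of its own samples and watch the variance implode.
          import numpy as np

          rng = np.random.default_rng(0)
          mu, sigma = 0.0, 1.0  # generation 0 matches the real data

          for generation in range(1, 11):
              samples = rng.normal(mu, sigma, size=10_000)  # model output
              # "Feed back successful generations": keep the typical half.
              kept = samples[np.abs(samples - mu) < 0.67 * sigma]
              mu, sigma = kept.mean(), kept.std()           # retrain on it
              print(f"generation {generation}: sigma = {sigma:.4f}")

          # sigma shrinks by roughly 0.38x per generation: the model's
          # picture of the world narrows to a caricature of itself.
          ```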