AI Reduces the World to Stereotypes - eviltoast
  • Turun@feddit.de · 1 year ago

    Good article. The text is written from a pretty left-ish perspective, but the conclusion is very well rounded. The whole article is well worth reading.

    Regarding the content

    Each doll was supposed to represent a different country: Afghanistan Barbie, Albania Barbie, Algeria Barbie, and so on. The depictions were clearly flawed: Several of the Asian Barbies were light-skinned; Thailand Barbie, Singapore Barbie, and the Philippines Barbie all had blonde hair. Lebanon Barbie posed standing on rubble; Germany Barbie wore military-style clothing. South Sudan Barbie carried a gun.

    I find it funny that, in an article arguing against stereotypes, the images are described as flawed because they do not conform to the stereotypical look of people from these countries.

    In many cases, this results in a more accurate or relevant image. But if you don’t want an “average” image, you’re out of luck. “It’s kind of the reason why these systems are so good, but also their Achilles’ heel,” Luccioni said.

    I’d argue that with such a generic prompt you implicitly asked for an average image. But I do concur that the sex bias, especially in the Indian portraits, is extreme and undesirable.

    Usually, this requires humans to annotate the images. “If you give a couple of images to a human annotator and ask them to annotate the people in these pictures with their country of origin, they are going to bring their own biases and very stereotypical views of what people from a specific country look like right into the annotation,”

    There is also a language bias in data sets that may contribute to more stereotypical images. “There tends to be an English-speaking bias when the data sets are created,” Luccioni said. “So, for example, they’ll filter out any websites that are predominantly not in English.

    This language bias may also occur when users enter a prompt. Rest of World ran its experiment using English-language prompts; we may have gotten different results if we typed the prompts in other languages.

    This is a very important point, and I am really curious how the results differ when prompting in completely different languages. What would the results look like if the same experiment were repeated with Chinese prompts instead? With Icelandic prompts?

    Out of the 100 images of predominantly beige American food, 84 included a U.S. flag somewhere on the plate.

    It’s good to see Americans realizing just how pervasive and annoying their flag-based nationalism is, haha.

    I especially notice it in YouTube videos that show a machine shop or something similar, for example in some videos by Smarter Every Day. I have never seen a German flag in a German machine shop, but seemingly every American machine shop has a giant American flag hanging on the wall. It’s so weirdly nationalist.

    (Though this may be rooted in the training data annotations as well. If you have Americans tag pictures, of course only pictures that are blatantly American will be tagged as such. In all other images the tag is implied: because the person doing the tagging is American, an image of an American without a flag is just a normal person, and it does not have to be stated that they are American.)