Content moderators who worked on ChatGPT say they were traumatized by reviewing graphic content: 'It has destroyed me completely.'

Moderators told The Guardian that the content they reviewed depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world

    He said that many of these passages centered on sexual violence and that the work caused him to grow paranoid about those around him. He said this damaged his mental state and his relationship with his family.

    Another former moderator, Alex Kairu, told the news outlet that what he saw on the job “destroyed me completely.” He said that he became introverted and that his physical relationship with his wife deteriorated.

    The moderators told The Guardian that the content up for review often depicted graphic scenes of violence, child abuse, bestiality, murder, and sexual abuse.

    A Sama spokesperson told the news outlet that workers were paid from $1.46 to $3.74 an hour. Time previously reported that the data labelers were paid less than $2 an hour to review content for OpenAI.

    Sam deserves to be sued into bankruptcy at this point.

    • fluxion@lemmy.world

      Meanwhile, Elon Musk is playing with his nipples at the thought of acquiring ChatGPT and then firing all these moderators.

    • nxfsi@lemmy.world

      They should be grateful; they’re being paid infinitely better than Reddit mods.

      • spiderman@ani.social

        Less pay, no mental health care for the moderators, and there should be better detection of NSFW and NSFL content by bots or AI (or any automated system), so that only a very small amount of material reaches the moderators for checking after most of it has been taken down automatically. See the sketch below.
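
        A minimal sketch of that pre-filter idea (purely illustrative; the score_item function, the thresholds, and the toy classifier are hypothetical placeholders, not any real moderation system):

        ```python
        # Illustrative pre-filtering pipeline: an automated classifier handles the
        # clear-cut cases and only routes ambiguous items to human moderators.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Decision:
            item_id: str
            action: str   # "auto_remove", "auto_approve", or "human_review"
            score: float

        def route(item_id: str, text: str, score_item: Callable[[str], float],
                  remove_threshold: float = 0.95, approve_threshold: float = 0.05) -> Decision:
            """Route one piece of content based on a model's 'harmful' probability."""
            score = score_item(text)
            if score >= remove_threshold:
                action = "auto_remove"    # confidently harmful: never shown to a human
            elif score <= approve_threshold:
                action = "auto_approve"   # confidently benign: never shown to a human
            else:
                action = "human_review"   # only the ambiguous slice reaches moderators
            return Decision(item_id, action, score)

        if __name__ == "__main__":
            # Toy stand-in for a real classifier: just flags a keyword.
            def toy_score(text: str) -> float:
                return 0.99 if "graphic violence" in text.lower() else 0.5 if "?" in text else 0.01

            for i, sample in enumerate(["a cat photo", "is this allowed?", "graphic violence clip"]):
                print(route(f"item-{i}", sample, toy_score))
        ```

        The point of the design is that humans only ever see the middle band of uncertain items; how much trauma that actually avoids depends entirely on how good the automated classifier is.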

        • Thorny_Thicket@sopuli.xyz

          It’s the job of a content moderator to look at this kind of stuff though. That’s literally what you’re being paid for.

          • spiderman@ani.social

            doesn’t mean they should be paid so little and go without mental healthcare. they are humans too, and they’re just as sensitive to disturbing stuff as the rest of us.

            • Thorny_Thicket@sopuli.xyz

              Sure, but these aren’t random people grabbed off the street and forced to do content moderation, so I don’t quite get why Sam needs to be sued into bankruptcy because of this.