Computers make mistakes and AI will make things worse — the law must recognize that - eviltoast
  • Teluris@lemmy.world
    10 months ago

    In the example you gave, I would actually put the blame on the software provider. It wouldn’t be ridiculously difficult to anonymize the data: get rid of name, race, and gender, and leave only the information about the crime committed, the evidence, any extenuating circumstances, and the judgment.

    It’s more difficult than simply throwing in all the data, but it can and should be done. It could still contain some bias, based on things like the location of the crime, but that bias would already be greatly reduced.
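
    The stripping step described above could be sketched like this (a minimal illustration only — the field names and record layout are invented, not from any real court-records schema):

    ```python
    # Toy sketch of dropping personally identifiable fields before training.
    # Field names ("name", "race", "gender", ...) are hypothetical examples.
    PII_FIELDS = {"name", "race", "gender"}

    def anonymize(record: dict) -> dict:
        """Return a copy of the record with PII fields removed,
        keeping only the case facts (crime, evidence, judgment, etc.)."""
        return {k: v for k, v in record.items() if k not in PII_FIELDS}

    case = {
        "name": "J. Doe",          # invented example data
        "race": "redacted",
        "gender": "redacted",
        "crime": "burglary",
        "evidence": ["fingerprints"],
        "extenuating": [],
        "judgment": "18 months",
    }

    clean = anonymize(case)
    # clean now contains only crime, evidence, extenuating, judgment
    ```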

    • TwilightVulpine@lemmy.world
      10 months ago

      I don’t think you can completely anonymize data and still end up with useful results, because the AI will be faced with human inconsistency and biases regardless. Take away personally identifiable information and it might mysteriously start behaving more harshly toward certain locations, like, you know, districts where mostly black and poor people live.

      We’d need to have a reckoning with our societal injustices before we can determine what data can be used for many purposes. Unfortunately many people who are responsible for these injustices are still there, and they will be the people who will determine if the AI output is serving their purpose or not.

      • HauntedCupcake@lemmy.world
        10 months ago

        The “AI” that I think is being referenced is one that instructs officers to more heavily patrol certain areas based on crime statistics. As racist officers often patrol black neighbourhoods more heavily, the crime statistics are higher (more crimes caught and reported as more eyes are there). This leads to a feedback loop where the AI looks at the crime stats for certain areas, picks out the black populated ones, then further increases patrols there.

        In the above case, no details about the people are needed, only location, time, and the severity of the crime. The AI is still being racist despite race not being in the dataset.
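
        The loop described above can be shown with a toy simulation (all numbers and the allocation rule are invented for illustration, not from any real policing system). Two districts have identical true crime rates, but one starts with heavier patrols; because patrols determine how much crime gets *observed*, the stats-driven reallocation never corrects the initial bias:

        ```python
        # Toy simulation of the predictive-policing feedback loop.
        # Rates, patrol weights, and the update rule are all invented.
        TRUE_RATE = {"district_a": 0.10, "district_b": 0.10}  # identical real rates

        # District A starts with twice the patrol presence (historical bias).
        patrols = {"district_a": 2.0, "district_b": 1.0}
        reported = {"district_a": 0.0, "district_b": 0.0}

        for year in range(5):
            # More patrols -> more of the same underlying crime is caught and reported.
            for d in patrols:
                reported[d] += TRUE_RATE[d] * patrols[d] * 100
            # The "AI" reallocates a fixed patrol budget proportionally to reported stats.
            total = sum(reported.values())
            for d in patrols:
                patrols[d] = 3.0 * reported[d] / total

        # Despite equal true crime rates, district_a keeps an outsized patrol share:
        # the initial bias is locked in by the loop.
        print(patrols)
        ```

        The point of the sketch: the model sees only location, time, and counts, yet it reproduces the biased patrolling indefinitely, because the counts themselves were generated by the biased patrols.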