Study: Some language reward models exhibit political bias - eviltoast

Is it possible to train reward models to be both truthful and politically unbiased?

This is the question that the CCC team, led by PhD candidate Suyash Fulay and Research Scientist Jad Kabbara, sought to answer. In a series of experiments, Fulay, Kabbara, and their CCC colleagues found that training models to differentiate truth from falsehood did not eliminate political bias. In fact, they found that optimized reward models consistently showed a left-leaning political bias, and that this bias grew larger in bigger models. “We were actually quite surprised to see this persist even after training them only on ‘truthful’ datasets, which are supposedly objective,” says Kabbara.

  • vzq@lemmy.world
    15 days ago

    Maybe it’s because a certain end of the political spectrum JUST LIES ALL THE TIME?

  • supersquirrel@sopuli.xyz
    15 days ago

    “may also be biased, even when trained on statements known to be objectively truthful.”

    I feel like computer science aggressively dismisses the humanities/philosophy as a waste of time, and by doing so fundamentally undermines itself and gets hopelessly trapped in the wrong questions.