• jacksilver@lemmy.world · 10 months ago

    It’s done because the underlying training data is heavily biased to begin with. It’s been a known issue for a long time with AI/ML; for example, cameras whose face detection misread darker skin tones (the “racist camera” phenomenon) have been an issue for decades https://petapixel.com/2010/01/22/racist-camera-phenomenon-explained-almost/.

    So they do this to try to correct for the biases in their training data, basically patching the prompt instead of the dataset (rough sketch of the idea below). It’s a terrible fix, and it shows the rocky path forward for GenAI, but it’s easier than actually fixing the problem ¯\_(ツ)_/¯
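
    To make the “patch the prompt, not the data” point concrete, here’s a minimal sketch of the kind of inference-time hack being described. Everything in it (PERSON_TERMS, DIVERSITY_QUALIFIERS, patch_prompt) is hypothetical illustration, not any vendor’s actual code:

    ```python
    import random

    # Hypothetical: terms suggesting the prompt depicts people.
    PERSON_TERMS = {"person", "people", "man", "woman", "doctor", "nurse", "ceo"}

    # Hypothetical qualifiers quietly appended to offset skew in the training data.
    DIVERSITY_QUALIFIERS = [
        "of diverse ethnicities",
        "of varied ages",
        "of mixed genders",
    ]

    def patch_prompt(user_prompt: str) -> str:
        """Append a random diversity qualifier when the prompt mentions people.

        This only patches the symptom at generation time; the underlying
        dataset skew is untouched, which is why results can misfire badly.
        """
        words = set(user_prompt.lower().split())
        if words & PERSON_TERMS:
            return f"{user_prompt}, {random.choice(DIVERSITY_QUALIFIERS)}"
        return user_prompt

    print(patch_prompt("a photo of a doctor"))
    # e.g. "a photo of a doctor, of varied ages"
    ```

    The keyword match has no idea what the user actually asked for, which is exactly how these corrections end up applied in contexts where they make no sense.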