The student ended up with a fairer complexion, dark blonde hair and blue eyes after her Playground AI request

  • Jeena@jemmy.jeena.net

    To be fair, I used a Chinese AI picture generator app with my face and it made it look more Asian. It’s obvious that every piece of software has biases towards the people who made and trained it. It’s not good, but it’s expected, and it’s happening everywhere.

    • stopthatgirl7@kbin.social (OP)

      Ok, but she asked it to make her look professional and the only thing it changed was her race. Not the background, not her clothes. Last I checked, a university sweatshirt wasn’t exactly professional wear.

      • jet@hackertalks.com

        Machine learning is biased towards its training data. If the image generation algorithm (notice I’m not saying AI) is trained on photos of “professionals” who are mostly of a certain demographic, that’s what it will prefer when it’s generating an image.

        So these shocking exposés really just amount to: this image generator was trained on biased data. But building biases is part of the human condition, so we’re never really going to get away from that.
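
        A toy sketch of what I mean (the demographic split here is made up, purely to illustrate the mechanism): a generator with no other signal just reproduces the distribution of its training data.

        ```python
        import random

        # Hypothetical training set for the tag "professional" -- the
        # 80/10/10 skew is invented, but some skew always exists in
        # scraped data.
        TRAINING_SET = ["white"] * 80 + ["asian"] * 10 + ["black"] * 10

        def generate(prompt: str, n: int = 1000) -> list[str]:
            # With no counter-signal from the prompt, "generation" is
            # effectively sampling from the training distribution.
            return [random.choice(TRAINING_SET) for _ in range(n)]

        samples = generate("professional headshot")
        for group in sorted(set(TRAINING_SET)):
            print(group, round(samples.count(group) / len(samples), 2))
        # Prints roughly asian 0.1 / black 0.1 / white 0.8 -- the
        # "preference" is just the training data talking.
        ```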

  • Encrypt-Keeper@lemmy.world

    I mean, wouldn’t this just be due to the sheer number of BS “female professional” stock photos on the websites of call centers globally that the AI ingested? Those “professional white person” photos get used especially on non-Western websites to gain legitimacy in the West.

    Given what little I know about how AI ingests and spits out data, it might be correlating the buzzword “professional” with stock photos of white people ingested from Asian websites. It might be “wrong”, but the AI doesn’t attempt to be “right”; it just tries to give you what you expect based on the data it has.
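
    Rough sketch of that correlation idea (the captions and tags are invented, and real pipelines work on embeddings rather than string matching, but the co-occurrence effect is the same):

    ```python
    from collections import Counter

    # Invented (caption, demographic) pairs, standing in for scraped
    # stock-photo metadata from corporate websites.
    CORPUS = [
        ("professional customer support agent", "white"),
        ("professional businesswoman smiling", "white"),
        ("professional team in a modern office", "white"),
        ("professional consultant headshot", "white"),
        ("street market vendor", "asian"),
        ("family dinner at home", "asian"),
    ]

    # Which demographic does the buzzword "professional" co-occur with?
    counts = Counter(tag for caption, tag in CORPUS if "professional" in caption)
    total = sum(counts.values())
    for tag, n in counts.items():
        print(f"P({tag} | 'professional') = {n / total:.2f}")
    # Prints P(white | 'professional') = 1.00 -- not "right", just what
    # the ingested data says "professional" looks like.
    ```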

    • Fubber Nuckin'@lemmy.world

      Just, you know, as long as that profile isn’t for some site whose sole purpose is allowing others to identify you for your career.

  • ∟⊔⊤∦∣≶@lemmy.nz

    There’s been huge discussion on this already: https://lemmy.nz/post/684888

    Sorry, not sure how to format a ! link so the post opens in your instance.

    TL;DR

    Any result is going to be biased. If it generated a crab wearing Lederhosen, that’s obviously a bias towards crabs. You can’t have unbiased output, because the prompt is what controls the bias. There’s no cause for concern here: the model outputs, by default, the general trend of the data it was trained on. If it had been trained on crabs, it would generate crab-like images.

    You can fix bias with LoRAs and good prompting.
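
    E.g. with the Hugging Face diffusers library, roughly like this (the LoRA repo name is a placeholder; substitute whichever LoRA gives you the look you actually want):

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # A commonly used SD 1.5 checkpoint -- swap in whatever model you
    # use. The LoRA repo below is a made-up placeholder.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("your-user/your-style-lora")  # placeholder

    # Good prompting: spell out the attributes you want instead of
    # letting the model fall back on its training-data default.
    image = pipe(
        "professional headshot of a young Asian woman, blazer, "
        "office background, natural lighting",
        negative_prompt="distorted face, blurry",
        num_inference_steps=30,
    ).images[0]
    image.save("headshot.png")
    ```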