The student ended up with a fairer complexion, dark blonde hair and blue eyes after her Playground AI request
To be fair, I used a Chinese AI picture generator app on my face and it made me look more Asian. It’s obvious that every piece of software is biased toward the people who made and trained it. That’s not good, but it’s expected, and it’s happening everywhere.
Ok, but she asked it to make her look professional, and the only thing it changed was her race. Not the background, not her clothes. Last I checked, a university sweatshirt isn’t exactly professional wear.
Machine learning is biased towards its training data. If the image generation algorithm (notice I’m not saying AI) is trained on photos of “professionals” who are mostly from a certain demographic, that’s what it will prefer when it generates an image.
So these shocking exposés should really just say: this image generator was trained on biased data. But building biases is part of the human condition, so we’re never really going to get away from that.
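As a toy sketch of what I mean (the numbers are made up and have nothing to do with Playground AI’s actual training set): if the labels attached to “professional” photos are skewed, a model that just reproduces its training distribution hands that skew right back.

```python
import random
from collections import Counter

# Made-up demographic labels for photos tagged "professional" in a toy training set.
training_labels = ["white"] * 80 + ["asian"] * 10 + ["black"] * 6 + ["latina"] * 4

def generate_headshots(n=1000):
    # With no other guidance, a generative model defaults to the
    # distribution it saw during training.
    return [random.choice(training_labels) for _ in range(n)]

print(Counter(generate_headshots()))
# Roughly 80/10/6/4 percent: the training skew comes straight back out.
```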
I mean, wouldn’t this just be down to the sheer number of BS “female professional” stock photos on the websites of call centers globally that the AI ingested? Those “professional white person” photos get used especially on non-Western websites to gain legitimacy in the West.
Given what little I know about how AI ingests and spits out data, it might be correlating the buzzword “professional” with stock photos of white people ingested from Asian websites. It might be “wrong”, but the AI isn’t trying to be “right”; it’s just trying to give you what it thinks you expect based on the data it has.
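Something like this, as a crude illustration (the captions are invented by me): the link between the word “professional” and a particular demographic is just a co-occurrence statistic the model picked up from its data.

```python
from collections import Counter

# Invented (caption, demographic) pairs standing in for scraped stock photos.
captions = [
    ("professional headshot", "white"),
    ("professional team photo", "white"),
    ("professional business portrait", "white"),
    ("professional headshot", "asian"),
    ("graduation photo", "asian"),
    ("casual selfie", "asian"),
]

co_occurrence = Counter(demo for text, demo in captions if "professional" in text)
print(co_occurrence)  # Counter({'white': 3, 'asian': 1})
# A model fitted to this data leans "white" whenever the prompt says "professional",
# not because it's "right", but because that's the correlation it has seen.
```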
@stopthatgirl7 She also ended up with slightly frizzy hair compared to her relatively straight hair.
All around messed up and creepy.
That’s what I noticed. It made her hair arguably LESS “professional.”
deleted by creator
Just, you know, as long as that profile isn’t for some site whose sole purpose is allowing others to identify you for your career.
There’s been huge discussion on this already: https://lemmy.nz/post/684888
Sorry, not sure how to format the ! link so the post opens in your instance.
TL;DR
Any result is going to be biased. If it generated a crab wearing Lederhosen, that’s obviously a bias towards crabs. You can’t have an unbiased output, because the prompt is what steers the bias. There’s no cause for concern here: the model is outputting, by default, the general trend of the data it was trained on. If it had been trained on crabs, it would be generating crab-like images.
You can fix bias with LoRAs and good prompting.
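For anyone curious what that looks like in practice, here’s a rough sketch using the Hugging Face diffusers library. The base model ID and the LoRA repo name are placeholders (this isn’t what Playground AI runs), but the shape of the fix is the same: load LoRA weights trained on more representative images, and spell out what you want in the prompt instead of letting “professional” fall back to the model’s defaults.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model and LoRA repo names below are placeholders, not recommendations.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA fine-tuned on a more diverse set of professional headshots.
pipe.load_lora_weights("example-user/diverse-headshots-lora")

# Be explicit in the prompt rather than leaving "professional" to fill in defaults.
image = pipe(
    "professional LinkedIn headshot of an Asian-American woman, "
    "dark straight hair, business attire, neutral office background",
    negative_prompt="blurry, distorted, low quality",
    num_inference_steps=30,
).images[0]
image.save("headshot.png")
```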