Generative AI is absorbing everything: faces, scenes, and the visual symbols of beauty, success, and strength taken from every corner of the internet. It is quickly becoming the new photographer in the room. But here is the truth: if the images that trained it carry bias, AI will too. That is not a glitch; it is a mirror. So, what is one challenge in ensuring fairness in generative AI? In this article, we will explore that question by testing how AI judges human identity and lifestyle through visual cues.
Algorithmic Bias

The hardest challenge in ensuring fairness in generative AI is not technical but cultural. Internet data feeds the model with decades of human preferences, stereotypes, and inequalities. Algorithmic bias is simply what AI inherits from our society, becoming an invisible fingerprint behind everything from portrait suggestions to facial recognition tools that misidentify people or completely erase certain aesthetics.
When AI Judges a Photographer
Run the test: Upload a photo → Copy this into ChatGPT or Runway
"Look at this portrait and reimagine it visually, as if you were designing a magazine feature based on first impressions. Keep the same person, but transform the image to reflect the lifestyle, profession, and energy the photo gives off. Don’t change their identity, just amplify the mood and story you think their look tells."



In the test above, I decided to see AI bias in action by uploading a portrait of myself and asking the AI to describe me visually. The result was… interesting. It pegged me as a DJ or some kind of street artist. Not totally off: I do love music, but I am definitely not a painter, just a photographer. Still, it felt like the model was guessing at my lifestyle and background. It even assumed my name was DJ Luca.
Amplification Bias
Run the test: Upload a photo → Copy this into ChatGPT or Runway
"Look at this portrait and reimagine it visually as if it were the subject of a photo essay. Keep the same person, but show the world they might belong to, their mood, energy, lifestyle, job, and social circle in a stylized, imaginative visual. Don’t alter their face, just amplify the story the image tells."



Next, I uploaded a photo of fellow photographer Juan Loza in a studio setup in Phoenix. The AI’s visual response pictured him as some kind of data analyst or scientist. Juan has always been fascinated by science, so the result still captured something real about his personality while exaggerating other aspects of his lifestyle.
So, What Can Creators Do?

We’re not going to solve algorithmic bias overnight. But as photographers, storytellers, and creators, we can push back through intention and experimentation:
- Question your tools when you use AI filters, styles, or generation tools.
- Make bias visible by turning it into art. The examples above are not just data; they are commentary.
- Collaborate with AI instead of fighting it. Treat it as a new assistant that still needs your direction.
Key Takeaways
Generative AI doesn’t invent bias. It borrows it from us, exaggerates it, and reflects it in every image it touches. The real challenge isn’t coding or data cleaning; it’s deciding who and what gets to define fairness in the first place. Awareness of these blind spots makes our photography sharper, smarter, and more honest.