Beautifully Biased: Trust, Truth & Trouble in Generative AI
Dr. Richa Singh

Today's text-to-image models produce stunning visuals that mask two layers of bias. Our research shows how technical failures in logical composition, counting, spatial relations, and attribute binding intersect with cultural biases involving skin tone, ethnicity, and representation. Models fail catastrophically when combining visual primitives, while simultaneously amplifying societal stereotypes and erasing minority identities. Training datasets underrepresent both compositional complexity and cultural diversity, and evaluation metrics privilege aesthetic appeal over logical accuracy and fair representation alike. These biases compound, so underrepresented groups face logical misrepresentation on top of cultural erasure. Current debiasing methods address neither the architectural flaws nor the cultural harms. Achieving trustworthy AI requires fundamental breakthroughs that tackle both the computational and the social dimensions of bias.

©2025 by Plaksha Academic Conference.