What if we could just ask AI to be less biased?
Last week, I published a story about new tools, developed by researchers at AI startup Hugging Face and Leipzig University, that let people see for themselves what kinds of inherent biases AI models have about different genders and ethnicities.
Although I have written a lot about how our biases are reflected in AI models, it still jarred me to see exactly how pale, male, and old the humans of AI are. That was especially true for DALL-E 2, which generates white men 97% of the time when given prompts like “CEO” or “director.”
And the problem of bias runs even deeper than you might think into the wider world created by AI. These models are built by American companies and trained on North American data, Federico Bianchi, a researcher at Stanford University, told me.
As the world becomes increasingly flooded with AI-generated images, we will most likely see images that reflect America’s biases, culture, and values. Who knew that AI could eventually become a major tool of American soft power?
So how do we solve these problems? A lot of work has gone into correcting biases in the data sets on which AI models are trained. But two recent research papers propose interesting new approaches.
What if instead of making the training data less biased, you could simply ask the model to give you less biased answers?
A team of researchers at the Technical University of Darmstadt, Germany, and the AI startup Hugging Face developed a tool called Fair Diffusion that makes it easier to tweak AI models to generate the types of images you want. For example, you could generate stock photos of CEOs in different settings, then use Fair Diffusion to swap out the white men in the images for women or people of different ethnicities.
As the Hugging Face tools show, AI models that generate images on the basis of image-text pairs in their training data default to very strong biases about occupation, gender, and ethnicity. The German researchers’ Fair Diffusion tool is based on a technique they developed called semantic guidance, which allows users to steer how the AI system generates images of people and to edit the results.
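At its core, semantic guidance adds an extra steering term to the noise prediction at each denoising step: alongside the usual classifier-free guidance toward the prompt, it nudges the image toward (or away from) an edit concept such as “female person,” applying the nudge only where that concept’s signal is strong so the rest of the image stays close to the original. The sketch below is a simplified illustration of that idea, not the authors’ implementation; the function name, parameters, and threshold scheme are hypothetical:

```python
import numpy as np

def semantic_guidance_step(eps_uncond, eps_prompt, eps_edit,
                           guidance_scale=7.5, edit_scale=5.0,
                           threshold=0.0, reverse=False):
    """Combine three noise predictions from a diffusion model:
    unconditional, prompt-conditioned, and edit-concept-conditioned.

    Returns a guided noise estimate that follows the prompt while
    steering the sample toward (or away from) the edit concept.
    """
    # Standard classifier-free guidance toward the text prompt.
    guided = eps_uncond + guidance_scale * (eps_prompt - eps_uncond)

    # Direction in noise space that encodes the edit concept.
    direction = eps_edit - eps_uncond
    if reverse:
        # Steer away from the concept instead of toward it.
        direction = -direction

    # Apply the edit only where its influence exceeds a threshold,
    # which keeps unrelated regions of the image largely unchanged.
    mask = (np.abs(direction) >= threshold).astype(direction.dtype)
    return guided + edit_scale * mask * direction
```

In a real pipeline this combination would run inside the denoising loop, with the three predictions coming from the same model under different text conditionings; the masking is what lets the edit change one attribute while leaving the overall composition intact.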
Kristian Kersting, a professor of computer science at TU Darmstadt who was involved in the study, says the edited image stays very close to the original.