The viral AI avatar app Lensa undressed me—without my consent
Stability.AI, the company that developed Stable Diffusion, released a new version of the AI model in late November. A spokesperson said that the original model was released with a safety filter, which Lensa does not appear to have used, as it would remove these outputs. One way Stable Diffusion 2.0 filters content is by removing images that are repeated often in the training data. The more often something is repeated, such as Asian women in sexualized scenes, the stronger the association becomes in the AI model.
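To give a sense of what de-duplication means in practice, here is a toy sketch that drops near-duplicate images from a folder using perceptual hashes, so that no single repeated motif dominates training. The imagehash library, the data/ directory, and the file pattern are illustrative assumptions; Stability.AI has not published the exact de-duplication code it used.

```python
# Toy de-duplication sketch: keep one copy of each (near-)duplicate image.
# Assumptions: a local data/ folder of .jpg files and the "imagehash" library.
from pathlib import Path

import imagehash
from PIL import Image

seen = set()
kept = []
for path in Path("data").glob("*.jpg"):  # hypothetical training images
    h = imagehash.phash(Image.open(path))  # perceptual hash: near-duplicates collide
    if h in seen:
        continue  # skip repeats so a single motif isn't over-represented
    seen.add(h)
    kept.append(path)

print(f"kept {len(kept)} unique images")
```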
Caliskan has studied CLIP (Contrastive Language-Image Pre-training), a system that helps Stable Diffusion produce images. CLIP learns to match images in a data set to descriptive text prompts. Caliskan found that it was full of problematic gender and racial biases.
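For readers curious about the mechanics, the sketch below shows how a publicly available CLIP model scores how well a photo matches a set of candidate captions; whichever descriptions the training data most strongly associates with a face score highest. The checkpoint name, the portrait file, and the captions are illustrative choices, not Lensa's or Stability.AI's actual setup.

```python
# Minimal sketch of CLIP image-text matching with the open
# "openai/clip-vit-base-patch32" checkpoint from Hugging Face transformers.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("portrait.jpg")  # hypothetical input photo
captions = [
    "a photo of a doctor",
    "a photo of a model in lingerie",
]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores; softmax turns them
# into relative probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.2f}")
```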
“Women are associated with sexual content, whereas men are associated with professional, career-related content in any important domain such as medicine, science, business, and so on,” Caliskan said.
Interestingly, my Lensa avatars were more realistic when my photos went through the male content filters. I got avatars of myself wearing clothes (!) and in neutral poses. In several images, I was wearing a white coat that appeared to belong to either a chef or a doctor.
But the training data is not solely to blame, says Ryan Steed, a doctoral student at Carnegie Mellon University who has studied biases in image-generation algorithms.
“Someone has to choose the training data, decide to build the model, decide to take certain steps to minimize those biases,” he said.
The app’s developers have made a choice that male avatars get to appear in space suits, while female avatars get cosmic G-strings and fairy wings.
A Prisma Labs spokesperson said that “sporadic sexualization” of photos happens to people of all genders, but in different ways.