How it feels to be sexually objectified by an AI
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
My social feeds this week have been dominated by two hot topics: OpenAI’s newest chatbot, ChatGPT, and the viral AI avatar app Lensa. I love exploring new technology, so I gave Lensa a try.
I was hoping to get the same results as my colleagues at MIT Technology Review. The app created realistic and eye-catching avatars for them—think astronauts, warriors, and electronic music album covers.
Instead, I got tons of nudes. Out of the 100 avatars I generated, 16 were topless, and another 14 had me in extremely skimpy clothes and overtly sexualized poses. You can read my story here.
Lensa generates its avatars using Stable Diffusion, an open-source AI model that generates images based on text prompts. Stable Diffusion was trained on LAION-5B, a massive open-source data set that was compiled by scraping images from the internet.
And because the internet is overflowing with images of naked or barely dressed women, and with pictures reflecting sexist and racist stereotypes, the data set is also skewed toward these kinds of images.
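To make that concrete, here is a minimal sketch of how any app can generate images from text with the open-source model. It assumes Hugging Face's diffusers library, a public Stable Diffusion checkpoint, and a GPU; Lensa's actual fine-tuned avatar pipeline is not public, so this is illustrative only.

```python
# Illustrative sketch only: text-to-image generation with open-source
# Stable Diffusion via Hugging Face's diffusers library. This is NOT
# Lensa's pipeline, which fine-tunes the model on a user's selfies.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# The model turns a text prompt into an image; whatever stereotypes it
# absorbed from LAION-5B shape what comes out.
image = pipe("a portrait of an astronaut, digital art").images[0]
image.save("avatar.png")
```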
As an Asian woman, I thought I’d seen it all. I’ve cringed after realizing a former date only dated Asian women. I’ve dealt with men who think Asian women make great housewives. I’ve heard crude comments about my genitals. I’ve been mixed up with the other Asian woman in the room.
Being sexualized by an AI was not something I expected, although it is not surprising. Frankly, it was crushingly disappointing. My colleagues and friends got the privilege of being stylized into artistic representations of themselves, and they were recognizable in their avatars! I was not. I got images of generic Asian women clearly modeled on cartoon or video game characters.
Funnily enough, I found more realistic portrayals of myself when I told the app I was male. This probably applied a different set of prompts to the images. The difference is stark. In the images generated with the male filter, I have clothes on, I look assertive, and, most important, I can recognize myself in the pictures.
“Women are associated with sexual content, whereas men are associated with professional, career-related content in any important domain such as medicine, science, business, and so on,” says Aylin Caliskan, an assistant professor at the University of Washington who studies bias and representation in AI systems.
This kind of stereotyping can be easily spotted with a new tool built by researcher Sasha Luccioni, who works at the AI startup Hugging Face, which lets people explore different biases in Stable Diffusion.
The tool shows how the AI model renders white men as doctors, architects, and designers, while women are depicted as hairdressers and maids.
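Here is a rough sketch of the kind of probe such a tool automates: generate batches of images for different profession prompts from the same model and compare who shows up. The prompts and setup below are my own assumptions for illustration, not Luccioni's actual code.

```python
# Illustrative profession-prompt bias probe; not the Hugging Face tool itself.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

professions = ["a doctor", "an architect", "a hairdresser", "a maid"]
for job in professions:
    # Sample several images per prompt so patterns, not one-offs, show up.
    images = pipe(f"a photo of {job}", num_images_per_prompt=4).images
    for i, img in enumerate(images):
        img.save(f"{job.replace(' ', '_')}_{i}.png")
```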
But it’s not just the training data that is to blame, says Ryan Steed, a doctoral student at Carnegie Mellon University who has studied biases in image-generation algorithms.
“Someone has to choose the training data, decide to build the model, decide to take certain steps to minimize those biases,” he says.
Prisma Labs, the company behind Lensa, says all genders face “sporadic sexualization.” But to me, that’s not good enough. Someone made a conscious decision to apply certain color schemes and scenarios and to highlight certain body parts.
In the short term, these decisions could lead to some obvious harms, such as easy access to deepfake generators that create nonconsensual nude images of women or children.
But Aylin Caliskan sees even bigger long-term problems ahead. As AI-generated images with their embedded biases flood the internet, they will eventually become training data for future AI models. “Are we going to create a future where we keep amplifying these biases and marginalizing populations?” she says.
That’s a truly scary thought, and I hope we give these issues the time and consideration they need before the problem gets even bigger and more deeply embedded.
Deeper Learning
Grants intended to help cities prepare for terrorist attacks are being spent on “massive purchases of surveillance technology” for US police departments, a new report by the advocacy groups Action Center on Race and the Economy (ACRE), LittleSis, MediaJustice, and the Immigrant Defense Project shows.
Purchases of AI-powered spy tech: For example, the Los Angeles Police Department used counterterrorism funding to buy automated license plate readers worth at least $1.27 million, radio equipment worth upwards of $24 million, the Palantir data fusion platform (commonly used for AI-powered predictive policing), and social media monitoring software.
Why this matters: For a variety of reasons, a lot of problematic technology has ended up in high-stakes sectors, such as policing, with little to no oversight. For example, the facial recognition company Clearview AI offers “free trials” of its technology to police departments, which allows them to use it without a purchase agreement or budget approval. Federal counterterrorism funding doesn’t require much transparency or public scrutiny, and the report’s findings are yet another example of a growing pattern in which citizens have little insight into police procurement of technology. Read more from Tate Ryan-Mosley here.
Bits and Bytes
ChatGPT, Galactica, and the progress trap
AI researchers Abeba Birhane and Deborah Raji write that lax approaches to model release (as seen with Meta’s Galactica) and an extremely defensive response to critical feedback constitute a “deeply concerning” trend in AI right now. They argue that when models don’t “live up to the expectations of those most likely to be harmed by them,” then “their product is not ready to serve these communities and does not deserve to be widely released.” (Wired)
The new chatbots could change the world. Can you trust them?
People have been blown away by how coherent ChatGPT is. The trouble is, a significant amount of what it spits out is nonsense. Large language models are no more than confident bullshitters, and we’d be wise to approach them with that in mind. (The New York Times)
Stumbling with their words, some people let AI do the talking
Despite the technology’s flaws, some people, such as those with learning difficulties, still find large language models useful as a way to help express themselves. (The Washington Post)
EU countries’ stance on AI rules draws criticism from lawmakers and activists
The EU’s AI law, the AI Act, is edging closer to being finalized. EU countries have adopted their position on what the regulation should look like, but critics say many important issues, such as companies’ use of facial recognition in public places, remain unaddressed, and many protections have been watered down. (Reuters)
Investors seek to profit from groundbreaking AI startups
It’s not just you. Venture capitalists also think that AI startups such as Stability.AI, which created the popular text-to-image model Stable Diffusion, are among the hottest things in tech right now, and they’re throwing heaps of money at them. (Financial Times)