This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
While the US and EU may differ on how to regulate technology, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.
As they understand it, social scoring is a practice in which authoritarian governments, China in particular, rank people's trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back debts. Essentially, it is seen as a dystopian superscore assigned to each citizen.
The EU is currently negotiating a new law called the AI Act, which would ban member states, and perhaps even private companies, from implementing such a system.
The trouble is, "it's essentially banning thin air," says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.
Back in 2014, China announced a six-year plan to build a system that rewards actions that build trust in society and punishes actions that don’t. Eight years on, a draft law has been published that attempts to codify past social credit pilot programs and guide future implementation.
There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. People are now able to opt out, and the local government has removed some of the controversial criteria.
But these have not gained wider traction elsewhere and do not apply to the entire population of China. There is no comprehensive, nationwide social credit system with algorithms that rank people.
As my colleague Zeyi Yang explains, "The reality is, that terrifying system doesn't exist, and the central government doesn't seem to want to build it, either."
What has been implemented is mostly pretty low-tech. "It's a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values," Zeyi writes.
Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn't find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human "information collectors" would walk around town and write down people's misbehavior using pen and paper.
The myth originates from a pilot program called Sesame Credit, developed by the Chinese tech company Alibaba. Brussee says it was an attempt to assess people's creditworthiness using customer data at a time when most Chinese people didn't have credit cards. The effort became conflated with the social credit system as a whole in what Brussee describes as a "game of Chinese whispers." And the misunderstanding took on a life of its own.
The irony is that while US and European politicians depict this as a problem rooted in authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services.
In Amsterdam, for example, the government has used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming criminals. Authorities claim the aim is to prevent crime and help offer better, more targeted support.
But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops from police, home visits from authorities, and stricter supervision from schools and social workers.
It's easy to take a stand against a dystopian algorithm that doesn't really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans don't even have a federal privacy law that would offer basic protections against algorithmic decision-making.
There is also a dire need for governments to conduct honest, thorough audits of the way governments and companies use AI to make decisions about our lives. They may not like what they find, but that makes it all the more important for them to look.
A bot that watched 70,000 hours of Minecraft could unlock AI’s next big thing
The research firm OpenAI has built an AI that trained on 70,000 hours of videos of people playing Minecraft and can play the game better than any AI before it. It's a breakthrough for a powerful new technique, called imitation learning, that could be used to train machines to carry out a wide range of tasks by first watching humans do them. It also raises the possibility that sites like YouTube could be a vast, untapped source of training data.
Why it's a big deal: Imitation learning can be used to train AI to control robot arms, drive cars, or navigate websites. Some people, such as Meta's chief AI scientist, Yann LeCun, think that watching videos will eventually help us train AI with human-level intelligence. Read the full story by Will Douglas Heaven here.
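In its simplest form, imitation learning treats the humans' recorded behavior as supervised training data: each moment of video becomes an (observation, action) pair, and a model learns to predict the action the human took. The toy sketch below (entirely hypothetical, not OpenAI's actual system, which works on raw video at vastly larger scale) shows this "behavioral cloning" idea with a one-dimensional observation and a two-action policy:

```python
import math
import random

# Behavioral cloning, the simplest form of imitation learning:
# record what an "expert" does, then fit a supervised model that
# predicts the expert's action from the observation.
# Toy setup (hypothetical): a 1-D observation and two actions
# (0 = left, 1 = right); the expert goes right when the observation
# is positive.

random.seed(0)
obs = [random.uniform(-1, 1) for _ in range(200)]        # recorded observations
actions = [1.0 if x > 0 else 0.0 for x in obs]           # expert's demonstrated actions

# A tiny logistic-regression "policy", trained by gradient descent
# to imitate the demonstrations.
w, b = 0.0, 0.0
lr = 1.0
for _ in range(500):
    gw = gb = 0.0
    for x, a in zip(obs, actions):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))          # predicted P(action=1)
        gw += (p - a) * x                                 # gradient of log loss
        gb += (p - a)
    w -= lr * gw / len(obs)
    b -= lr * gb / len(obs)

def policy(x):
    """Act as the expert would most likely act for observation x."""
    return int(1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5)

print(policy(0.7), policy(-0.7))  # mimics the expert: 1 (right), 0 (left)
```

Scaling this same recipe from a toy classifier to a deep network watching thousands of hours of gameplay is, roughly, what makes results like the Minecraft agent possible.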
Bits and bytes
Meta’s gaming AI can make and break alliances like humans
Diplomacy is a popular strategy game in which seven players compete for control of Europe by moving pieces around a map. The game requires players to talk to one another and detect when others are bluffing. Meta's new AI, called Cicero, successfully tricked humans in order to win.
It's a big step forward for AI that can help solve complex problems, such as planning routes around heavy traffic and negotiating contracts. But I won't lie: it's also a terrifying thought that an AI can so successfully deceive humans. (MIT Technology Review)
We may run out of data to train AI language programs
The trend of creating ever-larger AI models means we need ever-larger data sets to train them. The trouble is, we could run out of suitable data by 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization. That should push the AI community to figure out how to do more with existing resources. (MIT Technology Review)
Stable Diffusion 2.0 is out
The text-to-image AI Stable Diffusion has been given a big facelift, and its outputs look much more polished and realistic than before. It can even do hands. The pace of Stable Diffusion's development is breathtaking: its first version launched only in August. We are likely to see even more progress in generative AI in the coming year.