The Party’s AI: How China’s New AI Systems Are Reshaping Human Rights
This research brief by the Australian Strategic Policy Institute (ASPI) makes for depressing reading. In it, the authors methodically examine how China has implemented AI in a number of different ways, mostly internally but with external implications as well.
Internally, China is regulating and applying AI in ways that enforce its official policies. For me, the most evocative example was in justice (Ch.2). Surveillance and policing are, of course, boosted with facial recognition, omnipresent cameras, phone trackers, and biometric databases, and the government plans to build in capabilities to time and coordinate government responses. But AI is also used in courts and prosecutors’ offices, where it is meant to play an auxiliary role in an understaffed system. In practice, however, “AI designed to aid the prosecution might do so in ways that aren’t consistent with due process and the fair treatment of defendants” (p.38). The 206 System used in Shanghai “is able to make sentencing recommendations, review evidence and keep tabs on ‘deviations’ by prosecutors”; if the prosecutor disagrees with the system’s recommended decisions, the prosecutor must explain the deviation and send a completed approval form to the court leader (p.40). Additionally, “Shenzhen in 2024 announced the country’s first AI-assisted trial oversight system for judges,” and this system “helps to generate judgments for confirmation by the judge” (p.40).
Similarly, China is now using AI for surveillance targeting ethnic minorities (Ch.4), censoring politically sensitive images (Ch.1), and censoring online discourse more generally (Ch.3). Censorship involves not just suppressing information but also enforcing vagueness: for instance, when presented with “images related to the Tiananmen Square massacre” (p.19), Chinese LLMs avoided key terms such as “crackdown,” “reform,” or even “Beijing,” and “tended to frame the event as a necessary measure to maintain social stability”; US-based LLMs ChatGPT and Gemini were less likely to do this (p.19). The authors found similar results when asking LLMs to describe images of Falun Gong and the Dalai Lama. Additionally, results varied depending on the language used to query the LLM: English, Chinese (simplified), or Chinese (traditional) (p.27).
Such censorship doesn’t just affect domestic Chinese audiences; it also affects others using these systems. That’s true of anyone interacting with Chinese LLMs outside China’s borders. But the report also describes a specific case in which Chinese AI deeply affects those beyond its borders: AI-enabled fishing platforms (Ch.5). “Fleets of Chinese fishing trawlers prowl the world’s oceans and coasts, pulling in enormous catches at industrial scale,” the report states (p.55); China’s distant-water fleet of fishing vessels accounts for “around 15% of global marine capture” (p.56). Sometimes these fleets poach from other countries’ waters. In this overfishing, they are aided by “AI-enabled fishing forecasting platforms” (p.55) that combine AI forecasting with satellite data to increase accuracy and fishing hauls.
In conclusion, the report argues that the Chinese government’s goal is “ensuring that global AI standards benefit Chinese companies and China’s authoritarian political system” (p.62). The authors make several policy recommendations that seem broadly positive (e.g., “Promote transparency around AI vendors”) but unlikely to be pursued by the global community.
Overall, I found the report to be both insightful and depressing. A decade ago, I would have read it as cataloguing China’s problems, and in some ways as good news for the USA: AI censorship and conformity breed inflexibility, caution, and conservatism, while the West preserved its advantages in innovation by lowering the costs of failure and rewarding flexible thinking. Or so I would have told myself. But the West is also growing more authoritarian. The US system is different, but as Elon Musk’s well-publicized tinkering with Grok suggests, it allows similarly problematic uses of AI.