In a bold move amid escalating U.S.-China tech rivalries, OpenAI announced on October 7, 2025, that it had banned multiple ChatGPT accounts suspected of ties to Chinese government entities. The account holders allegedly used the AI model to draft proposals and promotional materials for sophisticated surveillance tools, raising alarms about authoritarian exploitation of generative AI. The disclosures, detailed in OpenAI’s latest threat intelligence report, reveal how state actors are harnessing tools like ChatGPT not for innovation, but to streamline repression and the monitoring of dissidents.
The most concerning activities centered on targeting vulnerable populations. One banned account, accessed via VPN from China, prompted ChatGPT to help craft a proposal for a “High-Risk Uyghur-Related Inflow Warning Model.” The tool was meant to analyze travel movements and police records to track “high-risk” individuals, including members of the Uyghur Muslim minority, a group the U.S. has long said faces genocide under Chinese policies, a charge Beijing vehemently denies. Another account sought assistance in designing project plans and marketing materials for a social media “probe” capable of scanning platforms such as X, Facebook, Instagram, Reddit, TikTok, and YouTube for “extremist speech” tied to ethnic, religious, or political topics. The user explicitly noted the work was for a government client, underscoring potential state backing.
Additional probes included attempts to unmask critics: one user asked ChatGPT to identify funding sources for an X account lambasting the Chinese government, while another targeted organizers of a Mongolian petition drive. Crucially, OpenAI emphasized that while ChatGPT aided in planning and documentation, the model was not used for actual surveillance implementation—its safeguards refused overtly malicious requests lacking legitimate uses. “What we saw and banned in those cases was typically threat actors asking ChatGPT to help put together plans or documentation for AI-powered tools, but not then to implement them,” explained Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team.
OpenAI’s swift bans are part of a broader crackdown, with over 40 networks disrupted since February 2024. The report also flags misuse by Russian and North Korean actors, who refined malware code, phishing lures, and influence operations using the model—such as generating video prompts for a Russian “Stop News” campaign on YouTube and TikTok. Chinese officials pushed back hard, with embassy spokesperson Liu Pengyu dismissing the claims as “groundless attacks and slanders against China,” touting Beijing’s “AI governance system with distinct national characteristics” that balances innovation, security, and inclusiveness.
These incidents illustrate AI’s double-edged role in geopolitics. As Michael Flossman, head of OpenAI’s threat intelligence, noted, adversaries are “routinely using multiple AI tools, hopping between models for small gains in speed or automation,” enhancing existing tradecraft rather than inventing new threats. Yet these activities signal a “direction of travel” toward more efficient authoritarian control, from Uyghur tracking to quelling dissent abroad. With China investing billions in pursuit of AI supremacy, evidenced by DeepSeek’s cost-effective R1 rival to ChatGPT, the U.S. faces mounting pressure to restrict tech exports and bolster safeguards.
OpenAI’s transparency, while commendable, highlights gaps in global AI ethics. As Nimmo observed, “There’s a push within the People’s Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring.” Without international norms, such abuses could proliferate, turning AI from a democratizing force into a tool of division. For researchers and policymakers, this serves as a wake-up call: in the race for AI dominance, vigilance must match velocity.