China Tightens AI Chatbot Rules: New Restrictions Target Gambling and Self-Harm Content

Published:
2025-12-29 11:58:45

China releases new rules to restrict AI chatbots from promoting gambling, self-harm

Beijing's latest regulatory move hits AI chatbots—and the timing couldn't be more ironic.

The New Guardrails

Chinese authorities just dropped fresh rules specifically targeting AI chatbots. The focus? Shutting down any algorithmic promotion of gambling or content that glorifies self-harm. It's a direct clampdown on how these systems generate and suggest information.

No more creative interpretations or 'helpful' links to offshore betting sites. No more sympathetic algorithms nudging vulnerable users toward dark corners. The directive is blunt: filter it out or face the consequences.

The Compliance Clock is Ticking

Developers and platforms are now scrambling. Retraining models, overhauling content filters, and implementing new safety protocols—all under the watchful eye of regulators. It's a massive technical and operational lift, and the deadline isn't flexible.

This isn't just about adding a keyword block. It's about embedding ethical guardrails deep into the AI's decision-making process. A misstep could mean more than a fine; it could mean losing the right to operate in the world's largest internet market.

The Bigger Picture—and a Finance Jab

This move is another piece in China's broader puzzle of tech governance. It follows a pattern of assertive regulation, shaping digital ecosystems with a firm hand. For the global AI industry, it's a case study in state-level intervention.

And for those in finance watching from the sidelines? It's a stark reminder that in the race for AI supremacy, the biggest bottleneck might not be processing power or talent—it's navigating a regulatory landscape that can change faster than a meme coin crashes. Innovation moves at silicon speed, but compliance often travels at bureaucratic pace.

The final line is clear: in China's digital future, even the algorithms need to toe the party line.

China’s proposals are expected to protect minors from self-harm

The draft rules, released on Saturday by the Cyberspace Administration of China, target what the regulator terms “human-like interactive AI services,” according to a CNBC translation of the Chinese-language document.

The draft rules contain several proposals. For example, AI chatbots cannot generate content that encourages self-harm or suicide, nor engage in verbal violence or emotional manipulation that could damage users’ mental health.

Additionally, AI chatbots are barred from creating obscene, violent, or gambling-related content. Under the draft rules, if a user raises the topic of suicide, the AI company must have a human take over the conversation and immediately contact the user’s guardian or a designated individual.

The draft rules also propose that minors obtain guardian consent for emotional-companionship use, with limits on usage time. Under the new rules, AI platforms would be expected to determine whether a user is an adult or a minor even when the user does not disclose an age. When in doubt, platforms must apply minor-mode settings, while allowing users to appeal.

Once finalized, these rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics, according to NYU School of Law professor Winston Ma. The developments come as businesses have rapidly rolled out AI companions and digital celebrities.

Comparing the proposals with China’s 2023 generative AI regulation, Ma said this version “highlights a leap from content safety to emotional” safety.

The proposals come as two Chinese AI chatbot startups, Z.ai and Minimax, filed this month for initial public offerings (IPOs) in Hong Kong. Minimax is best known internationally for its Talkie AI app, which lets users chat with virtual characters.

According to CNBC, the app and its domestic Chinese version, known as Xingye, accounted for more than a third of the firm’s revenue in the first three quarters of the year, with an average of over 20 million monthly active users during that time.

As for Z.ai, also known as Zhipu, it filed under the name Knowledge Atlas Technology and did not disclose its monthly active users. However, the company said its technology runs on about 80 million devices, including smartphones, personal computers, and smart vehicles.

As previously reported by Cryptopolitan, the two AI startups, both backed by Alibaba and Tencent, aim to go public in early January next year on the Hong Kong Stock Exchange.

