AI Governance in China: Strategies, Initiatives, and Key Considerations

Written By

James Gong

Legal Director
China

I am a Legal Director based in Hong Kong and lead the China data protection and cybersecurity team.

Harry Qu

Associate
China

I am a data associate in our Beijing office. My practice focuses on data privacy, cybersecurity, TMT, as well as antitrust and anti-competition law.

Hunter Dorwart

Associate
UK

I am an associate in our Privacy & Data Protection Group in London.

* This article is reproduced from Practical Law with the permission of the publishers.

As artificial intelligence (AI) technology continues to advance, legislative discussions and practice regarding AI have proliferated. The EU's AI Act is coming to fruition, and China is actively exploring options for regulating AI technology. As the first in a series of articles on AI, this article addresses the evolution and pillars of China's emerging system for AI governance. It discusses how China's new regulatory approach for AI fits within the country's larger data protection framework and introduces the main pillars of China's AI governance regime, including a table summary of key compliance considerations arising from a patchwork of AI-related laws, regulations, policies, and standards.

If you would like to subscribe for our newsletters and be notified of our events on China Technology, Media, and Telecommunications regulatory updates, please contact James Gong at [email protected].

Background

In recent years, artificial intelligence (AI) has attracted enormous attention from companies, regulators, and policymakers across the world, who have all become increasingly convinced of the technology's strategic importance and potential pitfalls. While the term continues to be used in a broad context, a growing number of companies have begun to incorporate complex algorithms trained through machine learning models into their services to expand their functionality and performance. Much of this predates the current moment, with concerns over algorithmic decision-making, big data, and behavioural profiling having been a crucial dimension of policy debates for some time. What is new is the degree to which AI services (including large language models (LLMs) such as ChatGPT) can mirror human output with increasing sophistication, and the alarm regulators have signalled over their impact.

In China (PRC), AI has drawn considerable attention, with many tech companies announcing plans to introduce their own products and services to consumers and businesses. These products have already raised various legal implications. In 2019, a face-swapping app sparked major privacy concerns because it collected users' facial recognition features to create human images without consent. Celebrities and other public figures have also been victims of numerous deepfake campaigns, which have caused public scandal and prompted extensive discussion in Chinese society about the potential for these technologies to destabilise political order or spread disinformation. Additionally, a growing chorus of complaints has targeted the phenomenon of algorithm-based price discrimination (known in Chinese as "big data killing" (大数据杀熟)), as more and more individuals report unfair prices online.

These issues have sparked a response from regulators, who since 2016 have put in place a complex legal framework for cybersecurity and data protection and have since turned their attention to novel problems raised by the variety of AI products on the market. How will this new regulatory approach for AI fit within China's larger data protection framework, and what are its key components? This Practice Note addresses the evolution and pillars of China's emerging system for AI governance.

The current attitude of Chinese regulators towards AI is both supportive and prudential, reflecting the balance of promoting the use and value of AI while placing safeguards around the technology to prevent social and economic harms. Over time, the Chinese government has increased its oversight of these technologies, but in a targeted manner, often preferring to set rules on specific AI technologies vertically rather than on the entire industry in order not to stifle innovation. Apart from the high-level cybersecurity and data protection requirements (as well as the specific provisions issued by the Cyberspace Administration of China (CAC)), most regulatory documents are non-mandatory and still in an experimental phase. In the long term, more concrete requirements will be included in a future AI law and in provincial-level regulations. It is predicted that regulatory requirements for AI will become more and more refined over time and will likely extend to other sectors.

Stages of AI Regulation in China

Alongside other notable policy developments around data and cybersecurity in the late 2010s, the path towards comprehensive AI regulation has emerged in the context of China's rise as a cyber-power (网络强国). Key to this is the government's intention to set its own rules over cyberspace and leverage the power of digital technologies to achieve its economic and social goals while managing and mitigating known risks. With respect to AI, policymakers in China have for some time viewed the technology as critical for the future development of innovation, smart industrial systems, and digital life.

At least since 2017, the Chinese government has made AI governance a priority and now explicitly uses the term in its numerous planning documents and regulations. Over time, three distinct stages of development have come into focus, each representing a different balance of interests and approaches to the issue. China initially prioritised industry self-regulation, with key planning documents promoting the need for industry to invest in the technology and seize market opportunities. With the solidification of China's cybersecurity regime, policymakers then shifted to issuing national standards to encourage industry participation and set the foundation for future rules. Lastly, in recent years, Chinese regulators (led by the CAC) have promulgated concrete rules to target specific industries and technologies. It is expected that this trend will continue with the creation of an AI law in the near future.

Stage I: Strategic Planning and Industry Self-Discipline (2017 - 2020)

Like other countries, China initially took a "wait and see" approach towards governing AI and preferred to frame the issue through the lens of seizing strategic opportunities and developing a globally competitive industry. At the time, while policymakers began to understand the domestic and international compliance risks associated with the technology (mainly in the context of security), the priority was to establish a foundation where the Chinese AI industry could flourish without strict regulatory barriers.

Strategic Plans

Nonetheless, the Chinese government took a keen early interest in AI and began to lay the groundwork for future regulation. Most notably, in July 2017 the State Council issued the New Generation AI Development Plan (新一代人工智能发展规划) (2017 Plan), which set the strategic goal of making China a global leader in AI by 2030.

Establishment of Advisory Committees

To implement the 2017 Plan, the Chinese government established two institutions: the AI Strategic Advisory Committee (Committee) (新一代人工智能战略咨询委员会) and the AI Planning and Promotion Office (新一代人工智能规划推进办公室). Led by the Ministry of Science and Technology, these bodies became the first institutions solely responsible for overseeing AI policy on the national level. In 2019, the Committee created an additional specialist body to strengthen research on the legal, ethical, and social issues related to next-generation AI technologies, as well as deepen international co-operation.

Industry Self-Discipline

Despite this, Chinese regulators during this period did not issue any mandatory rules that specifically targeted AI technologies. Due to the lack of regulation, several organisations (mainly internet companies) established their own internal AI management systems. They also co-operated with other trade bodies to draft self-regulatory frameworks and promote guiding principles for the industry. For instance, in 2019 the Shenzhen AI Industry Association launched the New-Generation AI Industry Self-Regulation Convention (新一代人工智能行业自律公约) while the AI Industry Alliance issued the AI Industry Self-Regulation Convention (人工智能行业自律公约).

Stage II: Voluntary Standards and the Beginning of Regulatory Oversight (2020 - 2022)

After several years of unsupervised development, Chinese authorities stepped up their efforts to formulate industry standards covering algorithmic technologies and machine learning applications. These national standards, which reflect industry best practice and are non-mandatory, ushered in a new stage of AI regulation by refocusing the role of state organisations over the technologies. This stage saw an increased role for direct government participation in AI governance but few concrete regulatory obligations. In other words, China continued to lack a comprehensive legal architecture through which state bodies could impose direct oversight and enforcement.

In August 2020, key administrative bodies (including the CAC and the Ministry of Industry and Information Technology) jointly issued the Guidelines to the Construction of the National New-Generation AI Standard System 2020, which set out eight structural aspects of the AI standard system to guide technical standards formation. These include basic components of other industrial regulations such as software and hardware platforms, key area technologies, and industrial applications, but also regulatory principles like AI security and ethics. The purpose of this guidance was to prepare technical committees to gain first-hand knowledge of how AI technologies work and issue sector-specific standards from the bottom-up.

Indeed, technical standardisation in China often reflects an experimental regulatory attitude, where officials work directly with industry and academic experts to prepare guidance for future implementation of rules. Following these guidelines, the National Information Security Standardization Technical Committee (TC260) (which is the leading standards body for digital technologies), drafted several national standards on machine learning and algorithm ethics. Additionally, a separate standards body TC28 (that is, the National Information Technology Standardisation Technical Committee) formulated its own AI-related standards, including a code of practice (GB/T 42755-2023) for labelling and annotating input data in machine learning models.

The lack of direct regulatory oversight at this stage can be partially explained by the priority of Chinese legislators at the time to finalise and upgrade China's key data protection laws, including the Personal Information Protection Law 2021 (2021 PIPL) and the Data Security Law 2021, which supplement the Cybersecurity Law 2016 (effective 1 June 2017). Regulators had for a while recognised the need to directly address some of the well-known and documented risks of AI but approached combating them through China's larger data governance regime, which was just coming into shape. With the finalisation of these laws, Chinese policymakers could begin to issue AI-specific rules in a more targeted, sector-specific fashion.

Stage III: Direct Supervision of AI Technologies (2022 - present)

In contrast to TC260 and TC28's non-binding standards, the CAC has over the past few years rapidly promulgated technology-specific, mandatory regulations that have drastically changed the landscape of AI governance in China. The three most important of these are:

  • the 2021 Recommendation Algorithm Provisions;
  • the 2022 Deep Synthesis Provisions; and
  • the 2023 Generative AI Measures.

These measures complement and expand on other rules currently in force that AI technologies implicate, such as those related to content moderation, intellectual property, and cybersecurity. One important aspect of this stage is the bespoke approach Chinese regulators have taken towards these technologies. Each of the regulations currently in force has a more targeted scope than other laws that govern companies and technology providers broadly. For instance, the 2021 Recommendation Algorithm Provisions apply narrowly to services that push and order content or make recommendations to users via algorithms, while the 2023 Generative AI Measures target services that can automatically create human-like textual output.

Another noteworthy aspect of this phase is the increasing number of AI regulatory documents developed by provincial governments. Pilot cities such as Shenzhen and Shanghai have taken the lead in issuing regulations to promote the development of AI and create local-level administrative experiments to attract AI investment and political approval from the central government. Enacted in 2022, the Shanghai Regulations on Promoting the Development of AI Industry 2022 and the Shenzhen Special Economic Zone Regulations on AI Industry Promotion 2022 both call for the creation of AI Ethics Committees to oversee AI development, conduct audits and assessments, and promote industrial parks where input and training data may be traded easily and lawfully.

In addition, as China gradually refines its AI regulatory toolkit, agencies that played a major role in the second stage have begun to formulate standards to facilitate the implementation of the 2023 Generative AI Measures. For example, on 29 February 2024 TC260 released TC260-003 Basic Security Requirements for Generative Artificial Intelligence Service, providing enterprises with practical guidance on complying with the 2023 Generative AI Measures, covering training data security, model security, and internal measures, as well as instructions on conducting the security assessment.

Upcoming AI Laws

These regulations reflect growing consensus among regulators that the risks posed by AI deserve special regulatory oversight and that industry self-regulation is insufficient to combat the major social and economic issues emerging from digital markets and technologies. Indeed, the emergence of a distinct paradigm for AI governance in China follows the solidification of the country's data governance regime over the past years, and should be seen as a related, but separate extension of that framework. To this end, Chinese policymakers have indicated in the State Council's 2023 Legislative Work Plan that they will formulate a general AI law in the coming years.

Main Pillars of China's AI Governance Regime

China's AI governance framework attempts to address multiple interconnected yet distinct legal issues originating from the technology, including:

  • The content and information generated and disseminated online.
  • The protection and security of personal data.
  • The use of algorithms to make decisions about individuals.

Content Moderation

The first pillar of China’s AI regulatory regime concerns the governance and management of online content. With respect to AI-generated content (such as the output text of an LLM), regulators will prioritise traceability and authenticity of the content to restrict circulation of information that would violate well-established information services regulations. One way of interpreting this goal is that any content created through an AI tool must be traceable by law enforcement both at its source of creation and dissemination.

  • Key Requirements. Traceability and authenticity take on many dimensions in the context of enforcement. The CAC obligates service providers to add a watermark informing users that a particular piece of content (whether visual, auditory, or textual) is generated by AI. Relatedly, companies have an obligation to mark content that could cause confusion or misidentification by the public (Article 12, 2023 Generative AI Measures).
  • Lawful Content. These requirements build on other recent laws and regulations that aim to ensure that platform companies respect social and moral norms and delineated political directions and do not enable the spread of undesirable information. Indeed, recent enforcement and guidelines from the CAC indicate that this includes, for instance, unsubstantiated rumours, false information that could cause public panic, pornography and other content unsuitable for children, and social media posts that fuel addiction or unhealthy consumption habits (Article 7, Provisions on the Governance of Network Information Content Ecology 2019). Under the 2023 Generative AI Measures, providers that detect such information must report it to the authorities and restrict users from creating or spreading it (Article 14).
  • High Risk Service Providers. Content moderation is also a key theme in the recent concept of services with "public opinion attributes or social mobilisation capabilities". This term is used in each of the key AI regulations and denotes a class of service providers that are subject to heightened regulatory compliance obligations due to their ability to spread information to a large group of people. These services must not only meet dedicated filing requirements for their recommendation algorithms, but also undergo a security assessment with the CAC if they launch generative AI services (see Article 3, Provisions on the Security Assessment for Internet-Based Information Services Capable of Creating Public Opinions or Social Mobilisation 2018, and the other articles in this series).

Data Protection

Compared to content management, China's emerging AI governance system only briefly touches upon personal data protection, leaving much of this area to the current provisions in the 2021 PIPL and other relevant regulations. The 2021 PIPL regulates personal information handlers broadly, a category that all AI service providers fall under if they process personal data. Consequently, the statutory responsibilities of the 2021 PIPL will extend to AI companies, including the need to have a lawful basis and obtain consent to process personal data where necessary (for example, input, training, and output data of AI models), accountability obligations, provisions related to automated decision-making and data subject rights, and processing of sensitive data (for example, facial, voice, or other biometric information).

The goals of this pillar are to ensure that personal data processing does not harm users or otherwise undermine public order (these are objectives well enshrined in China’s data protection framework). With respect to AI services, policymakers will prioritise lawfulness of processing, transparency, sincerity, and accountability. The data protection provisions in the AI regulations currently in force are essentially a restatement of those in the 2021 PIPL. For instance, the 2022 Deep Synthesis Provisions indicate that training data will be subject to the 2021 PIPL and that companies may need to obtain separate consent when providing functions that edit face, voice, or auditory information (Article 14). The 2023 Generative AI Measures also set forth consent and accountability requirements regarding the training and input data of machine learning models (Article 7), while the 2021 Recommendation Algorithm Provisions apply special requirements on the use of personal data for behavioural and targeted advertising purposes (Articles 17-21).

This notwithstanding, data protection will by no means be a less important dimension of AI regulation in China. Indeed, the deference to the 2021 PIPL indicates that policymakers had already contemplated many compliance requirements that apply to AI services when formulating that law. In order not to conflict with existing law, Chinese AI regulation will be more targeted and focus on different types of products and services offered to businesses and consumers. The 2021 PIPL applies broadly to all personal information handlers and is more horizontal in its regulatory approach. By contrast, China's emerging AI regulations designate a subset of entities on a more vertical basis and apply heightened compliance obligations that complement existing rules under the 2021 PIPL. It is expected that future regulations will continue this trend.

Algorithmic Governance

Lastly, while both content governance and data protection implicate the use of algorithms, China's AI governance framework uniquely carves out algorithmic management as a distinct regulatory component. Crucial here is the need for organisations to ensure security, ethicality, and clarity of the algorithms they create and train.

  • Security. A crucial component of Chinese AI regulation is the role of security assessments. Administered by the CAC, these assessments involve complex filing procedures and organisational preparation. The origin of these assessments goes back as far as security certification under the multi-level protection scheme (MLPS) in the early 2000s, but their modern form has been re-tailored for cybersecurity review and cross-border data flows. They will also have a central role in AI governance, particularly for services with "public opinion attributes or social mobilisation capabilities". The CAC has created an online registry through which services that fall within this category must additionally submit information regarding their algorithms, including the type and use of each algorithm.
  • Ethicality. Policymakers have also indicated the necessity of aligning algorithms with well-defined ethical standards. To date, the requirements here have typically focused on whether services reflect "public order and morality" (with both terms undefined), but more specific obligations will arise in the future. For instance, the CAC prohibits the use of AI to generate any discriminatory content or decision based on race, ethnicity, beliefs, nationality, region, gender, age, occupation, or health (Article 4(2), 2023 Generative AI Measures). Additionally, the use of certain algorithms must protect vulnerable groups such as the elderly and labourers (Articles 19-20, 2021 Recommendation Algorithm Provisions). Several departments, led by the Ministry of Science and Technology, have formulated the Measures for Ethical Review of Science and Technology (Trial), requiring AI enterprises to establish science and technology ethics committees and conduct ethical reviews.
  • Clarity. Another goal of algorithmic governance in China is for companies to provide clear and transparent information with respect to their services, including advertising and recommended content. This can be seen in two contexts. First, regulators have stressed the need for platforms and other companies to create public external rules regarding how and why a particular user receives content (Article 15, Provisions on the Governance of Network Information Content Ecology 2019). This has even bled over to how influencers, content creators, and streamers interact with platforms and the types of products and materials they market to users. Second, AI services that generate human-like content (whether textual, visual, or auditory) must present clear, specific, and actionable annotation rules and make clear that content has been generated with the use of AI (Articles 8 and 12, 2023 Generative AI Measures).

Summary of Key Compliance Considerations: Table

Pillar: Content Moderation

Legal instruments:

  • Cybersecurity Law 2016
  • Administrative Measures for Internet-based Information Services 2011
  • Provisions on the Governance of Network Information Content Ecology 2019, effective 1 March 2020
  • 2022 Deep Synthesis Provisions
  • 2023 Generative AI Measures

Key considerations: This pillar encourages the use of AI technology to produce, reproduce, and publish content encouraged by the state while prohibiting the dissemination of illegal and undesirable information.

Some explicit requirements include:

  • Adding labels to identify content generated by deep synthesis technology.
  • Conducting security assessments and filings with the CAC.
  • Publishing transparency information regarding AI services.
  • Establishing internal rules for identifying and taking down illegal content.

While content governance rules apply to many online service providers, it is expected that policymakers will draft more tailored requirements for certain AI services.

Pillar: Data Protection

Legal instruments:

  • 2021 PIPL
  • Data Security Law 2021
  • 2021 Recommendation Algorithm Provisions
  • 2022 Deep Synthesis Provisions
  • 2023 Generative AI Measures

Key considerations: Organisations must have a lawful basis and obtain consent where necessary (including separate consent, if applicable) for the use of personal information in accordance with the 2021 PIPL.

They must provide users with the ability to select or remove user tags specific to their personal characteristics used in algorithmic recommendation services, including for the placement of behavioural advertising or personalised content.

Companies must also handle requests from individuals in a timely manner, such as requests to access, copy, amend, supplement, or delete their personal information.

Companies must adhere to the principles of legality, sincerity, transparency, and accountability.

Pillar: Algorithmic Governance

Legal instruments:

  • 2021 Recommendation Algorithm Provisions
  • 2022 Deep Synthesis Provisions
  • 2023 Generative AI Measures
  • Measures for Ethical Review of Science and Technology (Trial)

Key considerations: Algorithmic governance is an emerging pillar of Chinese regulation that supplements existing laws.

Requirements will vary for different types of algorithms but, at a minimum, companies must ensure their services do not discriminate based on protected characteristics, perform security assessments for certain algorithms (that is, those involved in generative AI), and file algorithmic information with the CAC.

Regulators in China are currently developing more tools to understand and gather information on how algorithms work, including their training protocols, data sets, parameters, and mechanisms.
