* This article is reproduced from Practical Law with the permission of the publishers.
As artificial intelligence (AI) technology continues to advance, legislative discussions and practices regarding AI have proliferated. The EU's AI Act is coming to fruition, and China is actively exploring options for regulating AI technology. As the first in a series of articles on AI, this article addresses the evolution and pillars of China's emerging system for AI governance. It discusses how China's new regulatory approach to AI fits within the country's larger data protection framework and introduces the main pillars of China's AI governance regime, including a table summarising key compliance considerations arising from a patchwork of AI-related laws, regulations, policies, and standards.
If you would like to subscribe for our newsletters and be notified of our events on China Technology, Media, and Telecommunications regulatory updates, please contact James Gong at [email protected].
In recent years, artificial intelligence (AI) has attracted enormous attention from companies, regulators, and policymakers across the world, who have all become increasingly convinced of the technology's strategic importance and potential pitfalls. While the term continues to be used in a broad context, a growing number of companies have begun to incorporate complex algorithms trained through machine learning models into their services to expand their functionality and performance. Much of this debate is not new: concerns over algorithmic decision making, big data, and behavioural profiling have been a crucial dimension of policy discussions for some time. What is new is the degree to which AI services (including large language models (LLMs) such as ChatGPT) can mirror human output with increasing sophistication, and the alarm regulators have signalled over their impact.
In China (PRC), AI has drawn considerable attention, with many tech companies announcing plans to introduce their own products and services to consumers and businesses. These products have already raised various legal issues. In 2019, a face-swapping app sparked major privacy concerns because it collected users' facial recognition features to create human images without consent. Celebrities and other public figures have also fallen victim to numerous deepfake campaigns, which have caused public scandal and raised extensive discussion in Chinese society about the potential for these technologies to destabilise political order or spread disinformation. Additionally, a growing chorus of complaints has targeted algorithm-based price discrimination (known in Chinese as "big data killing" (大数据杀熟)), as more and more individuals report unfair pricing online.
These issues have sparked a response from regulators, who since 2016 have put in place a complex legal framework for cybersecurity and data protection and have since turned their attention to novel problems raised by the variety of AI products on the market. How will this new regulatory approach to AI fit within China's larger data protection framework, and what are its key components? This Practice Note addresses the evolution and pillars of China's emerging system for AI governance.
The current attitude of Chinese regulators towards AI is both supportive and prudential, reflecting a balance between promoting the use and value of AI and placing safeguards around the technology to prevent social and economic harms. Over time, the Chinese government has increased its oversight of these technologies, but in a targeted manner, often preferring to set rules on specific AI technologies vertically rather than on the entire industry, so as not to stifle innovation. Apart from the high-level cybersecurity and data protection requirements (as well as the specific provisions issued by the Cyberspace Administration of China (CAC)), most regulatory documents are non-mandatory and still in an experimental phase. In the long term, more concrete requirements will be included in a future AI law and in provincial-level regulations. It is predicted that regulatory requirements for AI will become increasingly refined over time and will likely extend to other sectors.
Alongside other notable policy developments around data and cybersecurity in the late 2010s, the path towards comprehensive AI regulation has emerged in the context of China's rise as a cyber-power (网络强国). Key to this is the government's intention to set its own rules over cyberspace and leverage the power of digital technologies to achieve its economic and social goals while managing and mitigating known risks. With respect to AI, policymakers in China have for some time viewed the technology as critical for the future development of innovation, smart industrial systems, and digital life.
At least since 2017, the Chinese government has made AI governance a priority and currently explicitly uses the term in its numerous planning documents and regulations. Over time, three distinct stages of development have come into focus, each representing a different balance of interests and approaches to the issue. China initially prioritised industry self-regulation, with key planning documents promoting the need for industry to invest in the technology and seize market opportunities. With the solidification of China's cybersecurity regime, policymakers then shifted to issuing national standards to encourage industry participation and set the foundation for future rules. Lastly, in recent years, Chinese regulators (led by the CAC) have promulgated concrete rules to target specific industries and technologies. It is expected that this trend will continue with the creation of an AI law in the near future.
Like other countries, China initially took a "wait and see" approach towards governing AI and preferred to frame the issue through the lens of seizing strategic opportunities and developing a globally competitive industry. At the time, while policymakers began to understand the domestic and international compliance risks associated with the technology (mainly in the context of security), the priority was to establish a foundation where the Chinese AI industry could flourish without strict regulatory barriers.
Strategic Plans
Nonetheless, the Chinese government took a keen early interest in AI and began to lay the groundwork for future regulation. Specifically:
Establishment of Advisory Committees
To implement the 2017 Plan, the Chinese government established two institutions: the AI Strategic Advisory Committee (Committee) (新一代人工智能战略咨询委员会) and the AI Planning and Promotion Office (新一代人工智能规划推进办公室). Led by the Ministry of Science and Technology, these bodies became the first institutions solely responsible for overseeing AI policy on the national level. In 2019, the Committee created an additional specialist body to strengthen research on the legal, ethical, and social issues related to next-generation AI technologies, as well as deepen international co-operation.
Industry Self-Discipline
Despite this, Chinese regulators during this period did not issue any mandatory rules that specifically targeted AI technologies. Due to the lack of regulation, several organisations (mainly internet companies) established their own internal AI management systems. They also co-operated with other trade bodies to draft self-regulatory frameworks and promote guiding principles for the industry. For instance, in 2019 the Shenzhen AI Industry Association launched the New-Generation AI Industry Self-Regulation Convention (新一代人工智能行业自律公约) while the AI Industry Alliance issued the AI Industry Self-Regulation Convention (人工智能行业自律公约).
After several years of unsupervised development, Chinese authorities stepped up their efforts to formulate industry standards covering algorithmic technologies and machine learning applications. These national standards, which reflect industry best practice and are non-mandatory, ushered in a new stage of AI regulation by refocusing the role of state organisations in overseeing the technologies. This stage saw increased direct government participation in AI governance but few concrete regulatory obligations. In other words, China continued to lack a comprehensive legal architecture through which state bodies would impose direct oversight and enforcement.
In August 2020, key administrative bodies (including the CAC and the Ministry of Industry and Information Technology) jointly issued the Guidelines to the Construction of the National New-Generation AI Standard System 2020, which set out eight structural aspects of the AI standard system to guide technical standards formation. These include basic components of other industrial regulations such as software and hardware platforms, key area technologies, and industrial applications, but also regulatory principles like AI security and ethics. The purpose of this guidance was to prepare technical committees to gain first-hand knowledge of how AI technologies work and issue sector-specific standards from the bottom-up.
Indeed, technical standardisation in China often reflects an experimental regulatory attitude, where officials work directly with industry and academic experts to prepare guidance for future implementation of rules. Following these guidelines, the National Information Security Standardization Technical Committee (TC260) (which is the leading standards body for digital technologies), drafted several national standards on machine learning and algorithm ethics. Additionally, a separate standards body TC28 (that is, the National Information Technology Standardisation Technical Committee) formulated its own AI-related standards, including a code of practice (GB/T 42755-2023) for labelling and annotating input data in machine learning models.
The lack of direct regulatory oversight at this stage can be partially explained by the priority of Chinese legislators at the time to finalise and upgrade China's key data protection laws, including the Personal Information Protection Law 2021 (2021 PIPL) and the Data Security Law 2021, which supplement the Cybersecurity Law 2016 (effective 1 June 2017). Regulators had for a while recognised the need to directly address some of the well-known and documented risks of AI but approached combating them through China's larger data governance regime, which was just taking shape. With the finalisation of these laws, Chinese policymakers could begin to issue AI-specific rules in a more targeted, sector-specific fashion.
In contrast to TC260's and TC28's non-binding standards, the CAC has over the past few years rapidly promulgated technology-specific, mandatory regulations that have drastically changed the landscape of AI governance in China. The three most important of these are the 2021 Recommendation Algorithm Provisions, the 2022 Deep Synthesis Provisions, and the 2023 Generative AI Measures.
These measures complement and expand on other rules currently in force that AI technologies implicate, such as those related to content moderation, intellectual property, and cybersecurity. One important aspect of this stage is the bespoke approach Chinese regulators have taken towards these technologies. Each of the regulations currently in force has a more targeted scope than other laws that govern companies and technology providers broadly. For instance, the 2021 Recommendation Algorithm Provisions apply narrowly to services that push and order content or make recommendations to users via algorithms, while the 2023 Generative AI Measures target services that can automatically create human-like textual output.
Another noteworthy aspect of this phase is the increasing number of AI regulatory documents developed by provincial governments. Pilot cities such as Shenzhen and Shanghai have taken the lead in issuing regulations to promote the development of AI and create local-level administrative experiments to attract AI investment and political approval from the central government. Enacted in 2022, the Shanghai Regulations on Promoting the Development of AI Industry 2022 and the Shenzhen Special Economic Zone Regulations on AI Industry Promotion 2022 both call for the creation of AI Ethics Committees to oversee AI development, conduct audits and assessments, and promote industrial parks where input and training data may be traded easily and lawfully.
In addition, as China gradually refines its AI regulatory toolkit, agencies that played a major role in the second phase have begun to formulate standards to facilitate the implementation of the 2023 Generative AI Measures. For example, on 29 February 2024, TC260 released TC260-003 Basic Security Requirements for Generative Artificial Intelligence Service, which provides enterprises with practical guidance on compliance with the 2023 Generative AI Measures, covering training data security, model security, and internal measures, as well as instructions on conducting security assessments.
These regulations reflect growing consensus among regulators that the risks posed by AI deserve special regulatory oversight and that industry self-regulation is insufficient to combat the major social and economic issues emerging from digital markets and technologies. Indeed, the emergence of a distinct paradigm for AI governance in China follows the solidification of the country's data governance regime over the past years, and should be seen as a related, but separate extension of that framework. To this end, Chinese policymakers have indicated in the State Council's 2023 Legislative Work Plan that they will formulate a general AI law in the coming years.
China's AI governance framework attempts to address multiple interconnected yet distinct legal issues originating from the technology, including content governance, personal data protection, and algorithmic governance.
The first pillar of China’s AI regulatory regime concerns the governance and management of online content. With respect to AI-generated content (such as the output text of an LLM), regulators will prioritise traceability and authenticity of the content to restrict circulation of information that would violate well-established information services regulations. One way of interpreting this goal is that any content created through an AI tool must be traceable by law enforcement both at its source of creation and dissemination.
Compared to content management, China's emerging AI governance system only briefly touches upon personal data protection, leaving much of this area to the current provisions in the 2021 PIPL and other relevant regulations. The 2021 PIPL regulates personal information handlers broadly, a category that all AI service providers fall under if they process personal data. Consequently, the statutory responsibilities of the 2021 PIPL will extend to AI companies, including the need to have a lawful basis and obtain consent to process personal data where necessary (for example, input, training, and output data of AI models), accountability obligations, provisions related to automated decision-making and data subject rights, and processing of sensitive data (for example, facial, voice, or other biometric information).
The goals of this pillar are to ensure that personal data processing does not harm users or otherwise undermine public order (these are objectives well enshrined in China’s data protection framework). With respect to AI services, policymakers will prioritise lawfulness of processing, transparency, sincerity, and accountability. The data protection provisions in the AI regulations currently in force are essentially a restatement of those in the 2021 PIPL. For instance, the 2022 Deep Synthesis Provisions indicate that training data will be subject to the 2021 PIPL and that companies may need to obtain separate consent when providing functions that edit face, voice, or auditory information (Article 14). The 2023 Generative AI Measures also set forth consent and accountability requirements regarding the training and input data of machine learning models (Article 7), while the 2021 Recommendation Algorithm Provisions apply special requirements on the use of personal data for behavioural and targeted advertising purposes (Articles 17-21).
This notwithstanding, data protection will by no means be a less important dimension of AI regulation in China. Indeed, the deference to the 2021 PIPL indicates that policymakers had already contemplated many of the compliance requirements applicable to AI services when formulating that law. In order not to conflict with existing law, Chinese AI regulation will be more targeted and focus on different types of products and services offered to businesses and consumers. The 2021 PIPL applies broadly to all personal information handlers and takes a more horizontal regulatory approach. By contrast, China's emerging AI regulations designate a subset of entities on a more vertical basis and apply heightened compliance obligations that complement existing rules under the 2021 PIPL. It is expected that future regulations will continue this trend.
Lastly, while both content governance and data protection implicate the use of algorithms, China's AI governance framework uniquely carves out algorithmic management as a distinct regulatory component. Crucial here is the need for organisations to ensure security, ethicality, and clarity of the algorithms they create and train.
Pillar: Content Moderation

Key Considerations: This pillar encourages the use of AI technology to produce, reproduce, and publish content encouraged by the state, while prohibiting the dissemination of illegal and undesirable information. The regulations set out a number of explicit requirements to this end. While content governance rules apply to many online service providers, it is expected that policymakers will draft more tailored requirements for certain AI services.

Pillar: Data Protection

Key Considerations: Organisations must have a lawful basis and obtain consent where necessary (including separate consent, if applicable) for the use of personal information in accordance with the 2021 PIPL. They must also provide users with the ability to select or remove user tags specific to their personal characteristics that are used in algorithmic recommendation services, including for the placement of behavioural advertising or personalised content. Companies must handle requests from individuals in a timely manner, such as requests to access, copy, amend, supplement, or delete their personal information, and must adhere to the principles of legality, sincerity, transparency, and accountability.

Pillar: Algorithmic Governance

Key Considerations: Algorithmic governance is an emerging pillar of Chinese regulation that supplements existing laws. Requirements will vary for different types of algorithms but, at a minimum, companies must ensure that their services do not discriminate based on protected characteristics, perform security assessments for certain algorithms (that is, those involved in generative AI), and file algorithmic information with the CAC. Regulators in China are currently developing more tools to understand and gather information on how algorithms work, including their training protocols, data sets, parameters, and mechanisms.