The People’s Republic of China broke new ground by announcing draft regulations on the widespread use of algorithmic recommendation technology. The regulations are, according to one expert, the first of their kind globally. And because China will soon exceed one billion internet users, these regulations will cover roughly one in five internet users on earth.
The Internet Information Service Algorithmic Recommendation Management Provisions were released after passage of China’s comprehensive, EU General Data Protection Regulation-like privacy regime, the Personal Information Protection Law, which is set to go into effect Nov. 1. The regulations also draw on three other Chinese laws: the Cybersecurity Law, the Data Security Law and the Internet Information Services Management Rules. Public comments on the draft were accepted through Sept. 26, and it is not immediately clear when the regulations will go into effect or whether subsequent drafts or official guidance will be published.
The regulations are a series of 30 articles that aim to create comprehensive rules on the widespread use of algorithms online, covering search filters, personalized recommendations, information sharing services, user rights and more. They charge the Cyberspace Administration of China with “nationwide algorithmic recommendation service supervision, management, and law enforcement,” which could yield useful compliance guidance.
Scope
The regulations are broad in scope and apply to certain types of algorithmic recommendation technology, including generation and synthesis, personalized recommendation, ranking and selection, search filtering, dispatching and decision-making, and other such algorithms that provide information content to users.
Who the regulations apply to is likewise broadly defined: covered parties are internet information service providers that use covered algorithmic models within China. Practically speaking, this means almost any organization providing online content services (social media platforms, private online education, entertainment streaming and more) that uses algorithmic modeling is covered by these regulations. Like PIPL, the regulations focus mostly on commercial applications of algorithms. They have extraterritorial reach because PIPL, the regulations’ enabling statute, applies to foreign service providers that handle covered Chinese data or are active within China.
Article 7 of the regulations imposes several noteworthy legal duties on service providers. They include strengthening information content management, establishing and completing feature databases to identify unlawful and harmful information, optimizing database entry standards, norms and processes, and ensuring that each piece of algorithmically generated or synthetic information is marked with an indicator prior to dissemination. Service providers also have affirmative duties when they encounter unlawful information within their algorithmic systems: they must immediately cease transmission, delete it, prevent its spread, preserve relevant records and report the incident to the CAC.
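To make these duties a little more concrete, the following is a minimal, hypothetical Python sketch of how a provider might wire such checks into a content pipeline. Every name in it (ContentItem, mark_generated, handle_unlawful, the label string and the audit-log fields) is an assumption made for illustration, not terminology or a required implementation drawn from the regulations.

```python
# Purely illustrative sketch of the Article 7-style duties described above.
# All names, labels and report fields here are assumptions made for the
# example; none of them are terms defined by the draft regulations.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ContentItem:
    item_id: str
    body: str
    is_generated: bool              # produced by a generative/synthetic model
    label: Optional[str] = None     # indicator attached before dissemination


# Stand-in for a "feature database" of unlawful and harmful information.
UNLAWFUL_TERMS = {"example-banned-term"}


def mark_generated(item: ContentItem) -> ContentItem:
    """Mark algorithmically generated or synthetic content before dissemination."""
    if item.is_generated:
        item.label = "algorithmically-generated"
    return item


def handle_unlawful(item: ContentItem, audit_log: list) -> bool:
    """If the item matches the feature database, cease transmission,
    preserve a record and queue a report to the regulator."""
    if any(term in item.body for term in UNLAWFUL_TERMS):
        audit_log.append({
            "item_id": item.item_id,
            "action": "transmission ceased, item deleted",
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "report_queued": True,  # report to be submitted to the regulator
        })
        return True  # caller should drop the item from all feeds
    return False
```

In practice, a provider would presumably fold checks like these into its existing moderation and logging infrastructure rather than run them as a standalone module.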
Purpose
The regulations cite many purposes for regulating the use of algorithmic models. Somewhat obvious purposes include the protection of national security and the standardization of algorithm regulations under the aforementioned internet-related laws. But many of the articles focus on the need to minimize harm to individuals online. Harms include price discrimination (Article 18), manipulating user accounts (Article 13), exploiting gig workers (Article 17) and more. Because this regime is the first of its kind to target such emblematic 21st-century societal woes, important questions surrounding causation, law enforcement and statutory interpretation will remain unanswered for the time being. The regulations’ prohibition on using algorithms to induce user addiction reflects the government’s wider attempt to strengthen control over the flow of information online.
Online addiction (which generally includes gaming addiction) has a decades-long history in China and was officially recognized by Chinese authorities as a mental health disorder in 2008, a characterization some international bodies have since echoed. Article 8 of the regulations confronts online addiction by prohibiting the use of algorithmic models that lead users to “addiction or high-value consumption,” though those terms are not clearly defined. It is also not immediately clear how these phrases will be interpreted by the CAC. Absent agency interpretation or other such guidance, recent context around this issue can at least hint at the types of societal consequences of online addiction that policymakers at the CAC are hoping to avoid, particularly among minors.
China recently announced national rules restricting gameplay hours for minors in an effort to curb a rise in near-sightedness and poor academic performance among children across the country. The opening of so-called internet boot camps (where attendees attempt to control and mitigate their dependency) across China has garnered international controversy over allegations of mistreatment of attendees, some of them minors. In fact, minors are among those most likely to suffer from internet addiction in China; recent studies indicate adolescents and minors are more likely to develop unhealthy dependencies on internet and gaming usage, the consequences of which can be devastating.
Therefore, it is no surprise the regulations, in addition to a blanket prohibition on addiction-inducing algorithmic models, elevate anti-addiction and pro-mental health protections for minors. Specifically, the regulations state providers may not “use algorithmic recommendation services to lead minors to online addiction” or “lead the minor toward harmful tendencies or may influence minors’ health in other ways” (Article 16). The regulations also establish affirmative duties to “make it convenient for minors to obtain information content beneficial to their physical and mental health” and to develop models that serve content suited to minors and their specific characteristics (Article 16).
China’s governance of the use of algorithms as an attempt to control online addiction involves novel concepts the rest of the world’s governments have yet to fully explore. How, for example, will addiction be determined in court? What do plaintiffs need to prove to establish actual and proximate cause in a tort action? What will subsequent drafts of the algorithm regulations say about online addiction? And since the final draft of PIPL unexpectedly added protections for minors that were absent from prior drafts, could we expect subsequent drafts of the regulations to contain more provisions aimed at protecting minors?
Expert commentary
Helen Toner of Georgetown University and Rogier Creemers of the DigiChina project at Stanford’s Cyber Policy Center say these regulations are groundbreaking, as they are the first comprehensive government regulations that squarely target algorithms and recommendation models. Toner says some aspects of the regulations will seem unusual to Western observers because they address China’s controlled information environment with aspirational-sounding mandates of “upholding mainstream value orientations” and disseminating “positive energy” (Article 6). But, Toner says, if elements of these regulations are successful, they are likely to be adopted more widely given international developments in online regulation.
Creemers observes these regulations reflect an intent to impose a regulatory system that classifies algorithms by type of application and level of impact. In other words, the importance of an algorithm will be measured by the extent of its potential political consequences, its number of users, the sensitivity of the user data involved and the degree to which it affects users’ activities, he says, adding the CAC is authorized to require public opinion-related algorithms to register with officials. But while the regulations enhance the CAC’s authority and industry oversight, Creemers notes the fines for violating certain parts of the regulations appear minimal (roughly US$4,600) compared to the size of the multibillion-dollar giants that will be regulated, though Chinese lawmakers may authorize greater fines in the future. (It should also be noted that PIPL imposes hefty fines for noncompliance and potential legal liability for individuals who violate the law.)
Perhaps the most immediate drawback of the regulations is their sheer novelty: because they are the first of their kind to regulate an area Toner notes has long gone undiscussed in policy circles, it is unclear how the regulations will be applied and enforced, and whether helpful guidance from agencies or China’s judiciary will be made public ahead of PIPL’s Nov. 1 effective date. This concern is echoed by other commentators, who say that while the regulations are earnest attempts to confront serious social problems, it remains to be seen whether codifying state control over content streams and their algorithms can effectively contain new content that emerges organically and becomes influential, even with state control over the algorithms that fuel public attention.