Data Newsletter | New Regulations on Personal Information Collection and Use by Internet Applications Solicit Public Comments
Lusheng Press Editor
06 Mar 2026
————Takeaways————
- The Cyberspace Administration of China (CAC) has issued the Provisions on the Management of Personal Information Collection and Use by Internet Applications (Draft for Comments). The Provisions focus on the collection and use of personal information by internet applications, particularly clarifying the responsibilities of multiple parties, including software development kits (SDKs), distribution platforms, and smart terminals. The latter two are required to carry out review and information archiving obligations. Additionally, when large internet platforms update or amend rules for personal information collection and use, they must publicly solicit opinions at least seven working days in advance.
- In a policy and regulation Q&A, the CAC clarified that for companies that have not met the threshold for mandatory security assessment for cross‑border data transfer but voluntarily apply for such an assessment, the assessment results remain binding, and the companies must conduct their cross‑border data transfer activities in accordance with those results.
- The Ministry of Public Security plans to formulate the Cybercrime Prevention Law to ensure comprehensive, full-chain prevention and control of cybercrime at both the pre-event and in-event stages. The draft for comments addresses several hot-button data compliance issues, including the obligation to identify and label AI-generated content, the duty of app distribution platforms to monitor illegal applications, and the responsibility of important data processors to label data. Violations may lead to fines of up to RMB 5 million or 10 times the illegal gains, and offenders may also face penalties such as revocation of business permits or business licenses.
————Regulatory Highlights————
On January 10, 2026, the CAC released the Provisions on the Management of Personal Information Collection and Use by Internet Applications (Draft for Comments) for public comment. The Provisions take internet applications as the primary subjects of governance and specify the responsibilities of various service providers, such as software development kit (SDK) operators, distribution platforms, and smart terminal manufacturers, in relation to the collection and use of personal information by internet applications. The Provisions generally follow the legal framework established under the Personal Information Protection Law, including principles such as legality, legitimacy, necessity, good faith, informed consent, and data minimization. Building on these foundations, they also introduce several new requirements.
- Clarifying responsibility allocation among different entities: Application and SDK operators shall bear the responsibility for the protection of personal information security. Concurrently, application operators are obligated to review the SDKs they embed, distribution platform operators are obligated to review the applications they distribute, and smart terminal manufacturers are obligated to review the applications they pre-install.
- Detailing specific rules for personal information collection and use by internet applications: The operational security management requirements for internet applications constitute the core section of the Provisions, comprising 15 articles. These articles set out detailed rules addressing practical issues such as how to present notices of personal information collection and use rules, the procedures for rule updates, and the methods for obtaining consent. They also establish specific requirements for invoking sensitive permissions such as access to contacts, camera/photo albums, and location information, including a general prohibition against accessing contacts, call logs, or SMS data for the purpose of collecting third parties' personal information. In addition, the Provisions reiterate the necessity principle for processing biometric information such as facial and fingerprint data, and require that personalized recommendation features include an option for users to disable them. Together, these requirements provide operators with detailed compliance guidance.
- Public disclosure obligations for large internet platforms regarding personal information processing rules: Internet applications with more than 50 million registered users or over 10 million monthly active users, and whose business models are complex, must publicly solicit opinions on updates to their personal information collection and use rules at least seven working days in advance.
- Responsibilities of other entities: For instance, distribution platforms must review apps before listing them, and must keep records of, and disclose, apps' historical violations. When smart terminals allow applications to invoke permissions such as calendar or call-log access, their manufacturers must obtain user consent via pop-up windows and provide more granular authorization options based on time, frequency, and precision.
CAC Releases Q&A on Data Export Security Management Policies and Regulations (January 2026)
On January 30, 2026, the CAC provided official responses to frequently asked questions concerning data exports. The main contents cover two aspects:
- Coordination rules for different data export compliance paths: For data processors that may legally carry out data export activities via standard contracts or export certification without being required to declare a security assessment under the Provisions on Promoting and Regulating Cross-border Data Flows, a voluntarily declared assessment remains binding, and they shall act in accordance with the assessment results. For data processors that have already exported data via standard contracts or certification, if their cumulative annual data export volume subsequently reaches the threshold for a security assessment declaration, the declaration shall cover all personal information exported during that year, including the portion originally exported via standard contracts or certification.
- Scope of application for the standard contract for cross-boundary data flow in the Guangdong-Hong Kong-Macao Greater Bay Area (GBA): The aforementioned standard contract applies specifically to cross-boundary data activities within the GBA. When providing personal information to other overseas organizations or individuals outside this context, entities shall still fulfill compliance obligations including entering into standard contracts, obtaining personal information export certification, or filing for data export security assessments, as applicable to the particular data export activity.
On January 13, 2026, the Ministry of Public Security released the Cybercrime Prevention Law (Draft for Comments). With respect to data-protection-related provisions, the draft centers on the prevention and governance of the cybercrime ecosystem. It requires the establishment of a network infrastructure resource management system based on real-name identity authentication and introduces an ecosystem governance regime targeting the entire illicit industry chain, covering technical support, payment settlement, information acquisition, and traffic-diversion services. The draft further stipulates that network operators, providers of internet information services, and manufacturers of mobile smart devices must fulfill corresponding obligations relating to data security, artificial intelligence oversight, the protection of minors, and the prevention of online abuse. It also proposes the creation of a mechanism for combating cross-border cybercrime. Violations may be subject to fines of up to RMB 5 million or ten times the illegal gains, as well as revocation of business permits or business licenses.
————Data Standards————
In 2026, China plans to develop over 30 national standards in the data sector, focusing on cutting-edge industries such as intelligent agents and embodied intelligence. At the same time, it will accelerate the introduction of a batch of urgently needed standards for public data and data infrastructure, and establish catalogs for identifying important data in sectors such as industry, telecommunications, seed industry, and aerospace.
On January 6, the National Standardization Administration issued plans for six recommended national standards regarding cybersecurity, including the Cybersecurity Early Warning Guidelines, Technical Requirements for Mobile Terminal Security Management Platforms, and the Data Security Capability Maturity Model. The deadline for reporting and approval is April 30, 2027.
————Cross-Border Data Transfer————
On January 30, eight departments including the Ministry of Industry and Information Technology (MIIT) issued the Automotive Data Export Security Guidelines (2026 Edition), which provides detailed determination rules for important data in the automotive sector across six major scenarios, such as research and development design, manufacturing, and driving automation.
On January 27, the secure and convenient cross-boundary data channel between Shenzhen and the Hong Kong SAR was officially opened to the public. Its initial application is in the medical field: examination images and reports of Hong Kong residents treated at two designated medical institutions in Shenzhen, such as the University of Hong Kong-Shenzhen Hospital, can be transmitted back to Hong Kong's "eHealth" platform via this secure channel. In 2026, this service is estimated to benefit over 300,000 Hong Kong residents seeking medical treatment in Shenzhen.
————Artificial Intelligence————
On January 12, six group standards in the field of artificial intelligence and data elements, including the Quality Evaluation Indicators for Artificial Intelligence Datasets led by the National Industrial Information Security Development Research Center, were officially released. These standards cover areas such as AI dataset quality evaluation, industrial dataset construction, data governance, intelligent agents, large model knowledge bases, and AI management capability maturity, and took effect in January.
On January 26, the Hangzhou Internet Court concluded the country's first infringement dispute arising from generative "AI hallucinations". The ruling clarified that for infringements caused by inaccurate information generated by AI, the general principle of fault-based liability applies to AI service providers. A service provider's duty of care is limited to strictly reviewing information prohibited by law, prominently reminding users of the inherent limitations of AI-generated content that may lead to inaccuracies, and improving the accuracy of generated content through industry-standard technical measures. The court held that the generative AI service provider in this case had fulfilled its corresponding duty of care and that its conduct did not constitute infringement.
On January 9, CAC released the registration information for generative AI services in 2025. As of December 31, 2025, a total of 748 generative AI services had completed registration, and 435 generative AI applications or functions had completed filing.
On January 7, CAC publicly released the fifteenth batch of domestic deep synthesis service algorithm registration information. A total of 572 deep synthesis service algorithms completed registration in January.
————Data Enforcement————
On January 16, the Shanghai Internet Information Office released eight typical cases of network data security law enforcement in 2025, all based on the Regulation on Network Data Security Management. The cases involve entities including logistics and transportation companies, network technology companies, file query systems, hotel services, coffee enterprises, and SDK providers. Among them, three cases involved data leaks caused by failure to fulfill primary responsibilities for network data security as required by law; two involved illegal cross-border data flows due to failure to comply with network data cross-border security management requirements; and three involved infringement of personal information rights due to failure to effectively fulfill personal information security protection obligations.
On January 22, the Supreme People's Procuratorate released six typical cases of public interest litigation for personal information protection. The cases involve excessive use of personal information by smart parking payment systems, improper management of facial recognition information in residential communities, false job advertisements to obtain applicants’ information, online doxing, medical staff illegally selling deceased persons’ information for funeral marketing, and “scalpers” illegally using third-party information to reserve spots at popular tourist sites for resale.
————Mobile Application Regulation————
On January 9, CAC issued the Personal Information Protection Policies and Regulations Q&A (January 2026), clarifying the scope of personal information and sensitive personal information, the content of personal information protection impact assessments for facial recognition technology, and the conditions for designating and reporting a person in charge of personal information protection.
On January 6, MIIT reported the first batch of 2026 (the 54th batch in total) of Apps and SDKs that infringe upon users' rights and interests. The notification involves 22 Apps and SDKs, covering issues such as illegal or excessive collection of personal information, illegal use of personal information, failure to clearly disclose the list of collected personal information, mandatory, frequent, or excessive permission requests, compulsory automatic subscription renewal, random redirection of information windows, and inadequate disclosure of SDK information.
On January 5, the National Computer Virus Emergency Response Center released 71 mobile applications that illegally collected and used personal information. The notification covers 13 types of non-compliant behaviors, three of which involved more than twenty mobile applications each. These include: failing to list, one by one in the privacy policy, the purposes, methods, and scope of the personal information collected and used by the app (including entrusted third parties or embedded third-party code and plug-ins); failing to provide users with means or channels to withdraw their consent to personal information collection; and failing to adopt corresponding security measures such as encryption and de-identification.
————Worldwide News————
On January 27, the European Commission started two sets of specification proceedings to assist Google in complying with its obligations under the Digital Markets Act (DMA). In the first proceeding, the Commission intends to specify how Google should grant third-party AI service providers equally effective access to the same features as those available to Google's own services under the Android operating system, ensuring effective interoperability. The second proceeding focuses on the obligation to grant third-party providers of online search engines access to anonymised search data held by Google Search on fair, reasonable, and non-discriminatory (FRAND) terms.
Artificial intelligence compliance issues have garnered widespread attention in the European Union. Following investigations initiated in 2023 against X concerning notice-and-action mechanisms, restrictions on illegal content, and personalized recommendations, on January 26, the European Commission launched a new formal investigation against X under the Digital Services Act (DSA), focusing on the risks associated with the deployment of X's AI tool, Grok, including the spread of illegal content and gender-based violence. Prior to this, the Italian data protection authority had issued warnings to AI platforms such as Grok and Gemini regarding the widespread existence of deepfake synthetic content. Italy is now cooperating with the Irish Data Protection Commission and may take further enforcement actions against X in the future. Meanwhile, on January 5, the French National Commission on Informatics and Liberty issued a statement clarifying the information obligations of AI developers under the General Data Protection Regulation, requiring all developers to inform data subjects, within one month of obtaining personal data and in an easily understandable manner, about the purposes of data usage and the processing methods.
On January 14, the European Medicines Agency and the U.S. Food and Drug Administration jointly established ten core principles for the use of artificial intelligence in the pharmaceutical sector, including human-centricity, risk-based approach, life cycle management, and data governance and documentation.
On January 14, the U.S. Federal Trade Commission (FTC) reached a settlement with General Motors (GM). In January 2025, the FTC alleged that GM used a misleading enrollment process to get consumers to sign up for its OnStar connected vehicle service and Smart Driver feature, failed to disclose its collection of consumers' precise geolocation and driving behavior data, and even sold the data to third parties. The final order approved by the Commission imposes a five-year ban on GM disclosing consumers' geolocation and driving behavior data to consumer reporting agencies. For the entire 20-year term of the order, GM must fulfill obligations including obtaining affirmative express consent, providing a personal information request mechanism, and allowing consumers to disable or withdraw authorization for the collection of location and driving behavior data.
On January 7, Singapore's Personal Data Protection Commission issued four enforcement decisions. Four companies — Air Sino-Euro Associates Travel, jewelry brand Goldheart, human resources software provider People Central, and website and email hosting service provider Singapore Data Hub — were fined varying amounts for collectively causing the leak of personal information belonging to one million individuals due to factors such as lacking vulnerability remediation processes, failing to conduct regular security reviews, and using outdated software.