IAPP Global Legislative Predictions 2026
In this annual release, the IAPP gathers insights from around the globe, providing an on-the-ground look and predictions for the year ahead.
Published: 9 Jan. 2026
Last updated: 28 Jan. 2026
Additional Insights
- A look ahead at digital policy in 2026 (Podcast)
- Data Privacy Day: 2026 Predictions (Video)
In 2025, artificial intelligence governance rapidly expanded, the global wave of privacy reform continued, cybersecurity threats led to strengthened regulation, and enforcement activities and capacities continued to grow.
In 2026, AI regulation will continue to evolve as countries transition from drafting to implementing AI regulatory frameworks. While EU member states will focus on AI Act implementation, countries in Latin America are introducing or finalizing risk-based AI laws. Meanwhile, countries in the Asia-Pacific region are adopting approaches ranging from comprehensive statutes to innovation-preserving frameworks.
Countries like Japan and Canada will continue efforts to modernize older privacy laws while others like India and Taiwan will work to implement newly passed legislation. Across Europe, implementation of the NIS2 Directive will continue. Countries including Vietnam and South Korea are working toward new or expanded cybersecurity laws.
As privacy, cybersecurity and AI governance continue to converge, 2026 will undoubtedly be a pivotal year. For an on-the-ground perspective, professionals from 69 countries and jurisdictions share their insights on what lies ahead.
Editor's note: While we try to include as many countries as possible, we recognize this is not a comprehensive list. If you are interested in submitting predictions for a country not yet featured, please reach out to IAPP Associate Editor Jennifer Bryant at jbryant@iapp.org.
Countries and Jurisdictions
Contributor: Mariano Peruzzotti, CIPP/E
Argentina is expected to maintain strong momentum with personal data protection and AI legislative initiatives throughout 2026.
The country's data protection framework, still governed by the Personal Data Protection Law enacted in 2000, remains largely unchanged despite multiple efforts to modernize it. Several bills drafted over the past decade have sought to update the PDPL to address technological advancements and align Argentina with global and regional standards.
Argentina's data protection authority, the Agency of Access to Public Information, recently indicated it is preparing a new draft bill, largely inspired by a 2023 proposal that closely followed the EU General Data Protection Regulation. That earlier bill, however, was never debated in Congress and ultimately lapsed. While the data protection authority's current draft appears to be at an early stage, expectations are high that legislative discussions could resume in 2026.
The protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, known as Convention 108+, may finally enter into force in 2026. The protocol, which has been open for signatures since 2018, only requires five additional ratifications to take effect. Should that occur, Argentina — along with other signatory states — would likely move to align domestic frameworks with the convention's enhanced provisions, potentially reinvigorating local data protection reform efforts.
The debate around personal data protection has also increasingly become intertwined with discussions on AI governance. Multiple AI-related bills are currently under consideration. Some mirror the European approach, seeking to establish a comprehensive regulatory framework that categorizes AI systems according to their potential risks. Other bills focus on specific issues like deepfakes; the use of AI in education, marketing, national security or law enforcement; and the dissemination of misinformation.
These topic-specific bills are expected to continue surfacing in Congress unless a broader AI regulation is enacted, which would likely make these fragmented initiatives redundant. Notwithstanding this, current general laws and principles, in addition to soft law, remain applicable to AI systems.
On that note, the DPA issued a guidance document for the responsible use of AI by public and private entities in September 2024 and has since announced its intent to update it.
In sum, 2026 is likely to be a pivotal year for Argentina's regulatory convergence with international data and AI standards, laying the groundwork for the long-anticipated modernization of the personal data protection regime and the establishment of a coherent policy on AI.
Contributor: Olga Ganopolsky
Australia's privacy laws will build on the developments of 2025 and continue to evolve in 2026.
Although Australia is likely to see further Privacy Act reforms, a key part of the 2026 landscape will be determined by new guidance, the enforcement posture of the Office of the Australian Information Commissioner and emerging case law. These drivers will continue to be underpinned by consumer expectations and individuals' growing recognition that privacy is a human right.
The influence of areas adjacent to privacy will also be relevant. These include cybersecurity legislation; reforms to the Security of Critical Infrastructure Act 2018; the new social media ban under the Online Safety Act 2021, effective December 2025; and consumer protection laws regulating data sharing and digital environments.
Implementation of Privacy Act reforms
Automated decision-making. By 10 Dec. 2026, entities that fall under the Australian Privacy Principles must disclose in their privacy policies what personal information is used to support automated decision-making.
This means that, for a large part of the year, many entities will be focused on assessing which practices will come into scope for such disclosures. This will require reviewing and reassessing many data-handling practices in the context of broader data governance and AI-related initiatives.
Statutory tort for serious invasions of privacy. The statutory tort for serious invasions of privacy, which came into effect 10 June 2025, will continue to be developed. Plaintiffs are expected to plead the tort in a broad range of disputes as a cause of action giving rise to compensation or as support for injunction applications to prevent certain disclosures.
For example, in the first court case applying the statutory tort, the District Court of New South Wales granted an injunction on an interlocutory basis to prevent the publication of certain photos on the basis that it would support an alleged "campaign of extortion." In doing so, the court found that materials — such as wedding photos of non-public figures — were to be protected from publication. This decision was based on the grounds that misuse of private photographs that were never intended to be made public may amount to a serious invasion of privacy.
Pursuant to an exemption under the act, the statutory tort does not apply where an invasion of privacy involves the collection, preparation for publication or publication of material by a journalist — and other related parties. Litigation that will test and help define the scope of this exception is likely to emerge.
Proactive enforcement posture of the OAIC
The OAIC will continue to rely on guidance and determinations to establish new standards of compliance for novel uses of technologies and to reinforce the requirements of fairness and transparency. This approach builds on landmark determinations on uses of facial recognition, fairness and transparency requirements, a consistent focus on a culture of compliance and the working combination of technical and organizational measures taken by APP entities to address their requirements.
Though reforms to the Privacy Act have not — at this stage — created a new fair and reasonable test for data handling as previously recommended, the OAIC will be applying existing laws to develop a similar test by relying on a working combination of APPs. These APPs include: APP 3.5, requiring collection of personal information by fair means; APP 5.1, involving notification of data collection; and APP 1.4, which sets out what a privacy policy must include.
The OAIC will rely on its newly expanded power, introduced in December 2024, to issue compliance and infringement notices and impose penalties of up to AUD66,000 (approximately USD44,296). These enforcement actions will seek to ensure APP entities have privacy policies that comply with the act. These actions are expressly foreshadowed in a compliance sweep planned by the OAIC and reinforce the earlier focus on "sectors and technologies that compromise rights and create power and information imbalances including: the rental and property, credit reporting and data brokerage sectors; advertising technology (Ad tech) such as pixel tracking; practices that erode information access and privacy rights in the application of AI; excessive collection and retention of personal information; and systemic failures to enable timely access to government information."
Enforcement
The OAIC will expand enforcement processes under the civil penalty regime, aligning its approach with the Federal Court's decision in the Clinical Labs case. In this case, the Federal Court, for the first time, enforced a civil penalty of AUD4.2 million (approximately USD3 million) for failure to protect personal information. It also imposed a further AUD800,000 (approximately USD535,800) penalty for failure to adequately investigate a suspected data breach and another AUD800,000 for failing to report the breach once there were reasonable grounds to believe it constituted an eligible data breach under the Privacy Act, requiring notification to the OAIC and impacted individuals.
The OAIC will continue to implement proceedings for serious contraventions of the Privacy Act. The approach to enforcement will reflect that one incident can lead to multiple contraventions of the act and, hence, sizable penalty orders.
Further reforms
Passage of further reforms to the Privacy Act remains a possibility. However, the pace and content of reforms is far from certain.
Policy debates will remain highly influenced by the progress of reforms in adjacent areas, such as cybersecurity and AI, as well as enforcement activity by digital regulators and other agencies including the Australian Securities and Investments Commission, the Australian Competition and Consumer Commission and the Australian Transaction Reports and Analysis Centre.
There appears to be less of an appetite to create new prescriptive standards or to mirror high-watermark regulations such as the EU GDPR. Australian thinking will also be informed by the European Commission's Digital Omnibus. In December 2025, the federal government announced it would not proceed with mandatory guardrails under the National AI Plan, relying instead on existing laws.
Conclusion
Regulators will focus on enforcement and compliance with the Privacy Act, especially in data-rich business environments dealing with potentially vulnerable individuals.
For privacy practitioners, this means assisting organizations to ensure policies comply with the Privacy Act and supporting a practice and culture of compliance.
For individuals, privacy remains an important right, supported by regulatory enforcement and new legal action individuals can pursue — the statutory tort for serious invasions of privacy.
Contributor: Andreas Zavadil, AIGP, CIPP/E, CIPM
Several EU legal acts have entered into force, or will shortly, that apply in parallel to the EU GDPR and have implications for data protection law. Notable examples include the already applicable Data Act and Regulation (EU) 2024/900 on the transparency and targeting of political advertising.
The designation of competent authorities under the Artificial Intelligence Act is still pending. It remains unclear to what extent the Austrian Data Protection Authority will play a role in this regard. What is clear, however, is that the AI Act has already raised data protection questions. This is evident, for example, from the fact that the European Data Protection Board has addressed several questions concerning AI models and data protection in "Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models."
It is also worth highlighting that Austria has introduced a right to access information held by public authorities through the Freedom of Information Act. This development is significant from a data protection perspective as some decisions about whether certain information must be disclosed by public bodies require careful consideration of privacy concerns.
2026 will therefore continue to be of significant interest in the area of data protection.
Contributor: Charles Helleputte, CIPP/E
While 2026 is expected to resonate with the "simplification" of the EU digital rulebook stemming from the Digital Omnibus that everyone in the EU bubble is watching — read a Brussels melting pot where even a vol-au-vent can pass for a solid dish — this wave is unlikely to hit Belgium's domestic digital lasagna the same way.
Indeed, 2026 should be another pivotal year for the implementation and deployment of digital laws that came out of the past European Commission's tenure or that might be approved — or changed — on a fast track.
Belgium's NIS2 law will move from registration and gap analysis into full-blown supervision and sanctioning. One can expect, or at least hope for, secondary acts and guidance from the Centre for Cybersecurity Belgium clarifying incident-notification thresholds, supply-chain obligations and the interplay with the Digital Operational Resilience Act and sector rules.
In parallel, Belgium will have to complete the designation of the critical entities within its jurisdiction by 17 July 2026. The EU Cyber Solidarity Act, in force since February 2025, may also translate into measures around the national Security Operations Center's capacity and Belgium's participation in cross-border "Cyber Shield" infrastructures and crisis protocols. A successor to the 2021-25 National Cybersecurity Strategy may be adopted, and one could also see tweaks to the CCB's mandate.
2026 will be a pivotal year for AI governance. The EU AI Act's phased implementation started in 2024 and has required member states to have notifying and market-surveillance authorities in place since August 2025. Although 21 "fundamental rights" authorities have been designated under Article 77, Belgium still had not adopted a full oversight framework as of late 2025. In 2026, one or more "AI framework acts" are expected to designate a central AI authority and coordination body; establish provisions for sanctions, inspections and regulatory sandboxes; and/or embed AI-specific rules into public-procurement and sectoral laws — such as those governing health care, mobility and public-sector use.
On privacy, a more risk-based enforcement framework is anticipated to finally develop, fueled by the wind of simplification blowing in the sails of Berlaymont and the European district. Belgium's data protection authority, the Autorité de protection des données, seems to have dealt with most of its past challenges and seems to be ready to fully deploy — even if its budget is still very insufficient to operate satisfactorily.
This is very much needed at a time when multiple frameworks have been developed to target similar topics — think dark patterns — and where effective interaction at both the domestic and European levels is key. Capacity building will be critical, as will trust and open communication among the overflow of stakeholders crowding the digital boat.
So, a few "big bang" acts are expected, but a dense 2026 pipeline of implementing laws, delegated acts and sectoral tweaks should hopefully allow smooth interactions between cybersecurity, AI and privacy.
Contributor: Nancy Volesky, CIPP/US
A new privacy commissioner is anticipated to take the helm of the Office of the Privacy Commissioner for Bermuda in 2026. While a new commissioner's individual style and approach may bring subtle or marked changes, certain previously started initiatives will continue.
PrivCom is expected to expand and enhance its guidance, undertaking further enforcement actions and promoting privacy rights and resources. "The Pink Sandbox," where organizations can receive regulatory oversight and expertise on innovations or projects, will carry on. Monitoring present and emerging issues and technologies — both local and global — for impacts on privacy and cybersecurity remains a priority, and the sandbox helps inform PrivCom of potential challenges in the developing landscape. Cooperation with domestic and international regulatory bodies on several fronts will also be ongoing.
AI continues to be on the regulatory radar. While no AI-specific legislation currently exists, concrete steps taken in 2025 should help advance its development throughout 2026. In the summer of 2025, the Bermuda Monetary Authority, the local financial services regulator, issued a discussion paper titled "The Responsible Use of Artificial Intelligence in Bermuda's Financial Services Sector," which was designed to solicit input from that community.
Through this discussion process, the BMA seeks to develop an outcomes-based regulatory framework for AI governance and oversight to lead responsible AI adoption and use. Follow-up consultations and industry workshops are expected in the first quarter of the year; a final proposal is anticipated in the third quarter.
The government of Bermuda published its internal AI policy last year to provide a structured framework for safeguarding human rights, privacy and public trust and continues to actively monitor this landscape.
In the area of security, the government will be active with legislative and operational cybersecurity and cybercrime initiatives. Further alignments are planned to meet global standards with the Budapest Convention on Cybercrime. The anticipated introduction or advancement of legislative amendments to other key laws, such as the Electronic Communications Act 2011 and the Criminal Code Act 1907, will strengthen PrivCom's ability to investigate and prosecute cybercrimes.
As part of the full implementation of the Cybersecurity Act 2024, the National Cybersecurity Incident Response Team will complete its key capacity-building phase. The provision of international technical assistance through the International Telecommunication Union is expected to advance critical infrastructure development, including a threat intelligence platform and the NCIRT's public-facing website, which will enhance the island's ability to proactively detect and analyze cyber threats.
Contributor: Ana Valeria Escobar, CIPP/E
Bolivia enters 2026 on the verge of a long-awaited data protection transformation.
Following the Financial System Supervisory Authority's 2025 fintech regulation, Supreme Decree No. 5384, which introduced EU GDPR-inspired controls into the financial ecosystem, the country is shifting from fragmented rules to a more coherent data governance model. This regulatory momentum has made privacy a new currency of trust in the market.
The Agencia de Gobierno Electrónico y Tecnologías de Información y Comunicación's draft data protection bill remains the foundation for the upcoming law. The draft bill aligns with GDPR principles, including extraterritorial scope, local representation, database registration and a future supervisory authority. While political transitions delayed adoption, the 2026 post-election period is expected to bring either a comprehensive law or a phased framework through an executive decree.
The fintech sector continues to act as a catalyst: Its compliance mechanisms should become a blueprint for other industries facing digital transformations. Yet, the challenge ahead lies in building institutional capacity, particularly a strong and independent data protection authority capable of turning regulation into enforcement.
If successful, 2026 could mark Bolivia's pivot from regulatory aspiration to regional relevance. Privacy, once peripheral, is now positioned to become part of the country's economic infrastructure, a foundation for innovation, cross-border credibility and digital growth.
Contributor: Tiago Neves Furtado, CIPP/E, CIPM, CDPO/BR, FIP
In 2026, Brazil is expected to move from designing its regulatory framework for AI to implementing it.
One of the key developments will be the progress of Bill No. 2338/2023, which was approved in the Senate in 2024 and is currently under review in the Chamber of Deputies. Following a full cycle of hearings and consultations, the Chamber's working group has signaled that a revised text is expected.
The current version of the bill adopts a risk-based approach: Systems considered to present unacceptable risks would be prohibited, and high-risk systems would be subject to governance, transparency and documentation obligations. The bill also establishes a national AI governance system led by Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados, reinforcing the central role of the data protection framework in AI oversight.
During the 2025 consultation period, copyright and the use of copyrighted works for training generative AI models emerged as one of the most debated topics. Discussions focused on whether AI developers should disclose the sources of training datasets and what mechanisms should exist for rights holders to request information or object to certain uses. Other recurring points included proportionality for small- and medium-sized companies and alignment with sector-specific regulatory expectations in areas such as finance, health care and elections.
In parallel, Congress is reviewing two proposals that may shape how data can be used to train AI systems. Bill No. 2370/2023 proposes specific rules for the use of personal data in AI training with enhanced safeguards for data involving children and adolescents. Bill No. 522/2022 introduces the definition of "neural data," focusing on signals or metrics capable of revealing cognitive states. This reflects an emerging concern with cognitive privacy and early-stage neurotechnology applications.
Cybersecurity governance is also evolving. The Legal Framework for Cybersecurity, under Bill No. 4752/2025, seeks to strengthen incident response capabilities and clarify institutional responsibilities, particularly in critical infrastructure environments. Its implementation is expected to closely interact with data protection supervision.
On the regulatory side, the ANPD's 2025–26 agenda indicates which topics are most likely to advance to guidance or regulation. Priority areas include automated decision-making and the right to human review, data protection impact assessments, the processing of children's data, biometric and facial recognition systems, transparency obligations for data brokers and aggregators, and technical criteria for anonymization and pseudonymization.
In parallel, the government advanced a digital competition proposal through Bill No. 4675/2025. The bill proposes creating a Superintendency of Digital Markets within the Administrative Council for Economic Defense and establishes criteria to identify platforms with systemic relevance, requiring local representation and allowing the authority to impose obligations such as transparency measures and interoperability. It aims to update competition rules in digital markets, focusing on economic structure rather than content moderation.
At the international level, Brazil and the European Commission advanced toward mutual adequacy. If the adequacy decision is finalized, data transfers between Brazil and the EU will no longer require additional safeguards, increasing legal certainty and facilitating cross-border AI development and deployment.
Contributor: Irena Koleva, CIPP/E
The evolution of digital legislation is poised to continue through 2026, bringing significant changes across multiple domains.
Following the May 2025 election of new members to Bulgaria's data protection authority, the Commission for Personal Data Protection, the direction of work remains to be defined for 2026. However, the finalization of the legal framework governing accreditation, particularly concerning the requirements for certification bodies and bodies monitoring codes of conduct, is expected to be an important area of focus.
Strategically, there is anticipation surrounding the CPDP's potential adoption of a comprehensive strategy addressing personal data protection and the protection of whistleblowers stretching into 2030. Additionally, Bulgaria is on the brink of adopting the drafted Digital Transformation Strategy for 2026 through 2030. This strategy has been proposed by the Ministry of e-Government and is pending adoption by the Council of Ministers, which would signify alignment with the government's priorities for Bulgaria's strategic development at a national level.
Cybersecurity legislation is another area facing delays, particularly concerning the implementation of the NIS2 and the Critical Entities Resilience Directive. A bill proposing amendments and supplements to the Cybersecurity Act passed its first reading in the National Assembly; a final reading is anticipated by early 2026. Following its adoption, the Council of Ministers will be responsible for amending and enacting secondary legislation, such as an ordinance on minimum requirements for network and information security and a methodology for identifying essential and important entities.
A new national cybersecurity strategy is also expected. Furthermore, a draft Law on the Resilience of Critical Entities, a law that transposes the CER Directive, has been released for public consultation and is expected to be adopted in 2026.
On the frontier of digital policy, a Bill on the Use and Development of AI was submitted to the National Assembly in October 2025. Its progress through the legislative process remains uncertain as it awaits parliamentary debate and votes.
Bulgaria is progressing in its efforts to transpose the EU Representative Actions Directive, which establishes requirements for collective actions across various fields, including data protection. Amendments to the Consumer Protection Act, crucial for implementing this directive, are in the legislative pipeline awaiting a final vote.
Approximately two years after the DSA took effect across the EU, Bulgaria has made strides with amendments to the Electronic Communications Act, promulgated 21 Nov. 2025. These amendments establish the administrative framework necessary for the DSA, introducing a multiauthority model. They also anticipate issuance of a cooperation instruction between the Communications Regulation Commission, the Council of Electronic Media and the CPDP to delineate responsibilities among the authorities involved.
Lastly, attention is also directed toward the anticipated progress in implementing the EU's Data Act and Data Governance Act, illustrating Bulgaria's ongoing commitment to aligning with EU regulations and advancing its digital governance infrastructure.
Contributor: Shaun Brown
Federal privacy reform is the biggest open question heading into 2026. With Bill C-27 having died on the Order Paper in early 2025, it is possible the federal government will introduce another bill in 2026 to replace the Personal Information Protection and Electronic Documents Act with new federal privacy legislation.
One of the biggest criticisms of Bill C-27 was its inclusion of the AI and Data Act. Tying two significant and largely unrelated legislative proposals together in one bill meant the passage of new privacy legislation also required federal lawmakers to agree on how to regulate AI. With this in mind, the government is expected to address privacy reform in a standalone bill.
Alberta recently overhauled its public sector privacy regime with the passage of the Protection of Privacy Act, which came into effect in 2025. The POPA includes several new provisions — among them mandatory privacy management programs and automated decision-making transparency — that, outside of Quebec, are unique in Canada's public sector. The POPA also has a broader scope, regulating "data derived from personal information" and certain "non-personal data."
Both New Brunswick and Prince Edward Island are currently reviewing their own public sector privacy laws, so it is possible some of these changes will be adopted as part of legislative reform in those jurisdictions in 2026.
With public sector reform complete, it seems likely Alberta will introduce amendments to its private sector legislation, the Personal Information Protection Act, in response to recommendations from the Standing Committee on Resource Stewardship published in early 2025. Among other things, the committee recommended the government introduce administrative monetary penalties, higher penalties for offenses and automated decision-making transparency requirements.
British Columbia may also reengage on private-sector privacy reform in 2026. Although nothing has been formally announced, the British Columbia Personal Information Protection Act is due for an update as it remains the only private sector privacy law in Canada without mandatory breach reporting.
Contributor: Jaime Urzua
Chile's privacy and AI governance will move from design to delivery in 2026.
The Senate will decide the oversight model and governance of Chile's bill regulating AI systems; the Agencia Nacional de Ciberseguridad will translate the cybersecurity framework into enforceable rules; and the new Agencia de Protección de Datos Personales will open its doors and kick off certification and supervision, setting a more complex and risk-based compliance landscape.
AI systems bill
In October 2025, the Chamber of Deputies approved Chile's AI bill, Bulletins 15.869-19 and 16.821-19, in its first constitutional stage and sent the bill to the Senate.
The second stage is expected to revisit core architecture issues. These include the bill's new transparency duty for synthetic audio, image, video and text — labeling obligations beyond "high-risk" uses — and how it will be operationalized, including watermarking or disclosures in user interfaces. They also cover the carve-out for open-source components and how liability attaches if such components are commercialized or used in high-risk systems; the risk taxonomy, including the definition of "significant risk," and alignment with global frameworks; incident-reporting channels and a specialized administrative procedure, which was removed by the lower house; and governance.
With the responsibilities centered in the Ministry of Science, Technology, Knowledge and Innovation, senators will likely test whether the APDP or another body should have supervisory powers and sanctioning procedure — including how economic benefit and recidivism aggravate fines.
Cybersecurity: ANCI's regulations, instructions and 2026 priorities
Throughout 2025, the ANCI focused on drafting and implementing the regulations and instructions needed for the entry into force of the Cybersecurity Framework Law. In 2026, the focus will shift to operationalizing the cybersecurity framework and defining priorities for essential service providers and operators of vital importance, reporting obligations and sector standards.
The ANCI is expected to focus on incident notification procedures and to develop technical standards addressing emerging security threats while continuing to designate operators of vital importance. These activities will shape compliance requirements for the public and private sectors, reinforcing Chile's alignment with global cybersecurity best practices and supporting its ambition to become a reference point in Latin American cybersecurity governance.
APDP set-up and first actions
With the new Personal Data Protection Law entering into force, the APDP should become operational and begin implementing its compliance infrastructure in 2026. Early focus will include certifying, registering and supervising voluntary data protection compliance programs, creating an auditable signal of accountability for adopters.
The PDPL is scheduled to take effect 1 Dec. 2026, so the year will be focused on capacity-building, procedures and tooling — for example, creating portals for certifications, addressing complaints and reporting breaches.
The APDP can be expected to issue practical guidance on designating data protection officers and the minimum contents of programs, including data inventories, internal reporting and whistleblowing channels, risk matrices and protocols. The APDP is also projected to coordinate with the ANCI on incident handling where security and privacy intersect.
Contributor: Barbara Li, CIPP/E
In 2026, China is expected to see major developments across privacy, AI and cybersecurity, accompanied by robust regulatory enforcement.
Amendments to China's Cybersecurity Law will take effect 1 Jan. 2026. While China already regulates specific AI issues, such as algorithms, deepfakes, generative AI, AI labeling and ethics, these amendments represent the first time AI governance falls within the scope of a major national law.
Beyond AI, the amendments also strengthen data protection obligations in critical information infrastructure sectors and impose enhanced cybersecurity requirements relating to supply chains and data breaches. Penalties for noncompliance have been significantly increased.
Since the second quarter of 2024, China has gradually relaxed compliance requirements for cross-border data transfers. The regulatory approval and filing processes have been simplified and accelerated, and certain transfer scenarios are now exempt from formal cross-border data transfer legal mechanisms.
In 2025, the Cyberspace Administration of China, the country's top data regulator, issued a series of question-and-answer guidance documents to address common challenges faced by businesses when conducting cross-border data transfer impact assessments and preparing regulatory approvals and filings.
At the local level, Free Trade Zones have been used as regulatory sandboxes to test innovative approaches aimed at promoting AI, advancing autonomous driving, improving logistics, developing pharmaceuticals and enhancing financial services. Local regulators in Beijing, Shanghai, Hainan, Tianjin and Hangzhou, among others, have released local rules, policies, whitelists and negative data catalogues to further ease cross-border data transfers. This trajectory is expected to continue in 2026.
Further collaboration is also anticipated among data regulators in Guangdong, Hong Kong and Macau to facilitate data transfers within the Greater Bay Area, which covers nine cities in Guangdong Province as well as the Hong Kong and Macau Special Administrative Regions. A model contract for data transfers within the Greater Bay Area was already released with five pilot projects rolled out in 2025. More rules and policies are expected to further enable dataflows and explore building the Greater Bay Area into a "special data zone" in 2026.
Compliance obligations relating to "important data" remain a major challenge for businesses, largely due to the lack of clear and comprehensive rules. While regulators in certain sectors have begun to develop catalogues of important data, a regulatory vacuum persists for many industries. Industry authorities have signaled plans to issue further rules and guidance, making this an area businesses should closely monitor.
On the enforcement front, multiple Chinese regulators have overlapping authority and supervisory power over data protection, AI and cybersecurity; they have shown little hesitation in pursuing investigations and issuing penalties. In 2025, national and local authorities conducted multiple enforcement campaigns targeting mobile apps, e-commerce platforms, financial services, transportation, consumer-facing businesses, cross-border data transfers, data breaches and cyber incidents.
These areas, along with AI compliance, are expected to remain enforcement priorities in 2026. Regulators are likely to focus on the use of AI for misinformation and illegal purposes, intellectual property protection in training datasets for large language models, and compliance with mandatory algorithm and LLM review and filing requirements, where applicable.
According to China's AI Plus Action Plan issued by the State Council in August 2025, the country aims to deeply integrate AI across economic activity and daily life. China is expected to adopt an agile and pragmatic approach, balancing AI innovation with governance and risk management. In 2026, national, local and industry regulators are likely to issue further AI-related rules, standards and guidance, providing businesses with clearer benchmarks and best practices.
Contributor: Luis-Alberto Montezuma, CIPP/C, CIPP/E, CIPP/US, CIPM, FIP
In 2025, Congresswoman Maria Fernanda Carrascal re-introduced Bill 214/2025C to align Colombia's personal data protection system with the EU GDPR. Meanwhile, the Ministry of Commerce and the Ministry of Science, Technology and Innovation proposed Bill 274/2025 to amend the territorial scope of Law 1581 of 2012, Colombia's current personal data protection law.
The bills were approved 28 Oct. 2025 by the House Committee on Constitution and forwarded to the House Floor. If passed, the bills, which have been consolidated through Bill 274/2025, will go to the Senate's Committee on Constitution and Justice and then to the Senate floor. Any differences between the House and Senate versions will be resolved by a conference committee, after which both chambers vote on the revised bill.
Before the president's signature, the bill will be subject to prior review by the Constitutional Court and may be declared unconstitutional if found to conflict with provisions of the Constitution. If the bill is not passed by June 2026, it dies.
Bill 274/2025 defines terms such as automated decision-making, neural and genetic data, large-scale data processing and biometric data. The bill establishes contracts, legal obligations and vital interests as grounds for processing personal data while excluding legitimate interests; it also updates rules for data transfers and children's data processing. Bill 274/2025 prohibits controllers from indirectly collecting personal data through computer programs or techniques, including AI, that infer or deduce information from other data. This would prohibit the practice of data scraping in Colombia.
Under the bill, the deputy superintendent for the protection of personal data will continue to be appointed by the Superintendent of Industry and Commerce. However, the position will now have a four-year term and be filled through a public process based on expertise. This aims to ensure independence, efficiency and objectivity, helping Colombia meet the European Commission's standards for adequate data protection.
In 2025, the SIC issued three circulars, including Circular 001/2025 which sets rules for personal data handling within the fintech ecosystem, requiring transparency, compliance with data transfer rules, and data protection assessments. While Law 1581 of 2012 does not mandate disclosure about automated decisions, fintech firms must disclose what information is used, the reasons why, and the effects on individuals. The circular limits the sharing of biometric data with third parties and prohibits data controllers and processors from contributing to centralized biometric databases, except when data processors access and process such data on behalf of the controller.
Circular 002/2025 sets standards for technology transfer agreements involving data processing.
The authority has also adopted the Ibero-American Data Protection Network's Model Contractual Clauses via a circular. Sharing personal data from a controller to a processor abroad is not considered an international transfer in Colombia. However, the circular covers transfers between controllers and to processors in countries lacking adequacy decisions. Use of the RIPD MCCs is optional.
Contributor: León Weinstok
Costa Rica will hold elections in February 2026, so the future composition of Congress remains uncertain. The proposed Personal Data Protection Law, Bill No. 23.097, is expected to be one of the main legislative items under discussion. However, it is unclear whether the new members of Congress will introduce modifications to the bill or draft an entirely new proposal.
If the bill is approved in its current form, organizations should prepare for new data subject rights, legal bases, reporting obligations and cross-border data transfer rules to be enforced by Costa Rica's data protection authority, the Agencia de Protección de Datos de los Habitantes, which is expected to increase its guidance and supervisory activity once the law enters into force.
AI regulation is likely to remain fragmented and highly technical in 2026. Competing bills and sectoral proposals — such as the proposed Law for the Regulation of AI and the Law for the Implementation of AI — are already under discussion, but none appear to have a strong chance of passage. Given the technical complexity of these initiatives and the arrival of a new congressional term, the approval of any AI-specific legislation in 2026 seems unlikely.
Contributor: Krunoslav Smolcic, AIGP, CIPP/E, CIPM, CIPT, FIP
2026 is set to be a defining year for the digital landscape in Croatia, marked by the full implementation of major EU legislative acts and significant national digital transformation initiatives. Driven by the dual pillars of AI governance and enhanced digital responsibility, both the public and private sectors are expected to enter a period of intense compliance focus — one that will inevitably reshape the local privacy framework.
Privacy and digital responsibility: A mandate for transparency
The mandatory adoption of new national e-governance standards will drive the most tangible shifts in digital responsibility. Starting 1 Jan. 2026, the introduction of Fiskalizacija 2.0, a comprehensive digital ledger system mandating real-time e-invoicing and digital archiving for all value-added tax payers, will require widespread use of digital certificates and electronic signatures.
This transition, though primarily tax driven, significantly elevates the baseline for data security, data integrity and business-to-business digital transparency across Croatia's economy. Companies will have to implement robust internal data handling procedures to manage this new level of digital security.
In parallel, the gradual rollout of the European Digital Identity Wallet based on the Electronic Identification, Authentication, and Trust Services 2.0 framework is expected. Croatia has already begun technical and organizational preparations by upgrading its National Identification and Authentication System infrastructure. Although the timeline for full availability depends on EU-level project schedules and national implementation decisions, the shift toward user-controlled identity attributes will place additional pressure on data minimization practices and transparency requirements across all digital services.
AI governance: The operational reality of the AI Act
In 2026, the overarching theme for AI governance will be the necessary preparation and initial compliance with the EU AI Act, which enters full application 2 Aug. 2026.
Croatia is developing a multi-layer supervisory model for the AI Act. Draft national implementing legislation envisages a distribution of oversight responsibilities among several key players: the Personal Data Protection Agency, which will focus on the intersection of AI and personal data; general and specialized ombudsmen for children, gender equality and individuals with disabilities to ensure protection of vulnerable groups; the State Electoral Commission; and the Agency for Electronic Media.
This distributed enforcement model reflects a clear national intent to ensure comprehensive oversight across all high-risk sectors, including employment, health care, public services and elections.
Addressing the biggest regulatory challenges
Despite the clear regulatory framework, Croatian businesses face two significant hurdles in 2026 that will test the capabilities of both the private sector and regulators.
First, low institutional readiness is a significant challenge. Surveys indicate a vast majority of companies are not yet fully aware of the AI Act's specific classification and documentation requirements for high-risk AI systems. The primary challenge is translating the theoretical act into an operational governance framework, specifically developing internal AI use policies, conducting mandatory fundamental rights impact assessments, and securing technical competencies for oversight of high-risk AI systems. The knowledge gap could lead to early compliance failures.
A further challenge lies in the unresolved question of liability for harm caused by autonomous systems. Discussions are underway regarding potential adjustments to criminal and civil liability frameworks, particularly in sectors such as autonomous mobility, but concrete legislative solutions have not yet been finalized. The key issue for 2026 will be clarifying how responsibility is allocated between developers, importers, deployers and AI system operators.
For Croatia, 2026 will be defined by the simultaneous push for national digital efficiency and the intense, foundational work required to establish a secure, rights-respecting AI regulatory environment as mandated by the EU. The focus will be on transitioning from legislative adoption to the operational reality of compliance across the economy.
Contributor: František Nonnemann
In the Czech Republic, the end of 2025 was marked by major developments in the field of cybersecurity. On 1 Nov. 2025, the Cybersecurity Act entered into force, transposing the NIS2 Directive. In parallel, the Critical Infrastructure Act, which implements the Critical Entities Resilience Directive, is taking effect in stages. These new laws significantly expand the scope of obligated entities required to ensure the continuous availability of their services and systematically manage cybersecurity, information security and resilience.
All implementing regulations under these laws are expected to be adopted in 2026. Regulated subjects will therefore have to fully apply and update their internal frameworks, aligning cybersecurity and operational resilience systems with existing data protection processes. Ensuring coherence between personal data protection, cybersecurity and resilience obligations will become a key compliance priority.
In 2026, a new national law implementing the EU AI Act is also expected to be adopted. The Czech Republic is likely to follow a minimalist approach, introducing no additional obligations for developers, providers or deployers of AI beyond those required under the EU regulation.
The Act on the Digital Economy is another important legislative initiative in progress. This law will regulate the dissemination of commercial communications, the activities of information society service providers, and the recognition of organizations for data altruism. The act will also serve to implement provisions of the EU's Data Governance Act and Digital Services Act.
Among other changes, the proposal introduces new rules for sending electronic commercial communications. For instance, it will limit such communications to a two-year period following the last interaction with a customer. The draft also defines mechanisms for content removal and data disclosure requests addressed to digital service providers, specifies certain aspects of dispute resolution procedures, and regulates access to data held by very large online platforms and very large online search engines. Finally, the act will create a public register of recognized data altruism organizations.
Taken together, these legislative changes mark the continued integration of EU digital, cybersecurity and AI regulatory frameworks into Czech law. 2026 is expected to be a year focused on implementation, compliance and alignment across the key domains of data protection, cybersecurity and digital governance.
Contributor: Niels Torm, AIGP, CIPP/E, CIPM, FIP
Throughout 2025, numerous AI applications emerged, raising significant questions regarding privacy, ethics and intellectual property rights.
In Denmark, particular emphasis has been placed on addressing the challenges and risks associated with AI adoption. These developments have heightened public awareness and led to the launch of several initiatives, which may result in increased regulation for certain sensitive sectors.
Legislators recognize that the rapid advancement of AI presents unique ethical, IP and privacy challenges to lawmakers and societies alike, and they are looking for ways to balance its development with the public's rising concern. According to many experts and legislators, 2026 will be the year Denmark must decide, as a country, how it wants to handle AI.
The growing amount of AI-generated content across social media platforms has amplified worries about diminishing trust in authorities and, ultimately, in societal cohesion. Addressing these concerns is crucial for politicians as declining confidence in official news sources can foster unrest and societal division. Leaders remain vigilant, observing similar trends in other countries where distrust in media has catalyzed societal fragmentation.
In response to growing public concern, companies that are undertaking or advancing their AI initiatives are expected to place greater emphasis on the ethical and responsible use of AI. This increased attention will inevitably drive a stronger focus on AI governance as a strategic means of managing these challenges.
For privacy professionals, concerns related to AI use are expected to drive privacy initiatives in both the private and public sectors. As a result, there will likely be heightened demand for professionals with expertise in both privacy and AI governance, making certifications in these areas increasingly valuable throughout 2026.
Contributor: Pedro Cordova
Ecuador is on track to approve legislative advances in personal data protection and AI regulation in 2026.
Throughout 2025, the Superintendency of Personal Data Protection intensified its regulatory activities. Having approved its institutional regulatory plan, the authority issued several key instruments, including the Risk Management and Impact Assessment Guide, standard contractual clauses, the regulation on imposing sanctions for personal data protection infringements and the statute detailing data protection officer appointments and roles.
For 2026, a new regulatory plan is expected to address matters still pending development through secondary regulations and resolutions, such as establishing technical security standards, issuing guidelines for sectors processing sensitive data, determining retention periods, establishing the National Personal Data Registry for international transfers, and designating countries with an adequate level of data protection. These measures will enable the SPDP to provide clearer operational guidance and facilitate organizational compliance.
In parallel, the normative development of AI governance and digital responsibility has progressed at a slower but deliberate pace. In 2024, three legislative proposals addressing AI regulation were submitted to the National Assembly. Among these, the bill for the Organic Law for the Regulation and Promotion of AI was formally admitted for consideration and is currently undergoing technical analysis through expert roundtables. The bill is expected to advance toward its final debate and potential approval in 2026, thereby establishing Ecuador's first comprehensive legal framework for the development and responsible use of AI.
The proposal seeks to define AI systems in a manner consistent with the EU's AI Act and to adopt a risk-based approach that balances innovation with the protection of fundamental rights.
2026 is projected to be a year of regulatory consolidation, characterized by increased activity from the SPDP and the potential enactment of a law that lays the foundation for the development of AI in Ecuador.
Contributor: Isabelle Roccia, CIPP/E
2026 will see a continued focus on competitiveness and sovereignty in the EU. The European Commission is expected to deliver tangible results to address the recommendations for improving European competitiveness in the Draghi report. The simplification efforts launched in 2025 are part of this approach, aimed at alleviating the compliance burden for small and medium-sized enterprises and streamlining the EU digital rulebook, among other goals.
The impact of these proposals will depend on two factors: how Brussels calibrates competitiveness and simplification objectives while safeguarding its value-based system, and how it navigates global geopolitics regarding AI, digital trade, dataflows and platform regulation.
Simplification will not mean heavy deregulation. The Commission's objective will be to simplify the application of rules in practice and streamline their interplay rather than drastically lowering the volume of rules. Its proposals touch on data, cybersecurity, privacy and AI but will be fiercely negotiated for months. Some member states will also continue to push issues onto the agenda to trigger European action — for example, children's online safety.
Despite a simplification agenda, Brussels will continue to initiate policy and laws impacting digital governance. Several laws will undergo a review — for instance, the Cybersecurity Act and Digital Markets Act — which could introduce sovereignty requirements and expand the initial scope of application.
New initiatives are also expected to trickle down to professionals' desks, including data sharing and access, cloud service provisions, consumer online protection and dark patterns with the Digital Fairness Act; data retention; cross-border enforcement of the EU GDPR; child sexual abuse material; and product liability.
The Commission will continue to use its enforcement competencies under the DMA, DSA and AI Act. In parallel, coordination among the patchwork of agencies involved at the national level will only tighten, creating an urgency for clear and predictable cooperation mechanisms.
The implementation of existing laws will remain a Gordian knot for organizations and regulators alike. The landscape will remain complex. Some laws still require full implementation, including the AI Act and the NIS2 Directive, designation of authorities under the AI Act, and additional guidelines and standards for the DMA, DSA, and AI Act. Organizations, authorities and courts will increasingly grapple with questions that require them to be more knowledgeable, skilled and thoughtful than ever.
Contributor: Eija Warma-Lehtinen, CIPP/E
Finland's data protection authority, the Office of the Data Protection Ombudsman, has been granted additional funding to significantly increase its resources. This is partly due to criticism regarding the DPA's lengthy processing times, caused by the increased number of notifications and complaints following the EU GDPR's entry into force.
In September 2025, the DPA issued two decisions imposing administrative fines on financial sector operators for various data security deficiencies identified in their operations. In the coming year, the level of adequate data security will be thoroughly evaluated and examined in light of these two cases. Both cases have been appealed to the Helsinki Administrative Court.
For the first time, the DPA published an inspection plan for the next three years. Inspections are particularly targeted at personal data processing that individuals cannot directly monitor themselves. Organizations that process this kind of data are primarily found in the public administration sector.
The Office of the Data Protection Ombudsman is handling its first so-called "consent or pay" case, in which a media company has started applying this practice on its websites. It will be interesting to see how the case progresses and when, and what kind of, decision will be issued.
As regulation has increased, the DPA has received several new types of supervisory tasks. In the coming years, it will have to further adapt its staffing and learn how new regulation affects personal data processing activities.
Data Protection Ombudsman Anu Talus was elected for a new five-year term. In addition to this role, Talus is the Chair of the EDPB.
Contributor: Cécile Martin
In 2026, France is expected to accelerate the regulation of AI use within companies.
Indeed, as employees increasingly use AI, courts are issuing a growing number of decisions on the conditions for implementing this new technology in the workplace.
In this regard, as announced with its 2025-28 strategic plan, France's data protection authority, the Commission Nationale de l'Informatique et des Libertés, intends to continue its work to provide legal certainty for stakeholders and promote AI that respects individuals' rights.
To this end, in addition to the education and health care sectors, the CNIL intends to continue its studies and establish recommendations on the use of this new technology in the workplace.
While the EU AI Act has already established safeguards in this area, the CNIL is expected to provide additional guidance on the use of AI in the workplace soon.
Contributor: Ulrich Baumgartner
The wheels continue to turn in Germany with notable developments in data protection law expected to carry on in 2026.
First and foremost, Germany will likely see a change in the structure of data protection law oversight.
Currently, the country's supervisory system is structured federally; each of the 16 German states has its own data protection authority. There is also a Federal Commissioner for Data Protection and Freedom of Information, the Bundesdatenschutzbeauftragte or BfDI, which is competent for supervising public authorities of the federal government as well as providers of postal or telecommunications services.
Following a decades-long debate over the centralization of the system of data protection supervision in Germany, the current government seems poised to follow words with action. In its coalition agreement, the federal government — which took office in March 2025 — set the goal of centralizing data protection supervision at the BfDI. It remains unclear whether and to what extent the state supervisory authorities will become obsolete or what competences they may retain.
While there is no draft law amending the German national data protection laws in that regard yet, 2026 will likely bring some material changes.
As a harbinger of broader centralization efforts, the government advanced a draft law to implement the EU Data Act into national law. The draft law centralizes supervision over any processing of personal data in the context of the Data Act at the BfDI, thereby depriving the state supervisory authorities of the competence provided for in Article 37(3) of the Data Act.
Although neither the national implementing law for the Data Act nor the corresponding national law for the EU AI Act has been formally adopted, it is already clear that the Federal Network Agency, the Bundesnetzagentur, will be the primary supervisory authority for both regulations. Back in 2024, the agency was appointed as the primary supervisory authority for the EU DSA.
This will propel the BNetzA — so far primarily responsible for telecommunication, postal, energy and rail services — to become the new "super authority" for supervision over both the EU Data Act and AI Act, as well as the DSA.
On the cybersecurity side, Germany will finally adopt a national NIS2 implementation law. Among other things, the law will introduce a three-stage reporting system for security incidents.
Regarding substantive law, the government's ambition to adopt a new dedicated Employee Data Protection Act seems to have lost momentum. While the prior government adopted a draft law on employee data protection, the new government has so far shown little desire to move the draft forward.
Contributor: Alexandra Athanasious, CIPP/E
In 2026, Greece is expected to implement a number of legislative acts, including its supplementary law to the EU Data Governance Act. Law 5188/2025 will establish the framework and set out conditions for the reuse of data held by public sector bodies that are legally protected — particularly personal data or data protected by intellectual property rights — by regulating the applicable procedure through a single information point.
Further, the expected supplementary law will establish data intermediation services and data altruism organizations to support and facilitate the provision of data by the data subject or holder.
The supplementary law for the EU Data Act will include the designation of the competent supervisory authority, outline its procedures and operational framework, and set the administrative fines that may be imposed in cases of violations.
Other legislative developments will concern the field of AI. These specifically include the establishment of the competent supervisory authority for the EU AI Act and the implementation of the newly established Special Secretariat for AI and Data Governance. The special secretariat will be responsible for opening public-sector data, which, if nonprotected, will be provided automatically. If protected, the public-sector data will be provided through application programming interfaces upon demand.
In addition, the secretariat will be responsible for designing and implementing the national strategy for AI and data, coordinating involved stakeholders, serving as a hub of expertise and capacity-building in AI and data, developing AI projects and providing expertise to bodies wishing to develop AI projects where needed.
At the same time, the National Cybersecurity Authority, established under Law 5086/2024, is expected to gradually ramp up its work, responding to requirements by issuing decisions related to risk management and ensuring the smooth continuity of the market.
In terms of data protection supervision and enforcement, the head of the Hellenic Data Protection Authority stepped down in the summer of 2025 and has been temporarily replaced by the authority's deputy president. For this reason, the authority has not defined its strategy or action plan for 2026; it is unlikely to undertake major initiatives for market regulation or supervision.
Overall, significant developments are expected in legislation, especially in terms of AI and data regulation.
Contributor: Daniela Meza and León Weinstok
Guatemala continues to lack a comprehensive data protection framework, relying instead on constitutional guarantees, freedom of information rules and scattered sector-specific provisions. These instruments provide only limited safeguards for individuals and do not define clear obligations for data controllers or mechanisms for cross-border dataflows.
Historically, data protection efforts in Guatemala have mainly focused on personal data held and processed by the public sector. The Guatemala Constitution and the Law on Access to Public Information set out basic principles for how public institutions collect and disclose personal information. However, these provisions were not designed to address the complexities of today's digital environment where both public and private actors process increasing volumes of personal data. As a result, the private sector operates in a regulatory vacuum, creating legal uncertainty and fragmented compliance practices.
Looking ahead to 2026, data privacy and emerging technologies are expected to return to the legislative agenda. Several new data protection bills have recently been introduced before Congress. These bills aim to recognize fundamental rights of data subjects — including the right to informational self-determination — and to establish rules on consent, impose administrative fines for misuse of personal data, define cross-border data transfer mechanisms and regulate automated decision-making that affects individuals.
In parallel, discussions on AI governance are gaining traction. Although Guatemala does not yet have an AI-specific law, recent policy dialogues emphasize the need to regulate AI systems' transparency, accountability and ethical use, particularly within the public sector and digital financial services. However, a regulatory framework in this regard is not expected to be established soon as the protection of personal data and cybersecurity-related issues are being prioritized in the legislative agenda.
Contributor: Ali Ordonez and León Weinstok
Honduras still does not have a specific law dedicated to regulating privacy matters. Legislation such as the Consumer Protection Law and the Law on Transparency and Access to Public Information provides only a limited regulatory framework.
In practice, companies tend to apply international data privacy standards as best they can in order to commercialize and operate in regional markets. Such practices should be examined in light of the Constitution of the Republic of Honduras, which also regulates the right to honor, intimacy and secrecy of communications.
In 2013, a draft Personal Data Protection Law sought to guarantee privacy and the right to informational self-determination for individuals. However, this draft was never approved by the National Congress of Honduras. Rights derived from the processing of personal data are mainly protected by the Constitution of the Republic of Honduras.
To date, the National Congress has not officially announced any discussion of a new law.
In 2018, there was an attempt to pass a law dealing with cybersecurity and protective measures against acts of hate and discrimination on the internet. However, the proposal did not pass the draft stage in the legislative chamber and remains on hold.
The use of AI is not regulated in Honduras, and no possible regulation has been considered yet.
Contributor: Wilfred Ng
Hong Kong's first cybersecurity legislation, the Protection of Critical Infrastructures (Computer Systems) Ordinance, took effect 1 Jan. 2026, imposing organizational, preventive, and incident reporting and response obligations on designated critical infrastructure operators across different essential service sectors and infrastructures.
The newly established Office of the Commissioner of Critical Infrastructure is the primary regulatory and enforcement authority. Working alongside sectoral regulators — such as banking and telecommunications regulatory authorities — the commissioner has the power to designate critical infrastructure operators, monitor compliance with the ordinance, issue codes of practice, and serve as the primary authority to which security incidents must be reported.
In the concluding months of 2025, the government finalized the much-anticipated codes of practice for the relevant essential services sectors that will serve as practical guidance for compliance.
Regarding data protection, no timeframe has been reported for amending the Personal Data (Privacy) Ordinance to bring it closer in line with international standards. Previously, the Office of the Privacy Commissioner for Personal Data proposed amendments such as establishing a mandatory data breach notification mechanism, granting the PCPD authority to impose administrative fines and introducing direct regulation on data processors.
To the extent the PDPO introduces a mandatory breach notification regime in the future, critical infrastructure operators regulated under the new cybersecurity law should take note of the overlapping reporting requirements that may be triggered by data breaches and security incidents under the two regulatory frameworks.
The PCPD has had an eventful year implementing a series of AI-related initiatives since publishing the "Artificial Intelligence: Model Personal Data Protection Framework" in June 2024. The publication was followed by a compliance check, conducted between February and May 2025, of 60 local organizations' implementation of the framework.
In March 2025, the PCPD published the "Checklist on Guidelines for the Use of Generative AI by Employees" which aims to assist organizations in developing internal generative AI use policies in compliance with the PDPO. The PCPD joined 20 other data protection authorities in signing the "Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-protecting AI" in September 2025 at the 47th Global Privacy Assembly.
With the continuing prevalence of generative AI technologies, the PCPD is expected to provide more sectoral or use-case-specific guidance for carrying out AI-assisted data processing activities, particularly in areas concerning data analytics and profiling.
Contributor: Tamas Bereczki, CIPP/E, and Ádám Liber, CIPP/E, CIPM, FIP
Ongoing EU AI Act implementation
In late 2025, Hungary's Parliament and government adopted the implementing legislation of the EU AI Act, officially known as Act LXXV, designating the competent authorities pursuant to Article 70 of the AI Act.
With the adoption of the legislation, the National Accreditation Authority will act as the notifying authority; the Ministry for National Economy will serve as the sole market surveillance authority and single point of contact in Hungary. The ministry will also implement the national AI regulatory sandbox with an expected deadline of August 2026.
The AI Act implementation legislation will also establish detailed procedural rules for supervisory authorities and will establish the Hungarian AI Council, which will be tasked with issuing guidelines and opinions regarding the implementation of the EU AI Act.
NIS2 cybersecurity audits
Due to regulatory delays, the first NIS2-related cybersecurity audit deadline was extended to 30 June 2026 in Hungary. However, with over 3,000 entities affected and few accredited auditors available, audit capacity may be insufficient, raising concerns that many organizations could miss the deadline and face potential action from Hungary's cybersecurity supervisory authority, the Supervisory Authority for Regulated Activities.
Delayed Data Act implementation
Although the EU Data Act became applicable 12 Sept. 2025, the legislature has yet to announce the designation of the competent authority and data coordinator in Hungary, pursuant to Article 37 of the act. With no draft proposals currently available, Parliament is expected to adopt the implementing legislation sometime in 2026.
New rules for online financial services contracts
New legislation aiming to strengthen consumer protection in the case of financial contracts concluded at a distance is expected to become applicable in June 2026. The new act is the implementing legislation of Directive (EU) 2023/2673 of the European Parliament and Council; it repeals the current Act XXV of 2005, which governs distance selling contracts for services in the financial sector.
The new legislation aims to strengthen consumer rights in the digital space, update the legislative framework, remove unnecessary burdens on service providers and consumers and provide for equal conditions on competition between financial services providers.
Contributor: Pranav Rai
India's tech legislation is set for major shifts in 2026. After years of debate, the country’s first comprehensive privacy law — the Digital Personal Data Protection Act — is moving from policy design to active enforcement.
The detailed Digital Personal Data Protection Rules, released in November 2025, lay out a phased rollout — over the next 12–18 months — of strict compliance requirements. Regulators are empowered with hefty penalties — up to INR2.5 billion (approximately USD27 million) per breach — to ensure firms toe the line.
In 2026, businesses will focus on compliance, appointing data protection officers, auditing data practices and deploying certified consent manager services. The newly created Data Protection Board of India is taking shape to investigate complaints and penalize violations, signaling that India's era of lax oversight is ending and digital privacy rights, affirmed by the Supreme Court in 2017, are now backed by real consequences.
AI governance will follow a cautious, "light-touch" path. India has no immediate plan for an AI-specific law, leaning instead on guidelines and existing laws. In November 2025, the government released India AI Governance Guidelines — a comprehensive but non-binding framework promoting ethical, transparent AI use while explicitly aiming not to throttle innovation. The guidelines lay out principles like fairness, accountability and "do no harm" but do not impose new legal mandates.
Officials argue that many AI-related risks can be handled under current statutes. For example, a malicious deepfake is essentially misrepresentation, an infraction already punishable under the Information Technology Act's fraud provisions. To tackle emerging concerns, the government is tweaking policies rather than writing new legislation. Notably, in late 2025 the IT Ministry proposed mandatory labeling of AI-generated content on social media — requiring platforms to clearly flag deepfake images, videos and audio so users know when content is synthetic.
This targeted draft rule to curb misinformation exemplifies India's pragmatic approach. Unless a major AI incident forces a rethink, 2026 will see regulators continuing to favor soft governance and industry self-regulation for AI rather than sweeping new laws — a contrast to the more heavy-handed AI regulations in the EU.
Cybersecurity and tech regulation remain a work in progress. The much-anticipated Digital India Act, a proposed overhaul of the country's 23-year-old IT Act, is still in draft form and wasn't passed in 2025. Officials have indicated there is no urgency to rush this omnibus law, noting that existing frameworks address many issues for now.
Instead, the government is pursuing piecemeal measures to strengthen cyber governance in the interim. In practice, this means leaning on and refining current laws: tightening intermediary rules, issuing sector-specific security directives, and using the new DPDPA and updated penal code to punish cyber offenses. The recently proposed deepfake labeling rules are a prime example, leveraging an amendment to the IT Rules, 2021 rather than waiting for a big, new act.
Likewise, other cyber initiatives are likely to advance within the present legal setup. A long-delayed National Cybersecurity Strategy is also expected to finally be unveiled in 2026, setting a policy blueprint for critical infrastructure protection and coordinated responses to threats.
India's legal trajectory in 2026 balances bold implementation with cautious policymaking. Data protection will dominate the agenda as the DPDPA's stringent norms kick in, forcing a cultural shift in how organizations handle personal data. AI governance will be guided by principles and light oversight, signaling an intent to foster innovation under watchful eyes rather than impose premature restrictions. The absence of an immediate, all-in-one cybersecurity law will be offset by incremental reforms and proactive policies to address pressing risks.
It will be a year of consolidation and measured progress — one in which India begins putting its digital laws into action, even as it fine-tunes the rules for the next chapter of its tech-driven future.
Contributor: Glenn Wijaya
Indonesia's personal data protection framework has been overseen by the Directorate General of Digital Space Supervision, an agency established under the Ministry of Communication and Digital Affairs. Pursuant to Presidential Regulation No. 174 of 2024, the DG-DSS formulates and implements policies on digital space supervision and personal data protection. The DG-DSS has served as the interim authority enforcing the Personal Data Protection Law until the independent Personal Data Protection Agency is established.
A key legal milestone was reached in 2025 through Constitutional Court Decision No. 151/PUU-XXII/2024, which reaffirmed personal data as a constitutional right requiring maximum state protection. The court reviewed Article 53(1) of the PDPL concerning the obligation to appoint a personal data protection officer and ruled that the conjunction "and" must be read as "and/or." This interpretation ensures that the listed criteria — public service processing, large-scale monitoring, or large-scale processing of specific data — operate as independent and alternative grounds, strengthening compliance obligations and legal certainty.
The court emphasized that data protection is integral to the constitutional right to personal security and privacy under Article 28G(1) of the 1945 Constitution. The court's clarification aligns statutory language with constitutional guarantees and reinforces Indonesia's evolving personal data protection framework.
As of November 2025, the government entered the final stage of preparing the implementing regulation of the PDPL, the Government Regulation. According to Executive Secretary of the DG-DSS Mediodecci Lustarini, the draft completed interministerial harmonization and was submitted for presidential approval by Komdigi through the Minister of State Secretariat. Enactment is expected soon, marking a critical step toward full enforcement.
Once established, the independent PDPA will assume supervisory functions of the DG-DSS, focusing on compliance, enforcement and the protection of digital rights. These developments indicate an accelerated transition from legislative groundwork to institutional implementation.
Further digital governance reform continues through the inclusion of the Bill on Amendments to the PDPL and the Cybersecurity and Resilience Bill in the 2026 Priority National Legislation Program, signaling continued modernization of Indonesia's data protection regime.
On the AI front, Komdigi finalized the national AI roadmap, which will soon be enacted through a presidential regulation. The roadmap provides guidance for safe, ethical and sustainable AI development supported by a forthcoming regulation on AI security and safety. Vice Minister of Communication and Informatics Nezar Patria stated the policy aims to balance innovation with risk mitigation. Developed through 21 public consultations involving over 400 stakeholders, the roadmap emphasizes accountability, transparency and respect for creators' rights across sectors such as health care, education, finance and transportation.
Contributor: Dan Or-Hof, CIPP/E, CIPP/US, CIPM, FIP
On 14 Aug. 2025, Israel entered a new era in data protection. The coming into force of Amendment 13 to the Protection of Privacy Law marked a decisive shift, modernizing the country's privacy framework, partially aligning it with global standards while continuing its strong focus on data security. The reform not only updated statutory obligations but also granted the Protection of Privacy Authority sweeping enforcement powers, setting the stage for a year of assertive regulatory activity.
2026 will be the year when the PPA's enhanced authority is felt across Israel's landscape. The regulator now holds the ability to impose substantial financial penalties, issue administrative orders, and initiate criminal investigations for breaches of the law. Organizations can expect rigorous supervision, especially in areas involving large-scale or sensitive data processing, and the use of advanced technologies, including AI.
A key feature of Amendment 13 is the mandatory appointment of data protection officers. The requirement, clarified in draft guidelines published just weeks before the amendment took effect, applies broadly to public bodies, data brokers and any entity whose core activities involve systematic monitoring or processing of sensitive data on a large scale.
The PPA's interpretation, as set out in the guidelines, expands the scope of organizations required to appoint a DPO, emphasizing independence, expertise in Israel's privacy laws, knowledge in information technology and cybersecurity and direct access to senior management. In 2026, a surge in DPO appointments is expected as organizations adapt their governance structures to meet these new expectations.
The PPA's guidelines also signal a stricter approach to compliance. Consent mechanisms must be granular, explicit and tailored to the context with layered notices and clear options for individuals to control their data. The regulator's broad interpretation of the law means organizations must reassess their privacy practices not only to avoid sanctions and class actions, but to align with evolving standards of transparency and accountability.
AI regulation is developing in parallel. While Israel has no intention of enacting a horizontal AI statute in the foreseeable future, the PPA has taken the lead by publishing draft guidance on personal data processing in AI systems. These guidelines clarify that privacy law applies throughout the AI lifecycle — from data collection and scraping to model training and deployment — requiring organizations to conduct risk assessments, ensure transparency and implement safeguards against bias and discrimination. Sectoral regulators, such as the Supervisor of Banks and other governmental entities like the Israel National Digital Agency, have also issued their own positions, contributing to a multi-layered regulatory environment.
Looking ahead, 2026 will be defined by active enforcement; more litigation, including class actions; organizational adaptation; and the gradual emergence of a coordinated approach to privacy, cybersecurity and AI governance.
Contributor: Poojan Bulani, CIPP/E, and Kate Colleary, CIPP/E, CIPM, FIP
Ireland's Data Protection Commission reached its full complement of three commissioners in 2025 with the appointment of Niamh Sweeney, who joined commissioners Des Hogan and Dale Sunderland. Sweeney's appointment coincided with the DPC's expectation of expanded responsibilities, including significant market surveillance authority for certain high-risk AI systems, particularly those used in law enforcement and biometric applications, beginning in 2026.
The DPC will assume a more prominent role in AI regulation in 2026. In September 2025, Ireland designated 15 National Competent Authorities under the EU AI Act, becoming one of the first six member states to reach this milestone. The DPC is expected to play a significant role as one of these designated authorities, particularly where AI systems intersect with data protection and fundamental rights.
In 2025, the DPC continued its pattern of significant enforcement actions. The most substantial fine was imposed in February 2025 on TikTok — totaling 530 million euros — for violations related to the lawfulness of transfers of European user data to China and transparency requirements. The decision addressed TikTok's failure to adequately assess applicable law in China and to verify that supplementary measures and standard contractual clauses were effective in ensuring data protection equivalent to EU standards.
The DPC's approach to AI reveals the most about its evolving strategy. In March 2025, LinkedIn informed Ireland's DPA of its intent to train proprietary generative AI models using the personal data of LinkedIn members in the EU and European Economic Area. Following detailed review and extensive engagement, the DPC identified multiple risks and issues, leading LinkedIn to adopt significant changes including improved transparency notices, reduced data processing scope and enhanced protections for users under the age of 18.
This interventionist approach signals a shift from reactive enforcement to collaborative pre-deployment engagement with AI developers. This is a model the DPC has used with Meta and other large language model providers. The question for 2026 is whether this model will become the norm. Organizations can expect continued early-stage scrutiny of AI systems.
Cybersecurity regulation will also advance significantly in 2026. While Ireland missed the October 2024 transposition deadline for the NIS2 Directive, implementation is expected by early 2026; enforcement is anticipated to begin in 2026.
As Ireland positions itself at the forefront of responsible AI governance in Europe, organizations that proactively engage with the DPC's pre-deployment framework may find themselves better positioned to not only meet regulatory requirements, but also lead in an era where privacy-respecting innovation becomes a competitive advantage. Against this, providers of LLMs may be wary of voluntarily attracting regulatory scrutiny.
The question isn't whether 2026 will bring heightened scrutiny but whether organizations will seize this moment to demonstrate that cutting-edge technology and robust data protection can advance hand in hand.
Contributor: Rocco Panetta, CIPP/E
For Italy, 2026 will likely unfold with enthusiasm and responsibility, but also uncertainty in the realm of the data economy and AI.
Enthusiasm
Italy is entering a new era of compliance. There are new interpretations of the EU GDPR to be implemented and tested, both in relation to well-established concepts — such as that of personal data — and concerning the application of the legal framework to AI systems.
Moreover, the AI Act's provisions will become largely applicable in 2026. Organizations operating in Italy will need to be adequately prepared, ensuring timely — and, where possible, proactive — compliance. Italy has also enacted its own national legislation on AI, Law No. 132/2025, which establishes a framework of key guiding principles and addresses several relevant sectoral issues through dedicated provisions. The law also identifies new national regulatory authorities for AI, the Agency for Digital Italy and the National Cybersecurity Agency, without prejudice to the powers of Italy's DPA, the Garante.
The law entered into force 10 Oct. 2025; organizations operating in Italy will be required to comply with its new obligations and regulatory requirements. In addition, the national AI law will be further refined and supplemented throughout 2026 via the adoption of implementing decrees and secondary legislation.
As a result, entities active in the economic ecosystem will face a multilayered and evolving regulatory environment in the fields of the data economy and AI — an undoubtedly exciting challenge.
Responsibility
Both private and public entities will need to confront these new challenges with a strategic and operational vision. The most effective approach will be one that recognizes the competitive potential of regulatory compliance, striking a balance between technological innovation and the protection of individuals' fundamental rights and freedoms.
In this context, the role of data economy professionals and data protection officers will be even more crucial.
Uncertainty
In 2026, it will become apparent whether — and to what extent — the anticipated regulatory simplification promoted at the EU level will produce effects at the national level. This is a development worth closely monitoring because it may have significant implications for legal certainty, market dynamics and national competitiveness.
Both the AI Act and the GDPR are at stake. Every step towards simplification is welcome, provided it does not lead to deregulation devoid of proportionality.
Contributor: Hiroyuki Tanaka
It is highly likely a bill amending Japan's data privacy law, the Act on the Protection of Personal Information, will be proposed in 2026 since one was not submitted in 2025.
On 9 Jan. 2026, the Personal Information Protection Commission published "Policy for System Reform Regarding the So-Called Triennial Review of the APPI," which includes the following points:
- Relaxing certain formal consent requirements. For example, under the current law, provision of personal data to third parties and the collection of sensitive personal information from data subjects generally requires the data subjects' consent — even when the purpose is to create statistical information. The proposed amendment seeks to relax these rules. When data is used solely for creating statistical information — including AI development that can reasonably be regarded as a form of statistical creation — and it is ensured the data will be exclusively used for the creation of such information, data subjects' consent would no longer be required for the provision of personal data to third parties or for collection of publicly available sensitive personal information.
- Relaxing, in some cases, the notification obligation regarding personal data breaches. The current law, which requires notification to the data subject when reporting to the PPC is necessary, would be revised so that notification to the data subject would not be required when there is little likelihood the protection of their rights and interests would be impaired, even without notification.
- Introducing regulations regarding children's personal information. For example, when the data subject is under 16, obtaining consent and providing required notices must, in principle, be directed to the minor's legal guardians. In addition, a request for the suspension of use of retained personal data may be made without the need to demonstrate any unlawful conduct concerning personal data processing of data subjects under 16. A provision will be introduced mandating that priority consideration be given to the best interests of children in the handling of their personal information.
- Revising obligations relating to the proper handling of personal data entrusted to contractors. Specifically, an explicit statutory obligation shall be imposed on contractors not to handle personal data beyond the scope necessary to perform the entrusted services. If a contractor does not determine the means of handling personal information and the entrusting party and contractor agree on all aspects of the means of handling and the measures necessary for the entrusting party to ascertain the status of handling by the contractor, the obligation shall, in principle, be exempted with respect to that contractor.
- Prohibiting the improper collection and use of personally referable information — such as a cookie identifier — that enables communication with a specific individual.
- Strengthening regulation concerning facial feature data and similar data.
- Mandating verification of the identity of the recipient and the purpose of use when personal data is provided to a third party under the opt-out scheme.
- Introducing an administrative fine system for certain APPI violations. The administrative fine amount shall be equivalent to the monetary or other financial benefit the business operator gained from the relevant act or would gain by discontinuing it. The applicable acts are limited to certain specified obligations, and the conditions for imposing administrative fines are also prescribed.
- Enhancing the effectiveness of recommendations and orders.
- Expanding the scope of criminal penalties and implementing harsher penalties.
A bill to amend the APPI based on this policy was expected to be promptly addressed in the ordinary Diet session beginning in January, but the Diet's schedule has become uncertain with the prime minister's plan to dissolve the House of Representatives and appeal for a snap general election.
If an amended law were to be enacted in 2026, it is highly likely it would come into effect in 2027 or 2028.
Contributor: Mugambi Laibuta, CIPM
Kenya's 2026 data and technology governance landscape will be defined by transition, regulation and heightened electoral sensitivity. The aftereffects of the Worldcoin judgment, the implementation of the Virtual Asset Service Providers Act, 2025, and the end of the current data commissioner's term will converge just as Kenya gears up for the 2027 general elections.
The Worldcoin decision will continue shaping Kenya's position on high-risk biometric and crypto-linked data operations. The High Court at Nairobi's findings against large-scale data harvesting without proper data protection impact assessments, valid consent or transparent dataflows will set the compliance tone for 2026. Any new digital identity or AI-based system introduced ahead of the elections will be measured against the Worldcoin precedent, especially regarding voter profiling, digital registration and surveillance.
Simultaneously, the Virtual Asset Service Providers Act will begin full enforcement. Crypto exchanges, blockchain firms and token platforms will need to demonstrate data protection, cybersecurity and anti-money laundering compliance before licensing. This will mark Kenya's emergence as a leading jurisdiction in Africa, integrating data privacy with financial technology regulation and aligning its framework with global virtual asset governance standards.
To close compliance gaps highlighted by the courts, the Office of the Data Protection Commissioner will likely issue new guidance notes on biometric data, data protection officers, closed-circuit television, AI and cross-border data transfers. The anticipated review of the Data Protection Act could introduce a restructured ODPC governance model. With a new data commissioner expected at year-end, continuity and institutional identity will be under scrutiny.
The Computer Misuse and Cybercrimes (Amendment) Act, enacted in 2025, gives the government and courts more power to block or remove websites and online content linked to illegal or extremist activity. While this may improve cybersecurity, it may affect privacy and data protection. Authorities could demand access to user data, communication records or platform logs to identify offenders or verify harmful content; this potential action raises concerns about surveillance and the misuse of personal information. Internet platforms and service providers will need to share or process user data more frequently to comply with court orders. Without strong data-handling safeguards, these measures could expose users to privacy violations and erode trust in Kenya's digital space.
As Kenya enters the 2027 election cycle, issues such as voter data use, political profiling, targeted advertising, misinformation tracking and the protection of journalists' and activists' data will dominate the privacy agenda. Thus, 2026 will be a pivotal year, balancing regulatory maturity, political transition and the protection of democratic digital rights.
Contributor: Vincent Wellens and Yoann E. A. Le Bihan, CIPP/E, CIPM, FIP
2026 will mark a turning point for Luxembourg's data regulatory landscape, as most of the EU AI Act's provisions become fully applicable 2 Aug. 2026 and data-related acts — such as the Data Act and the Data Governance Act — are expected to be further implemented at the national level.
The EU AI Act will reshape the governance of AI systems, introducing strict obligations for "high-risk" applications in sectors such as finance, health care and public administration. Luxembourg has already designated its national data protection authority, the National Commission for Data Protection, as the main supervisory authority for AI Act enforcement — thereby significantly expanding its role.
Under its broader mandate, the CNPD will act as Luxembourg's market surveillance authority, operate AI regulatory sandboxes and serve as the national point of contact for AI-related oversight. Its responsibilities will include assessing high-risk AI systems, ensuring compliance with transparency and human oversight obligations, and monitoring potential violations of rights. The CNPD's enforcement activity is therefore expected to rise sharply, especially in the financial sector where the use of AI tools for risk and fraud management is already very common.
Other converging frameworks, such as the Data Act, NIS2 Directive, and DGA, are expected to further complicate compliance efforts, requiring an integrated approach across data governance, cybersecurity and AI ethics. The DGA has especially fueled discussion about the transfer of data between public authorities and entities and the implementation of the so-called "once only" principle in the public sector, one of the key data topics in the governmental coalition program. Under this principle, instead of asking the concerned citizen, an administration should first check whether the requested data can be obtained from another administration that already holds it.
In parallel, the CNPD is expected to pursue its existing domestic agenda, including ensuring compliance with video surveillance rules, safeguarding the independence of data protection officers, and maintaining registers of processing activities. These priorities reflect ongoing challenges under the EU GDPR and the preparation of Luxembourg's landscape for the forthcoming AI regulatory regime.
Overall, 2026 is expected to bring a more holistic approach to regulatory compliance and the transformation of the CNPD from a traditional data protection authority to a dual data and AI regulator with a broader scope of operation.
Contributor: Poh Teck Lim, AIGP, CIPP/A, CIPP/E, CIPM, CIPT, FIP
The Personal Data Protection (Amendment) Act 2024 took effect in 2025, marking a significant milestone in Malaysia's efforts to enhance its privacy landscape. Looking ahead to 2026, the country's privacy scene will likely undergo further consolidation and refinement, driven by the need to align with international standards.
Enhanced enforcement and breach reporting
The revised breach penalty and mandatory breach reporting requirements under the PDPA Amendment are likely to result in increased scrutiny and accountability for organizations handling personal data. The first enforcement actions under these revised requirements may occur, serving as a deterrent to non-compliant businesses.
AI and data protection: A new era of collaboration
In 2026, the Department of Personal Data Protection is expected to collaborate closely with the National AI Office. The office was established to spearhead AI development and integration across various sectors in Malaysia and to develop AI frameworks and guidance for key industry sectors that prioritize data protection and privacy.
A pivotal year
In conclusion, 2026 will be a pivotal year for Malaysia's privacy landscape. With the enforcement of new regulations, collaboration on AI initiatives and a focus on data protection, the country is poised to make significant strides in protecting personal data and promoting digital trust.
Contributor: Gabriela Espinosa Cantú, CIPP/US, CIPM
Mexico's privacy regulatory landscape underwent a significant transformation in 2025.
A new data protection authority, the Secretariat of Anti-Corruption and Good Governance, was established following the dissolution of the National Institute for Transparency, Access to Information and Protection of Personal Data. In March, new data protection laws for both the private and public sectors were enacted: the General Law on Transparency and Access to Public Information, the General Law on the Protection of Personal Data Held by Public Sector Entities and the Federal Law on the Protection of Personal Data Held by Private Parties.
While the core principles and obligations remain largely consistent with the previous legal framework, it is expected that the regulations may introduce additional or expanded requirements. There is also a possibility that the new authority will review and propose amendments to the current laws once it has fully settled in its responsibilities.
Several new legal initiatives in Congress intend to advance a legal framework to regulate the use of AI, strengthen privacy and data protection and address digital rights challenges in the context of technological developments and their risks. The proposed reforms reflect a growing concern with the potential misuse of AI and the intent to incorporate ethical principles around its use.
Integration of AI in public security
The proposal introduces the use of AI within public security institutions to support the investigation and analysis of organized crime. Applications would include biometric recognition, financial network analysis, predictive risk assessment and automated criminal investigation systems.
Legal framework for AI governance
Several initiatives propose granting Congress the authority to enact new federal laws to regulate the design, development and use of AI systems. These initiatives include ensuring that AI operates under principles of fairness, transparency and privacy, as well as establishing a regulatory authority for its implementation.
Digital rights, children's protection and cybersecurity
One initiative would classify cyber harassment as a felony, with criminal penalties proposed for the unauthorized creation or distribution of AI-manipulated content. Another proposal seeks to establish the government's obligation to ensure secure internet access and to prevent AI-driven digital violence against minors, protecting their privacy, dignity and security.
Copyright
An initiative proposes recognizing the validity of works created with the assistance of AI systems, provided there is substantial and creative human participation in their development. Works created exclusively by AI without meaningful human input or those that replicate or imitate preexisting works would be excluded from such protection.
The Senate created a Commission for Analysis, Monitoring and Evaluation on the Application and Development of AI in Mexico, which has held several sessions to review different initiatives. However, there's still no clear direction on which of these will move forward through the legislative process and become effective laws.
Overall, Mexico's legislative agenda shows a growing interest in regulating AI and digital rights, but progress remains slow and uncertain. The current political agenda in Mexico is largely driven by the ruling party, so these topics will be pushed into law when they become a priority.
Contributor: Abraham Mouritz, AIGP, CIPP/E, CIPP/US, CIPM, CIPT, FIP
2025 saw many developments that will further shape the legal data landscape in the Netherlands for 2026.
TikTok decision
On 7 Oct. 2025, the Amsterdam Court of Appeal rendered a key class action decision against TikTok under the Dutch Act on the Settlement of Mass Damages in Collective Actions, also known as the WAMCA regime.
The court set aside the District Court's prior finding that claims for non-material damages were unsuitable for a collective action due to their individual nature. The Court of Appeal ruled that the claims for non-material damages are, in fact, based on the same factual violations and legal standards and therefore can be assessed collectively.
It also affirmed court jurisdiction over all non-GDPR-based claims and ruled that the lead claimant could represent minors who would join the action later. This ruling is highly relevant for other collective actions in the Netherlands, particularly those concerning privacy and consumer rights against Big Tech companies.
The Court of Appeal's decision to allow collective claims for non-material damages will significantly lower the admissibility hurdle for future data privacy WAMCA cases. It is consistent with the same court's September 2024 ruling in the joint case against Oracle and Salesforce. The ability to claim all damages — both material and non-material — in a single collective procedure is seen as a way to promote judicial efficiency; it will likely lead to an increase in WAMCA class actions seeking compensation for privacy violations.
NIS2 and e-Evidence Directives
EU directives require implementation in national laws and, therefore, leave more room for country-to-country differences when compared to other EU regulations. The transposition of the e-Evidence Directive and the NIS2 Directive into law will present distinct challenges and nuances.
The Netherlands did not meet the original 2024 transposition deadline; the new Netherlands Cybersecurity Act is now expected to enter into force in the second quarter of 2026. This delay places the country among a group of slower-moving EU states, offering a temporary grace period for companies newly within scope.
In 2026, the Cybersecurity Act's supporting legislation will likely mandate stricter, sector-specific security standards in areas like health care and life sciences. This focus on sector-specific stringency will result in higher compliance costs for certain entities in the Netherlands compared to other EU countries opting for minimum compliance.
The e-Evidence Regulation, related to the e-Evidence Directive, is set to apply 18 Aug. 2026, with the directive transposition deadline in February 2026.
The transposition of the e-Evidence Directive will heavily emphasize procedural safeguards in 2026. A more rigorous national approach to protecting legal professional privilege driven by strong domestic concerns and court rulings on covert production orders is expected. This emphasis on legal professional privilege protection will make the judicial process a stronger guarantor of fundamental rights in cross-border evidence requests, uniquely positioning the country against EU member states with less rigorous or delayed filtering mechanisms.
Contributor: Daimhin Warner, CIPP/E
Biometrics, or, more specifically, the regulation of biometric processing, will be the key theme for 2026 in New Zealand. While the rest of the world grappled with how best to regulate emerging technology, New Zealand quietly got on with it, taking concrete steps to do just that.
On 21 July 2025, the Office of the Privacy Commissioner issued the Biometric Processing Privacy Code 2025, which regulates how organizations use biometric technologies to collect and process biometric information.
The code is law, issued by the privacy commissioner under Part 3 of the Privacy Act 2020. It came into force 3 Nov. 2025, but organizations already using biometric systems have a nine-month grace period to comply with the new rules. That period ends 3 Aug. 2026.
The code broadly applies to all organizations in the public and private sector that collect biometric information for biometric processing. It adapts the Privacy Act's existing 13 information privacy principles to specifically apply to biometric processing. The code alters the IPPs in some very material ways, intended to address the specific risks of biometric processing, including the introduction of necessity and proportionality assessments.
The OPC has published extensive guidance on the code and, specifically, the completion of proportionality assessments, which it accepts will be new territory for many organizations. However, the OPC has indicated it will be closely watching compliance, using its inquiry powers where necessary to investigate systemic non-compliance with the code. Privacy Commissioner Michael Webster has stated he will be looking for evidence and data to back up proportionality assessments.
It will be worth watching New Zealand in the coming year as both organizations and the OPC grow accustomed to the regulation of biometrics, offering a potential source of guidance, precedent and expertise in this area.
On 1 May 2026, a new IPP3A, inserted by the Privacy Amendment Act 2025, will come into force. Information Privacy Principle 3 of the Privacy Act currently applies only when an organization collects personal information directly from the person concerned. Information Privacy Principle 3A expands the transparency obligation to also apply to the collection of personal information from third parties, referred to as "indirect collection."
While not as consequential as the Biometric Processing Privacy Code, this amendment — required in part to ensure New Zealand could maintain its EU adequacy status — is important to the OPC, which will be monitoring compliance in 2026 and beyond.
It is unlikely 2026 will see any further major legislative changes, which is fortunate, as organizations will have more than enough to contend with managing the new transparency obligations and getting biometrics use in line with the new rules.
It is possible, however, that the coming months will see some policy maneuvering — a necessary precursor to law change — related to the contentious topic of child access to social media, drawing on Australia's experience.
Contributor: Juan Castillo and León Weinstok
In 2026, Nicaragua is expected to continue developing its personal data protection framework, primarily grounded in Law No. 787, the Law on the Protection of Personal Data, which was enacted in 2012.
The law serves as the principal legal framework regulating the processing of personal data in the country. Despite its comprehensive provisions, practical enforcement remains limited, as a national data protection authority has yet to be established. Sanctions for non-compliance are administrative in nature and mainly address violations such as data falsification and security breaches.
Complementary sectoral and constitutional provisions support this framework but remain fragmented, leading to incomplete and inconsistent protection — particularly regarding cross-border data transfers and the clear delineation of data controller responsibilities. Ongoing discussions within the telecommunications, banking and health care sectors reflect a growing awareness of data governance challenges.
Looking ahead, Nicaragua's digital transformation — driven by the expansion of digital financial services and e-government initiatives — is accelerating the need for coherent public policy on privacy and cybersecurity. Although comprehensive legislative reform is not anticipated in 2026, the continued expansion of these digital services may catalyze the creation of a national data protection authority and the adoption of stronger compliance mechanisms.
Collectively, these developments would enhance Nicaragua's alignment with modern data protection principles and better position the country for more effective privacy regulation in the near future.
Contributor: Ridwan Oloyede, CIPP/E, CIPM, FIP, Precious Nwadike, CIPP/E, CIPM, and Victoria Adaramola, CIPP/E
After a year defined by record enforcement and the landmark Nigeria Data Protection Act General Application and Implementation Directive, the country's data protection landscape in 2026 will be characterized by a maturation of oversight and a flurry of complex legislative activity. The era of passive compliance is ending as organizations shift toward operationalizing rules and navigating a multiregulator environment.
The Nigeria Data Protection Commission is set to intensify its enforcement, leveraging the GAID. Its actions suggest a move toward a more risk-based strategy, prioritizing high-risk sectors and imposing stricter measures on the private sector, potentially leveraging "naming and shaming" tactics. Businesses also anticipate crucial guidance on ambiguous areas, particularly international data transfers. This clarity is expected as Nigeria operationalizes its recent admission as an associate member of the Global Cross-Border Privacy Rules Framework.
Legislative activity will be a critical focal point. Finalization is expected of the long-touted National AI Strategy; progress is anticipated on various AI-specific bills such as the Digital Sovereignty and Fair Data Compensation Bill, National Digital Economy and E-Governance Bill, National AI Commission (Establishment) Bill and four consolidated AI bills.
Concurrently, two proposals to amend the NDPA — targeting foreign social media platforms and app developers — are expected to advance alongside a new proposal for a cybersecurity law and the potential reintroduction of the Digital Rights and Freedom Bill.
A significant push for child online safety is anticipated with expected progress on the Child Online Access Protection Bill, amendments to the Cybercrimes Act and new rules from the Nigerian Communications Commission, including its draft Internet Code of Practice and Standard Operating Procedure for Child Online Protection.
Additional layers are expected as domain and sector-specific regulators and court interpretations contribute to the framework. 2026 will challenge organizations to move beyond check-box compliance and embed data governance as a core strategic function.
Contributor: Lia P. Hernández Pérez
In 2025, the use and regulation of AI was the topic marking legislative predictions on data protection and privacy in the Republic of Panama.
Three draft bills have been presented by congressmen from three different parliamentary groups and are awaiting discussion in the National Assembly. None of the drafts has been approved or reached the committee stage for debate.
The first bill was presented in August 2024 by a group of deputies led by Deputy Manuel Cheng; it establishes the legal framework, promotion and development of AI. Presented in early July 2025 by Deputy Ernesto Cedeño, the second bill seeks to regulate AI. The third bill, presented through the citizen participation procedure, establishes a framework law for the protection of citizens' digital rights in relation to AI.
For its part, the National Secretariat for Science, Technology and Innovation, with the support of the Georgia Institute of Technology, has initiated a process to develop the first National AI Strategy. This initiative involves training various sectors of the country and developing the strategy through multisectoral working groups to address the needs of all segments of society in relation to the use of this emerging technology.
A group of university students has also presented a bill to establish a digital protection system and restrictions on access to social media information for children and adolescents. The National Assembly's Commission on Women, Children, and the Family has begun to debate the bill, which is currently before an ad hoc commission for changes before continuing the legislative process.
Four years after it entered into force, Law 81, also known as Panama's Personal Data Protection Law, continues to be passively applied by some sectors. The insurance and reinsurance sector has adapted its data protection regulations after the Superintendence of Insurance and Reinsurance issued Agreement No. 5 in August 2025, establishing privacy guidelines.
As predicted in the IAPP Global Legislative Predictions 2025, Panama finally adopted Law 478, which was approved and published in the Official Gazette. The law brings national legislation into alignment with the Budapest Convention on Cybercrime, addressing offenses such as online extortion, the dissemination of non-consensual intimate images and online identity theft.
Here's hoping that Panama will finally have some legal text, a national strategy or law on AI by 2026.
Contributor: Cecilia Abente Stewart
In November 2025, Congress finally passed the long-awaited Law No. 7593/2025 on Personal Data Protection in Paraguay, a milestone in the country's efforts to build a modern privacy framework. The law was subsequently promulgated by the Executive Branch, completing the legislative process; the law will enter into force in 2027.
The new law draws inspiration from the EU GDPR and establishes fundamental principles for data protection, including a comprehensive set of rights for data subjects, obligations for data controllers and processors, rules governing international data transfers and the establishment of an independent supervisory authority.
Law No. 7593/2025 comes into force in two years, giving both the public and private sectors time to prepare for compliance. The coming months will be crucial for establishing the new authority, drafting the regulatory decree and helping organizations gradually adapt to the new requirements. Awareness and training initiatives are expected to expand significantly throughout 2026.
After several years of debate, Paraguay is on track to join the growing group of countries in Latin America with comprehensive data protection laws, reinforcing the country's commitment to accountability and good governance.
Contributor: Catherine Escobedo Paredes
Peru enters 2026 amid significant political and security turbulence. In October 2025, Congress unanimously impeached the sitting president following a severe crisis marked by rising crime and gang violence. General elections are scheduled for 12 April 2026.
Compounding these challenges, a series of major data breaches in 2025 exposed critical weaknesses in Peru's digital infrastructure. The April 2025 breach of the National Registry of Identification and Civil Status compromised 146,199 biometric facial recognition images and was followed months later by the publication of the initial electoral roll containing the home addresses of more than 27 million Peruvians — an alarming incident in a context of rising extortion and organized crime.
Earlier breaches affecting a major bank and a telecommunications provider intensified pressure on lawmakers. Combined with the electoral cycle, these failures are driving renewed legislative urgency around data protection, cybersecurity and emerging technology governance.
The new Regulations of the Personal Data Protection Law took effect in early 2025, introducing the obligation to designate data protection officers with phased deadlines based on company size — the first of which fell on 30 Nov. 2025. However, the government only published detailed directives on DPO designation — including risk-based frameworks and phased timelines — on 31 Dec. 2025, leaving early adopters to designate data protection officers without complete guidance. Significant gaps remain around breach notification procedures and the status of existing DPO designations.
Meanwhile, Ministerial Resolution No. 342-2025-JUS, published in October 2025, should become binding in early 2026, finally providing the National Authority for Personal Data Protection with a predictable framework for calculating fines related to violations. As organizations face their first compliance audits under these new rules, regulatory clarifications addressing implementation challenges are likely to emerge throughout 2026.
Institutional reform debates are also intensifying. Draft Bill 12926/2025-CR, which would create a powerful Digital Transformation Superintendency, absorbing the ANPD and other digital governance functions, is widely viewed by stakeholders as overly broad and institutionally overreaching. The proposal is expected to face significant debate and is unlikely to advance as drafted.
On the cybersecurity front, Peru's long-delayed proposed cybersecurity law, Bill 9906/2024-CR, faces an uncertain path despite being mandated by the 2026–28 National Cybersecurity Strategy. Poor coordination with recently enacted regulations — the PDPL Regulation and the Digital Trust Framework Regulation — creates overlapping incident-notification and governance requirements. Significant amendments are likely before any plausible adoption in 2026.
AI-related legislative activity remains intense, with more than 35 bills under discussion, placing Peru among the regional leaders in AI legislation.
A highly controversial proposal to establish a national DNA database for all criminal suspects and prisoners appears poised for passage despite weak safeguards and international criticism. Similarly, bills being discussed by the Justice and Human Rights Commission of the Congress would modify the Criminal Procedural Code to incorporate biometric fingerprint and facial verification as police identity control tools. One proposal includes a modification to Article 13 of the PDPL allowing such processing without consent. Both measures reflect growing public pressure for security-focused solutions that override privacy safeguards.
Other proposals, such as a draft bill on AI supervision and transparency, remain stalled and are unlikely to advance in 2026 amid legislative overload and overlapping initiatives.
Overall, legislative activity in 2026 is expected to focus on implementing the newly clarified PDPL regulations.
Contributor: Irish Salandanan-Almeida, CIPM
The Philippines' data protection authority, the National Privacy Commission, welcomed a new privacy commissioner and chairman, Johann Carlos S. Barcena, and a new deputy commissioner, Jose Sutton Belarmino II, in 2025. Barcena is a career public servant and governance professional, having previously served as executive director at the Governance Commission for Government-Owned or Controlled Corporations. His extensive work in institutional reform and oversight offers a strategic foundation for advancing the country's data protection framework.
Meanwhile, Belarmino brings extensive expertise and deep institutional knowledge in data privacy and security to the NPC. He previously served in the public sector in the Philippines and the private sector in Europe, focusing on global privacy compliance.
In October 2025, the NPC issued a cease-and-desist order against a technology company for collecting biometric data in exchange for monetary incentives. While the NPC supports technological innovations, it emphasized that, "The promise of technology must never come at the expense of human dignity and data protection."
As AI continues to emerge as a top priority across industries, the NPC reminded organizations "that data privacy is a fundamental right, and that innovations involving personal data such as biometric data, must operate within the bounds of lawful, fair, and transparent processing."
Cybersecurity remains a key priority with the NPC urging heightened public vigilance following reports that user data from a financial technology company was exposed on the dark web in the fall of 2025. While the NPC's investigation concluded there was no data breach and that user data remains secure, it issued a warning to individuals and groups engaging in the unauthorized access, sale or distribution of personal data. The NPC stated such acts constitute clear violations of the Data Privacy Act of 2012 and are punishable under the law.
With the proliferation of SMS scams, particularly those using international mobile subscriber identity catchers, or fake base stations that mimic legitimate cell towers, the government and the private sector continue to intensify protection efforts. Organizations have begun leveraging AI, analytics and automation to detect and block scam activities. Amplified awareness campaigns remain key in successfully informing the public about how to protect themselves from scams. A regional approach to intelligence sharing and incident reporting is likewise being considered.
Contributor: Piotr Lada, CIPP/E, CIPP/US, CIPM, CIPT, FIP
In 2026, Poland is expected to see notable developments in data protection, cybersecurity and AI regulations, both through national updates and the implementation of new European frameworks.
In terms of national regulations, Poland's data protection authority, Urząd Ochrony Danych Osobowych, took steps to initiate legislative changes that may materialize in 2026. This applies to regulations relating to, among other things:
- The role of the DPO in law enforcement agencies.
- The functioning of the National Police Information System and the use of operational control by the police.
- Responsibility for the dissemination of false content in the form of deepfakes and ensuring effective protection of personal data.
- Disclosure of medical records for scientific purposes and access to the personal data required to be entered in the Register of Entities Performing Medical Activities.
- Law enforcement access to data covered by telecommunications protections in the Electronic Communications Law to safeguard privacy and personal data protections.
- A model of universal access to entrepreneurs' personal data stored in the Central Register and Information on Economic Activity.
- Reservation of a minor's national identification number by parents or legal guardians to prevent unauthorized use, such as fraud or identity theft.
- Increasing the level of protection of personal data contained in electronic land and mortgage registers.
In 2026, the Supreme Administrative Court of Poland is expected to rule on a precedent-setting case concerning whether national statutes of limitations — under the Polish Code of Administrative Procedure — can be applied to administrative fines resulting from GDPR provisions. Poland's DPA argues the GDPR does not provide for a statute of limitations on administrative fines and does not delegate the power to introduce one to member states, thus applying national statutes of limitations may violate the principles of uniformity and effectiveness of EU law.
The DPA also requested that the Supreme Administrative Court consider referring a preliminary ruling to the Court of Justice of the EU to ensure a uniform interpretation of EU law.
In 2026, Poland's DPA is expected to publish updated guidelines on data processing in employment, which will help employers ensure compliance with the GDPR and labor laws related to personal data processing during recruitment and employment. The guidelines will consider technological changes, including those resulting from AI.
The planned employment guidelines reflect a broader trend of heightened DPA activity in this area. In 2025, it published guidelines on reporting personal data breaches and data protection impact assessments.
In the European context, Poland is still awaiting adoption of a national law implementing the NIS2 Directive. The government published the ninth version of the draft law amendment in October 2025.
Legislative work also continues on the adoption of national regulations implementing the EU Data Act. The Data Act was directly applicable in all EU member states beginning 12 Sept. 2025, with some exceptions. However, the adoption of the national law will enable its full and effective application in Poland, primarily by creating an institutional framework and a system of sanctions.
In August 2025, the government published a draft law on AI systems, implementing the AI Act, which is already in force in the EU. The draft law specifies, among other provisions, the organization and oversight of the market for AI systems, as well as the procedures addressing non-compliance with national and EU AI regulations.
In summary, Poland is actively working on national legislation to complement and align with the GDPR and other applicable EU legislation and frameworks. The results of these legislative processes are expected to materialize in 2026 — creating new challenges for private and public entities while appropriately addressing personal data protection and compliance in respective areas.
Contributor: Pedro Marques Gaspar, AIGP, CIPP/A, CIPP/C, CIPP/E, CIPP/US, CIPM, CDPO/BR, FIP
Portugal is expected to take a deliberate, forward-looking approach to implementing the EU AI Act. The country will likely observe regulatory developments in other EU markets before consolidating its national framework — much as it did during the early years of EU GDPR enforcement. While the AI Act entered into force 1 Aug. 2024 and will be largely applicable by 2 Aug. 2026, the coming year will be decisive for Portugal's institutional and regulatory readiness.
Throughout 2026, national efforts are expected to focus on building the administrative capacity and interagency coordination required for the AI Act's operational phase. The National Communications Authority has been designated as one of the authorities responsible for AI supervision. Oversight, however, will be shared among several entities, including Portugal's data protection authority, the National Data Protection Commission, and other sectoral regulators designated under Article 77 of the AI Act.
The CNPD is expected to retain a leading role where AI systems involve the processing of personal data or the protection of fundamental rights. Coordination between ANACOM, the CNPD and other regulators will be essential to manage overlaps between the AI Act, GDPR, NIS2 Directive and Digital Services Act.
In parallel, 2026 is likely to mark the first phase of national guidance and enforcement preparedness. Regulators will hopefully publish interpretative materials, sectoral guidelines and possibly launch regulatory sandboxes to support compliant innovation. The National Communications Authority and the CNPD may also collaborate with the National Cybersecurity Centre to harmonize approaches to AI-related risk management, incident reporting and digital resilience. These efforts will shape how Portugal operationalizes the AI Act's risk-based framework and ensure consistent supervision across sectors.
Data protection impact assessments will remain central to compliance for high-risk AI systems involving personal data, embedding privacy-by-design principles into technical development. In parallel, providers and deployers will be required to carry out conformity assessments before market placement, including comprehensive documentation such as model cards, decision logs and risk registers. These mechanisms are expected to become best practices demonstrating accountability, transparency and human oversight in AI governance.
Moreover, Portugal's strategic vision for ethical and human-centric AI, as articulated in the AI Portugal 2030 Vision strategy, is likely to advance through sector-specific initiatives. Key industries such as health care, financial services, urban transformation and sustainable energy are expected to feature prominently, reflecting the government's ambition to position the country as a testbed for responsible AI experimentation under EU-wide standards.
The revised EU Product Liability Directive, already in force, must be transposed by 9 Dec. 2026, reinforcing accountability and consumer protection in AI-driven products and services. The EU's proposed AI Liability Directive, still under negotiation, is expected to complement this framework in the years ahead.
Together, these developments will define Portugal's regulatory scene for AI and digital services as the country transitions from strategic planning to operational enforcement in 2026.
Contributor: Adriana Neagu, AIGP, CIPP/E, CIPP/US, CIPM, FIP
Romania has undertaken several legislative initiatives to regulate areas that have gained traction on the public agenda, including establishing guidelines for AI, preventing the dissemination of harmful or toxic content on social media platforms and regulating children's access to digital content and services.
Among these initiatives, those with the strongest likelihood of becoming law are proposals aiming to enforce stricter regulations for children online, following models in countries such as Germany and the U.K. Two proposals — a draft age of consent law that addresses online service providers and web pages and a draft law on the protection of minors under 18 from harmful content distributed on very large online platforms — are progressing through the legislative process.
The first law grants parents or guardians the right to request that online service providers or website administrators suspend or delete accounts of children under 16 or restrict their access to web pages with harmful content. Harmful content includes commonly recognized categories such as tobacco, alcohol, hate speech and self-harm, but also extends to content interpreted as promoting health-damaging behaviors or otherwise posing a significant risk to children's physical or emotional development.
The proposal targeting VLOPs introduces additional specific obligations, such as implementing active parental control measures, establishing age verification protocols and ensuring transparency in the Romanian language. Furthermore, the law proposes prohibiting the monetization of content featuring minors without the consent of parents or guardians.
An additional proposal has been put forward regarding VLOPs to address the dissemination of illegal content, including material that incites hatred or manipulates major national issues through malicious technological means intended to mislead public opinion. The proposed obligation requires the removal of any illegal content within 15 minutes of its publication by employing specialized identification algorithms. Critics have argued this requirement is nearly impossible to fulfill in practice. Further provisions seek to limit the reach of such posts.
Contributor: Fabiola Uwera
Rwanda continues to cement its status as a leading technology hub in Africa. Following a year defined by the launch of the continent's first AI Scaling Hub and the publication of the National Data Sharing Policy, 2026 will see the government pivot from foundational initiatives to robust operationalization. The focus now turns to embedding AI and data governance across the public and private sectors.
Legislative reform will be a primary driver of this shift. A comprehensive review of the information and communication technology legal framework is expected to be completed, updating provisions that currently lack specific oversight of emerging technologies. This overhaul is anticipated to bring significant changes to legislation governing electronic technologies, cybercrimes and cybersecurity, among others. These updates will likely introduce dedicated provisions for AI and other emerging technologies, ensuring the regulatory environment keeps pace with the digital landscape.
The Data Protection and Privacy Office, housed within the National Cyber Security Authority, is expected to transition from an awareness-building phase to active enforcement. After a year of extensive public consultation, new sector-specific guidance notes are expected to be released. Crucially, the office is poised to enforce strict compliance obligations, including mandatory registration and authorization for cross-border data transfers. It is also expected to clarify data subject rights and set clear breach response protocols.
The establishment of the Responsible AI Office under the National AI Policy signals a move toward agile regulation. Aligned with the 2023 policy, the government has been revising its legal framework and holding consultations to gather firsthand input on ethical AI guidelines under development. In 2026, a new law is expected to be introduced that creates a coordinating authority to oversee AI deployment and align national standards with international best practices. This domestic effort will be bolstered by Rwanda's continued global leadership, building on the success of the Global AI Summit, Africa Declaration on AI, and AI Playbook for Small States developed in partnership with Singapore.
Finally, the continued rollout of the national digital identity project will remain a priority with expected guidance on safeguards to ensure privacy and security.
Overall, 2026 promises to be a year in which Rwanda balances rapid innovation with accountability. By integrating updated regulations with robust ethical frameworks, the country is set to influence not only national policy but also the broader continental approach to responsible digital governance.
Contributor: Muneeb Imran Shaikh
Data privacy
Saudi Arabia issued its Personal Data Protection Law and Implementing Regulations in 2023, and the law was fully enforceable by September 2024. Recent years have been marked by continuous enhancement and issuance of regulatory guidelines to help organizations understand and embed data privacy within their operations.
2026 will be characterized by increased enforcement of the PDPL by the Saudi Data and AI Authority. This includes increased registration of controllers, particularly public bodies and entities processing personal data.
In recent years, the SDAIA has also launched services on its national data governance platform. Entities processing personally identifiable information will likely be required to submit privacy impact assessments and report any data breaches as part of the data protection authority's enforcement function.
With the increased adoption of cloud applications, platforms and infrastructure globally, companies are often compelled to host their data outside their jurisdiction. The SDAIA issued regulations and risk-assessment guidelines on PII transfer outside Saudi Arabia. It is likely the DPA will issue a list of countries and jurisdictions deemed adequate, supporting economic growth and facilitating organizations' ability to conduct a sound transfer impact assessment.
AI governance
Through its Public Investment Fund, Saudi Arabia launched the AI company Humain to develop AI solutions and technologies and to invest in the broader AI ecosystem. This initiative will give a huge boost to AI growth and adoption within the country, meaning associated standards and regulations regarding AI ethics and governance are likely to be developed and made enforceable in the coming years.
It is important to mention that the SDAIA has already published AI ethics principles and controls. With the increased adoption of AI within Saudi Arabia, the SDAIA and other regulators are likely to enhance requirements around transparency, recordkeeping, explainability and auditability of AI systems.
This is expected to prompt various organizations to either create independent functions for AI ethics and governance or empower existing functional units within organizations to establish and implement AI governance.
Contributor: Petar Mijatović
Building on the strategic frameworks adopted in recent years, Serbia is expected to make further progress in the areas of privacy protection, AI governance and digital responsibility throughout 2026.
The Personal Data Protection Strategy for 2023–30 outlines key goals, including the regulation of automated processing of genetic and biometric data and oversight of audio and video surveillance. While substantial implementation of these measures has not yet occurred, more concrete actions are expected in 2026, such as amendments to the Law on Personal Data Protection or the introduction of special regulations addressing emerging technologies.
Public authorities, including the Commissioner for Information of Public Importance and Personal Data Protection, may further engage citizens through media, social networks and open data platforms to promote awareness and compliance.
The AI Development Strategy for 2025–30 establishes goals for responsible AI use, focusing on strengthening institutional and legal frameworks, investing in human capital and supporting technological innovation.
Based on these strategies, 2026 is likely to be a foundational year for AI governance in Serbia. Preparatory work for drafting a dedicated AI law and forming specialized working groups may commence, guided by alignment with EU regulations and United Nations Educational, Scientific and Cultural Organization guidelines. These steps aim to regulate market applications, prevent misuse and promote the ethical development of generative AI.
Digital responsibility is increasingly emphasized in both the AI and personal data protection strategies with human-centric principles — including ethics, human rights, non-discrimination and proactive risk mitigation — guiding AI deployment. In 2026, it is anticipated that Serbia may begin initiatives such as public awareness campaigns, industry guidelines and monitoring tools to foster responsible digital practices. Investments in education and training may further support the development of skills needed to responsibly navigate AI and digital technologies.
Overall, while the exact outcomes cannot be guaranteed, 2026 is projected to be a critical year in moving from strategic planning toward tangible governance actions. By reinforcing privacy protections, establishing robust AI governance structures, and promoting digital responsibility, Serbia is likely to advance its commitment to safe, ethical, and human-centric technological development.
Contributor: Pranav Rai
Singapore's 2026 digital policy agenda signals a calibrated shift toward trust, accountability and global interoperability while preserving its innovation-driven reputation. Expect refinements across privacy, AI governance, and cybersecurity, with legislation introduced only where essential.
The Personal Data Protection Act will continue to be the cornerstone of the country's privacy regime; enforcement powers now allow penalties of up to SGD1 million (approximately USD776,000) or 10% of annual turnover. Although the data portability provisions have long remained dormant, 2026 could mark their activation, bringing Singapore in line with global standards.
The elevation of the Data Protection Trustmark to a national standard underscores certification-based accountability. Cross-border data transfer frameworks will deepen, building on the Association of Southeast Asian Nations-European Union cross-border dataflows initiative. Rather than introducing sweeping new laws, Singapore's Personal Data Protection Commission is expected to issue targeted guidance — such as the Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems — to clarify obligations in emerging contexts like generative AI.
In AI governance, Singapore will maintain its light-yet-vigilant approach. The Model AI Governance Framework for Generative AI and the AI Verify sandbox allow companies to test systems against 11 trustworthy principles mapped to global standards. These voluntary programs are expected to inform technical standards and sectoral guidelines. Legislative intervention will remain narrow, exemplified by the Elections (Integrity of Online Advertising) (Amendment) Act 2024, which bans deepfakes during campaigns.
In 2026, anticipate co-regulatory models that blend voluntary compliance with enforceable norms for high-risk AI applications.
Amendments to the Cybersecurity Act expand oversight beyond critical information infrastructure to include cloud and digital service providers, introducing mandatory incident reporting and codes of practice. A civil penalty regime — with fines up to 10% of turnover — is now operational. The Operational Technology Cybersecurity Masterplan 2024 extends resilience to non-CII sectors such as health care and utilities, signaling broader systemic safeguards.
Singapore is expected to double down on principled regulation. Privacy enforcement will sharpen; AI governance will evolve through co-regulation; and cybersecurity will broaden in scope. The city-state will continue to only legislate where necessary, reinforcing its global reputation as a trusted, innovation-ready digital hub.
Contributor: Lukáš Mrázik, CIPP/E, CIPP/US, CIPT, and Lucia Korba
Slovakia is preparing for a year of significant legislative developments, particularly in the areas of data protection, AI, cybersecurity and related criminal law. These changes reflect both the country's internal priorities and its efforts to align with evolving EU regulations.
While Slovakia's Personal Data Protection Act is expected to be replaced, the impact of the changes is anticipated to be mostly minor and technical. More importantly, a considerable shift is predicted in enforcement practices. The Office for Personal Data Protection has become more active, especially in imposing higher fines. While penalties have traditionally ranged in the lower thousands, a substantial increase in the amounts imposed is expected, signaling a stricter regulatory approach.
2025 saw the implementation of the NIS2 Directive into Slovakia's Cybersecurity Act. This expanded the scope of regulated entities and introduced stricter obligations. In 2026, the National Security Authority is expected to intensify inspections and oversight, focusing on compliance with the cybersecurity law.
AI is another area undergoing rapid legislative evolution. The country is working on a national framework to support the adoption of AI in both public and private organizations. Alongside this, initiatives related to the AI Act, Data Act and DGA are progressing.
Criminal law is evolving to address emerging technological threats. The criminalization of harmful deepfakes is a key development; the legislation targets cases where such content causes substantial damage. In addition, new legislation is being introduced to reshape data-sharing practices with law enforcement in the digital space. These proposals include expanded obligations for sharing data during criminal investigations; implementation is expected within the next year.
Overall, 2026 promises to be a transformative year for Slovakia's digital and legal landscape in the areas of privacy, digital responsibility and AI governance.
Contributor: Armand Swart, CIPP/E, CIPM
South Africa's privacy landscape is set for a pivotal year in 2026 as initiatives from 2024–25 crystallize into binding obligations, sector-specific rules and more stringent enforcement.
Direct marketing reset
South Africa's Information Regulator has issued a guidance note and amended regulations addressing direct electronic marketing, expressly including voice calls. These instruments tighten consent standards and strengthen opt-out mechanisms. They also clarify the respective rights and obligations of organizations and data subjects and signal how the IR intends to enforce these requirements.
Increased access and monitoring
Newly amended regulations to South Africa's data privacy law, the Protection of Personal Information Act, simplify and clarify the exercise of data subject rights. For example, objections to personal data processing may now be lodged free of charge via multiple channels, including hand delivery, post, fax, email, SMS, WhatsApp or telephone.
The IR's eServices portal, launched in 2024, will underpin more visible compliance monitoring into 2026, particularly around information officer registration, breach reporting and access to information obligations.
A prevalent theme: Data breaches
While 2025 saw fewer published enforcement notices for breaches, data breach enforcement is expected to remain prominent in 2026 given the risks involved. The IR's communication executive reported that the IR received 1,607 breach reports between April and September 2025, a 60% increase from 2024. This increase may partly be attributed to the mandatory use of the eServices portal for breach reporting.
Cybersecurity for financial institutions
The financial regulators' joint standard on cybersecurity and cyber resilience — effective June 2025 — applies to banks, insurers, exchanges, retirement funds and investment administrators. Institutions have until Dec. 2026 to implement enhanced cybersecurity measures, including breach notification, incident response, risk assessments and security controls. Increased enforcement and potentially larger fines are anticipated.
Health care data: Critical care required
Draft regulations on processing health care data published in 2025 would apply to insurers, medical schemes, employers and pension fund administrators. A final version is expected in 2026. The draft imposes stricter requirements on the retention and destruction of personal data, cross-border transfers and consent. Critics raised concerns about burdensome consent provisions and the reliance on legitimate interests for processing health care data, which the POPIA does not permit.
Cloud computing
While the government's National Policy on Data and Cloud 2024 primarily reinforces principles in existing legislation — such as the POPIA and the patchwork of cybersecurity laws and directives — its implementation is likely to drive more structured approaches to data residency, sovereignty and public sector procurement. Organizations should expect closer alignment of cloud deployments with the POPIA's cross-border transfer rules, clearer shared-responsibility expectations between customers and providers, and potential requirements for transparency, auditability and incident reporting.
Lacunae: AI and the protection of children
Despite the publication of the National AI Policy Framework in 2024, there was no material movement on AI in 2025. While the absence of regulation may foster innovation, clarity would be welcome. Similarly, domestic efforts to protect children's data and technology use have not kept pace with global advances.
Contributor: Kyoungjin Choi
In 2025, South Korea experienced a period of political upheaval following a presidential impeachment. However, with the inauguration of the new administration, stability was restored and the government placed AI at the top of its national policy priorities, mobilizing all resources to achieve the goal of earning "AI G3 status."
South Korea enacted its own comprehensive AI Framework Act, making it the second jurisdiction in the world — after the EU with its AI Act — to enact a comprehensive regulatory law on AI.
In the field of data privacy, the Personal Information Protection Commission released the "Guidelines on the Processing of Personal Information for the Development and Use of Generative AI" and introduced a "Preliminary Adequacy Review System" to ensure safe data utilization at the planning and development stages of new technologies and services, including AI. This action demonstrates a strong commitment to fostering AI innovation.
At the same time, a series of hacking and personal data breach incidents affected not only credit card companies and major telecommunications companies, but also public institutions. These events brought data security and privacy protection to the forefront as national issues, highlighting their role as the foundation of safety and trust in the AI era.
In response, the government announced a comprehensive pan-ministerial information security strategy outlining key initiatives: Conducting a large-scale security audit of critical IT systems closely linked to citizens' daily lives; strengthening the effectiveness of consumer-centered incident response systems and recurrence prevention measures; enhancing information protection capabilities across both public and private sectors; building an information security environment aligned with international standards while fostering the information security industry, workforce and technologies; and reinforcing a nationwide cybersecurity cooperation framework.
In alignment with this policy direction, the PIPC also announced a strategic shift from post-incident sanctions to preemptive risk prevention. Key plans include strengthening the responsibilities of CEOs and expanding the authority of CPOs, establishing legal grounds for regular inspections of large-scale personal data processing sectors, providing incentives for companies' data protection efforts such as encryption and authentication, and operating the adequacy review system to help data controllers prevent data breaches in advance.
From the perspective of enhancing individuals' data autonomy, the PIPC also plans to expand the data portability right to 10 major sectors, enact a legal framework for the "right to be forgotten," and establish a comprehensive regulatory framework for personal video information.
Regarding enforcement, the commission intends to impose higher fines — potentially of up to 10% of a company's total turnover — on companies with repeated or serious data breaches. The PIPC also plans to introduce criminal penalties for the illegal distribution of personal data online.
Based on developments in 2025, it is anticipated that two potentially conflicting legislative directions will advance simultaneously in 2026: data regulatory innovation laws that support the national G3 initiative and regulatory reinforcement legislation aimed at preventing hacking and data breaches by strengthening accountability and compliance obligations for data controllers.
From the innovation perspective, legislative efforts such as the AI data regulatory sandbox — envisaged to be implemented through amendments to the Personal Information Protection Act, Data Industry Act or AI Framework Act — are expected to be actively pursued. This initiative is designed to permit the use of original personal data in secure environments for AI innovation. In addition, the institutionalization of the Preliminary Adequacy Review System for broader use of pseudonymized data is also anticipated.
Conversely, regulatory tightening is expected to focus on strengthening CEO and CPO accountability, introducing punitive fines and establishing criminal penalties for illegal data trading. Moreover, to encourage corporate investment in data protection, amendments to allow reductions in fines or penalties as incentives may also be introduced.
Ultimately, 2026 will be a pivotal year for South Korea — a year in which the nation must accomplish what might seem impossible: harmonizing its top national goal of achieving AI G3 status with the equally critical task of ensuring the protection of citizens' personal data and privacy as the fundamental safety net of the AI era.
Contributor: Joanna Rozanska, CIPP/E, CIPP/US
Spain's data protection and digital enforcement landscape will remain highly active in 2026. Under its newly appointed president, the data protection authority, the Agencia Española de Protección de Datos, is expected to continue prioritizing cases involving emerging technologies. In 2025, the agency issued extensive guidance on topics such as the European Digital Identity Wallet under the European Digital Identity Regulation and the intersection between AI and data protection. The DPA is now preparing further materials on the use of neurotechnology and neurological data.
The AEPD plans to carry out proactive analyses of high-impact sectors — including health care, digital education, platform marketing to minors, biometric and neurotechnology systems, workplace monitoring tools and digitalized public administrations — to identify risks early and issue preventive recommendations or audits. The DPA has also made clear that it can, and will, act against unlawful AI-related data processing even before the EU AI Act becomes fully applicable. Enforcement in 2025, particularly in deepfake-related cases, already set a precedent for intervention where personal data is manipulated or generated without a lawful basis.
Child and youth protection in the digital environment remains another key area. Spain is advancing a draft bill for the protection of minors in digital environments, requiring online platforms to implement age-verification systems, establishing reporting channels for harmful content, and potentially raising the minimum age to access social media. In parallel, a "sharenting" bill will regulate how parents and guardians share images of minors online. Both initiatives align with the EU DSA framework and reinforce accountability for how platforms handle children's data and digital well-being.
On the legislative side, Spain is lagging behind other countries in transposing the NIS2 Directive. Although the bill was granted urgent status to accelerate adoption — now expected in early 2026 — political fragmentation in Parliament has slowed progress and further delays cannot be excluded.
Likewise, while the draft national AI law has been approved by the government, its final passage through Parliament may face similar timing challenges. At the same time — and in a move that quietly places Spain at the forefront of EU AI governance — the Spanish Agency for the Supervision of AI published 16 practical guidelines on the EU AI Act, establishing some of the earliest interpretative standards for key aspects of the regulation and potentially influencing future supervisory and enforcement approaches across the EU.
Internally, the AEPD intends to adopt an "AI-first" policy across its operations to automate internal processes, improve case management and detect risks earlier. It will also deploy automated tools to analyze privacy policies, cookie configurations and public code or breach databases — making public-facing documentation and technical disclosures increasingly critical from a compliance perspective.
Finally, although not formally endorsed by the AEPD, Spain is expected to align with the European Commission's call for a more flexible interpretation of privacy obligations. This will potentially ease compliance for small- to medium-sized enterprises and signal a shift toward a more balanced, innovation-friendly privacy landscape.
Contributor: Ashwini Natesan
Sri Lanka's comprehensive Personal Data Protection Act, which passed in 2022, was initially expected to come into operation in March 2025. Administrative challenges surrounding its implementation — including appointing the data protection authority's staff and other preparatory requirements — resulted in a subsequent gazette notification rescinding the earlier operational date.
On 30 Oct. 2025, Parliament passed several amendments to the PDPA. Notably, these removed the requirement for an adequacy decision in relation to cross-border data transfers. Under the revised framework, personal data may be transferred outside Sri Lanka where "appropriate safeguards" are met or where any of the other conditions for lawful processing under the PDPA are fulfilled. The amendments also clarify the definition of "public authority," closing previous gaps that left room for interpretive ambiguity.
As per the amendment, the PDPA will come into operation on a date to be announced by the minister in charge; there has been no indication yet of when that will be. The operationalization and implementation of the PDPA could be phased.
The previously appointed Data Protection Authority of Sri Lanka has a substantial mandate ahead. Public advertisements have been issued to fill key vacancies at the DPA; these appointments are anticipated to accelerate the process of operationalizing the act. The PDPA's operationalization comes at a critical juncture as Sri Lanka begins rolling out several major digital initiatives under the Ministry of Digital Economy, most prominently the Unique Digital Identity project.
Once in force, the PDPA will provide citizens with enforceable rights over their personal data and require the state, as a controller, to adopt responsible and rights-respecting data processing practices.
The ministry is also advancing other digital transformation efforts, including GovTech — the government's implementation agency for national digital transformation that replaces the Information Communication Technology Agency of Sri Lanka — and GovPay, a platform that enables online payment for various government-related transactions. In 2026, the ministry can be expected to further advance measures in this regard.
Sri Lanka's National Strategy on AI was released in 2025. Among other things, the strategy places strong emphasis on data governance, committing to the development of a comprehensive national data strategy to strengthen the availability, integrity, quality and protection of AI-ready data. It proposes new data-sharing mechanisms, revitalizes open data platforms and establishes clear guidelines for data management and protection to support evidence-based policymaking and drive private-sector innovation. It remains to be seen how the AI Strategy will move forward.
The ministry is also responsible for the Online Safety Act 2024 and the Online Safety Commission established under it. A public call for comments was issued in 2025 and reforms or amendments to the Online Safety Act can be expected in 2026. The act remains central to ongoing debates on online expression, penalties and intermediary liability. Any reforms will have significant implications for digital rights and platform governance in Sri Lanka.
Contributor: Sofia Edvardsen, AIGP
Sweden enters 2026 with an almost complete regulatory architecture for the EU's digital and AI frameworks. Under a coordinated but decentralized model led by the Swedish Post and Telecom Authority and the Civil Contingencies Agency, the country continues to rely on sectoral expertise, interagency cooperation and institutional independence. The year ahead will test how this tradition of trust-based governance scales across AI, cybersecurity, data and financial regulation.
AI Act — A multiagency model
Following SOU 2025:101, Anpassningar till AI-förordningen (Adaptations to the AI Regulation), and the government's 6 Oct. 2025 press conference, Sweden has a proposed national framework under Article 70 of the EU AI Act. The framework designates the PTS as the single point of contact and lead market-surveillance authority, the Swedish Board for Accreditation and Conformity Assessment as notifying authority, and a network of sectoral regulators — including the Financial Supervisory Authority, the Work Environment Authority, the Transport Agency, the Consumer Agency and Sweden's data protection authority, the Integritetsskyddsmyndigheten, among others — coordinated by PTS. A government bill is expected in the spring of 2026 to enable formal designations by 2 Aug. 2026.
Sweden's decentralized model may also set the tone for its Nordic neighbors. Denmark and Finland are expected to adopt similar sector-based structures, signaling an emerging "Nordic model" of AI governance — coordinated, expertise-driven and rooted in agency independence.
This approach mirrors coordination frameworks under the NIS2 Directive and the DSA. Enforcement will rely on collective institutional responsibility rather than individual liability, reflecting Sweden's constitutional preference for autonomy and proportionality in supervision.
Cybersecurity — NIS2 implementation
The Swedish Cybersecurity Act enters into force 15 Jan. 2026, expanding supervision to 18 sectors. The Swedish Civil Contingencies Agency is the national coordinator and EU contact point while sector regulators retain domain oversight. Sweden thus continues its multiagency approach, emphasizing sectoral expertise and institutional independence over centralization.
Cyber Resilience Act
A government directive proposes adjustments to the implementation of the EU Cyber Resilience Act, including small and medium-sized enterprise support. Implementation is expected to mirror Sweden's NIS2 implementation with national coordination and sector regulators enforcing compliance within their respective domains. A government bill will follow in 2026.
Data and platforms
The EU Data Act became applicable 12 Sept. 2025. Sweden has not yet designated a competent authority, but PTS is the likely candidate given its digital infrastructure and coordination roles. A government ordinance assigning enforcement responsibilities is expected in early 2026.
Under the DSA, PTS serves as Digital Service Coordinator alongside the Swedish Consumer Agency and Swedish Agency for the Media. This cooperative, nonhierarchical setup reflects Sweden's constitutional limits on ministerial control.
Financial sector — DORA
The Financial Supervisory Authority, Finansinspektionen, is the competent authority under the EU Digital Operational Resilience Act. The Riksbank, Sweden's central bank, and Finansinspektionen coordinate on threat-led penetration testing and oversight of critical information and communications technology providers. This activity will intensify through 2026.
Contributor: Stéphane Droxler, AIGP, CIPP/E, CIPM
Switzerland enters 2026 with an agenda shaped by pragmatism and public skepticism. Although highly connected and technologically advanced, the country continues to move cautiously when it comes to state-led digital transformation and data-driven regulation. Three areas are expected to dominate the national debate: the implementation of the new electronic identity, the regulation of digital platforms and the government's measured approach to AI.
A second chance for the e-ID
After a first failed attempt in 2021, the electorate has now narrowly approved the revised Federal Act on Electronic Identification Services, also known as the e-ID Act. This electronic identity, expected to be available in 2026, is intended to provide citizens with a secure, state-issued digital credential to access online public and private services. However, many observers remain doubtful about the federal administration's ability to deliver a reliable and user-friendly system.
The shadow of the electronic patient record — plagued by delays, usability issues and lack of public confidence — still looms large. Whether the e-ID will become a milestone in digital trust or another example of misplaced ambition will depend on the government's capacity to combine transparency, cybersecurity and tangible public value.
Platform regulation on the horizon
The Federal Council submitted a draft law, the Federal Law on Communication Platforms and Search Engines, aimed at strengthening the rights of citizens using digital platforms. This initiative seeks to improve transparency, accountability and fairness in the operations of global online intermediaries.
The challenge is formidable: Switzerland is trying to protect its citizens without antagonizing the dominant U.S. tech players or replicating the EU's DSA — a regulatory framework that would be difficult to enforce considering the domestic market's size.
AI governance through incremental adaptation
Unlike the EU, Switzerland does not intend to adopt a comprehensive AI Act. Instead, the Federal Council favors a sector-specific approach, adapting existing laws where necessary.
Several departments have been tasked with assessing the need for regulatory adjustments within their domains and reporting back by the end of 2026. This deliberate and decentralized strategy reflects Switzerland's traditional reliance on evidence-based policymaking and consensus. It also underscores a belief that overregulation could stifle innovation and weaken the country's competitive advantage in research and applied technologies.
Taken together, these initiatives illustrate Switzerland's enduring preference for balance over speed. Trust, not regulation, remains the cornerstone of its digital policy. Yet in a world where digital infrastructures increasingly determine economic resilience and public confidence, maintaining this equilibrium will be a delicate exercise.
Contributor: Ken-Ying Tseng
Taiwan's privacy framework is entering a new phase. On 11 Nov. 2025, President Lai Ching-te publicly announced a partial amendment to the Personal Data Protection Act; the effective date will be determined later. This marks the first major reform pursuant to the 2022 Constitutional Court decision on health insurance data that requires the government to establish an independent supervision mechanism of its data privacy practices.
The amendment strengthens data governance rules for government authorities and data security obligations for private and public sectors. It requires government agencies to appoint data protection officers, imposes mandatory breach-reporting and documentation duties on both public and private sectors, and introduces fines for non-compliance with the reporting obligations.
The 2023 amendments to the PDPA prescribed that the Personal Data Protection Commission will be the regulator in charge of the PDPA. The PDPC has yet to be established, however. The 2025 amendment emphasizes a transition mechanism: once the PDPC is officially formed in the near future, supervision of private-sector data practices will gradually shift from sectoral regulators to a unified authority within six years. The PDPC will be empowered to issue technical standards, conduct inspections and coordinate data protection oversight across all sectors.
However, the Organization Act of the PDPC — the legal foundation for the commission's establishment — has not been passed yet. This could occur anytime in 2026. Until then, the current multiregulator structure remains in place.
Meanwhile, the PDPA amendment will only take effect upon further government announcement, which is anticipated after the Organization Act is enacted and promulgated and the PDPC is formally established.
Once operational, the PDPC is expected to launch a comprehensive overhaul of the PDPA. The following areas may be subject to future reforms: anonymization and pseudonymization — defining their legal effects and clarifying when data is no longer personal; AI and privacy protection — balancing AI development with privacy protection and introducing accountability and transparency rules for AI data processing; cross-border data transfers — possibly setting stricter limits and proposing relevant requirements and exemptions; and extraterritorial application — possibly extending PDPA obligations to foreign entities processing data of individuals in Taiwan.
In the meantime, the Taiwan Food and Drug Administration, the sectoral regulator for the pharmaceutical industry, has finalized its draft order to prohibit pharmaceutical companies from transferring personal data from Taiwan to China, Hong Kong and Macau. It will take effect 1 Oct. 2026.
According to the order, there are several exceptions to the restriction, one being that a pharmaceutical company would be allowed to transfer personal data to China if, among other conditions, a standard contract — akin to those in the EU — is entered into between the transferring and receiving parties.
While Taiwan's PDPA amendment is an important milestone, the real transformation will unfold after the PDPC's establishment, expected to come in 2026. Companies should anticipate more comprehensive, AI-aware and globally harmonized data protection rules in the years ahead.
Contributor: Nop Chitranukroh, CIPP/A, Thammapas Chanpanich
Thailand's Personal Data Protection Act came into full effect 1 June 2022. The Personal Data Protection Committee has since issued various subordinate regulations.
These include regulations on security measures to be implemented by data controllers and data breach notification requirements. They also include a mandatory obligation to appoint a data protection officer when processing activities require regular monitoring of personal data or systems due to the large scale of personal data; administrative measures; data processors' records of processing activities; data breach notifications; cross-border transfers of personal data, including criteria for adopting binding corporate rules and contractual clauses; and the handling of data subject requests to exercise the right of erasure.
In August 2025, the PDPC announced it had issued eight administrative fine orders across five cases against government bodies, private entities and data processors for inadequate personal data security. Since the PDPA took effect, there have been six cases and nine orders totaling over THB21.5 million (approximately USD684,811). In line with the "zero data leakage" goal of the Office of the Personal Data Protection Committee, all penalties stemmed from formal fact-finding and expert committee deliberations.
Key cases included a breach of a government agency's web app that exposed the personal data of over 200,000 individuals on the dark web. Both the agency and its developer lacked robust security, risk assessments and a data processing agreement. The agency and its developer were fined THB153,120 (approximately USD4,873) each.
A large hospital leaked more than 1,000 medical records during destruction due to poor vendor oversight. Sensitive health data was exposed, and retention rules were not followed. The contractor took records home and failed to report the breach to the appropriate authorities. The hospital and processor were fined THB1.2 million (approximately USD38,184) and THB16,940 (approximately USD539), respectively.
Three private-sector cases in wholesale, retail and online businesses led to fines of THB7 million (approximately USD222,740), THB2.5 million (approximately USD79,575) and THB3.5 million (approximately USD111,370) for inadequate security, failure to notify the PDPC, and, in one case, failure to appoint a DPO. The PDPC has not published full case details or orders.
Furthermore, the PDPC is revisiting the current PDPA and is in an initial stage of proposing amendments that cover various areas such as exempted activities, status and duties of local representatives, interpretation of data controller and data processor, consent and lawful basis principles, criminal data processing, categories of sensitive personal data, and power and duties of the PDPC and the PDPC office.
Enforcement in 2026 is expected to become more active and potentially more serious, meaning organizations should pay closer attention to ensuring compliance with the PDPA. Amendments to the PDPA are also anticipated.
Similar to the EU GDPR, the PDPA also has extraterritorial effect. The subordinate regulation on international cooperation to be issued by the PDPC should clarify how PDPA enforcement against organizations located outside of Thailand will be conducted by regulators.
Silence surrounds the development of sector-specific data protection laws.
Contributor: Jason Grant, CIPM
As a result of Trinidad and Tobago's April 2025 general elections, a new government that campaigned on the promise of stimulating economic growth through AI, digital technology and new media took office.
The government established the Ministry of Public Administration and AI in May 2025. The new ministry has embarked on several initiatives under the National Governance AI Framework to modernize and enhance the delivery of public services and promote responsible AI adoption across the country.
In July 2025, the ministry entered into a strategic partnership with the United Nations Development Programme and the United Nations Educational, Scientific and Cultural Organization to develop and implement UNESCO's Readiness Assessment Methodology and the UNDP's AI Landscape Assessment. A partnership with the UNDP and the Inter-American Development Bank Group created the country's first Open-Source Programme Office.
The Recommendation on the Ethics of AI was adopted by UNESCO member states in 2021, including countries in the Caribbean such as Trinidad and Tobago. The Readiness Assessment Methodology will be completed in conjunction with the UNDP's AI Landscape Assessment, a diagnostic tool that will evaluate Trinidad and Tobago's strategic and technical readiness to adopt AI.
The assessments will result in a national report establishing recommendations, data visualizations and a roadmap to support AI integration across sectors such as health care, education and public services. The ministry will also conduct an AI Landscape Survey among policymakers, AI experts, academics and private sector stakeholders to inform the findings of the assessments.
In August 2025, the ministry, through iGovTT — the National Information and Communication Technology Company Limited — launched "Anansi," a national AI chatbot to assist citizens in navigating access to public services.
It is important to note that these AI-related initiatives are occurring against the backdrop of the Data Protection Act, Chapter 22:04, which, although passed in 2011, remains only partially in effect. Additionally, the legislation is not aligned with data privacy best practices on enforcement. For example, the act does not contain a requirement for a data controller to report a personal data breach to the Office of the Information Commissioner.
As in most data privacy and data protection legislation, the role of the information commissioner forms the nucleus of the act's operational, compliance, regulatory and enforcement provisions. With Chapter 22:04 of the Data Protection Act not fully in effect, the act remains largely inoperable, notwithstanding the few provisions that have been proclaimed.
That situation, however, is expected to change in 2026 given the ministry's advertisement for the information commissioner position this past year. This anticipated appointment, coupled with the AI assessment initiatives, signals that moves are underway to fully proclaim the legislation — all of which augurs well for the development of a robust governance framework within which AI adoption can advance.
Contributor: Furkan Güven Taştan
Following a 2025 that saw the establishment of a new, centralized cybersecurity framework, Turkey is set to enter 2026 with a broader legislative agenda for its digital landscape. While key priorities include completing its long-awaited data protection reform and making its new cybersecurity apparatus operational, attention is also expanding to encompass the governance of AI and the ongoing regulation of online content. These multifaceted efforts point toward an increasingly comprehensive and regulated digital environment.
A key legislative goal for 2026 is the finalization of the second reform package to harmonize the Personal Data Protection Law with the EU's GDPR. According to the presidency's objectives for 2026, the Ministry of Justice is tasked with completing this alignment. This package will address core GDPR principles, such as a risk-based approach and accountability, which were not fully covered in the initial 2024 amendments to the PDPL. The enactment of this package would conclude a multiyear effort to bring Turkey's data protection standards in line with those of the EU.
On cybersecurity, 2025 saw the enactment of the Cybersecurity Law and the establishment of the primary regulator, the Cybersecurity Presidency. With the agency's head appointed in late 2025, the focus in 2026 will shift to institutional capacity building. This includes finalizing the agency's organizational structure and issuing secondary legislation to implement the new law. It will be important to observe how the Cybersecurity Law is enforced as its specific provisions and potentially stricter penalties may position it as a primary legal instrument for IT-related offenses and breaches.
Meanwhile, other digital policy areas are also developing. After several individual proposals were introduced in 2024 and 2025, AI is expected to become a more concrete legislative topic. A parliamentary research commission on AI, established in 2025, is tasked with a comprehensive mandate: to define the steps for harnessing the benefits of AI, create the necessary legal infrastructure and determine measures to mitigate the risks associated with its use. The commission's report, expected in 2026, will likely provide a foundational analysis of AI's impact, addressing key issues such as liability and data governance.
Additionally, the regulation of online content remains a prominent issue. The long-standing Internet Law, in effect for nearly two decades, continues to be a subject of debate. As part of the "11th Judicial Package," there is a renewed effort to amend its controversial Article 9, which concerns the procedure for individuals to request the removal of online content that violates their personal rights. The proposed changes aim to create a framework that can withstand judicial scrutiny, following a previous annulment of the article by the Constitutional Court.
Contributor: Motunrayo Ope-Ogunseitan, CIPP/E, CIPM, FIP
The United Arab Emirates continues to assert its position at the forefront of global technological innovation and disruption, leading the charge in AI adoption across the Middle East.
As the nation awaits issuance of the Executive Regulations under the UAE Personal Data Protection Law, organizations are proactively working to strengthen privacy compliance frameworks and foster deeper consumer trust. Within the UAE's free zones, the Dubai International Financial Centre and Abu Dhabi Global Market have remained steadfast in advancing and enforcing their respective data protection regimes.
In 2025, the DIFC introduced pivotal amendments to its Data Protection Law, including higher administrative fines, a private right of action enabling data subjects to seek redress directly through the DIFC Courts and expanded applicability to include both DIFC and non-DIFC entities processing data within the jurisdiction, irrespective of physical establishment. The amendments also clarified data sharing obligations and introduced other substantive changes aimed at strengthening accountability.
Meanwhile, ADGM enacted the Data Protection Regulations (Substantial Public Interest Conditions) Rules 2025, which permit the processing of personal data without consent for insurance purposes and safeguarding vulnerable individuals. It also introduced updated administrative enforcement procedures to enhance regulatory efficiency.
The UAE made notable progress on AI governance in 2025 by embedding the principles of its Charter for the Development and Use of AI into national policy frameworks and aligning public and private-sector AI initiatives with the ethical values it espouses — such as human oversight, transparency and bias mitigation. These efforts reflect the country's commitment to responsible AI deployment. Notably, the Abu Dhabi government announced its ambition to become the world's first fully AI-powered government by 2027, marking a bold step toward integrating AI into public administration.
Looking ahead to 2026, it is anticipated that the long-awaited Executive Regulations under the UAE PDPL may be issued, activating full enforcement of the law. Should this occur, significant improvements in data processing practices are expected across industries, particularly in high-risk sectors such as telecommunications, health care, hospitality and real estate, which routinely handle large volumes of personal data. Key compliance priorities will include consent management, purpose limitation, cross-border data transfer restrictions and marketing governance.
The UAE Data Office, tasked with overseeing enforcement, is likely to issue sector-specific guidance, initiate compliance audits, and enforce data transfer protocols. These developments are expected to drive increased demand for skilled data privacy professionals and robust compliance infrastructure across the region.
On the AI front, the UAE is poised to continue operationalizing ethical AI principles across both public and private sector projects, with regulatory developments likely to emerge in response to growing adoption. It is also noteworthy that the DIFC has been selected to host the Global Privacy Assembly in 2026 — a prestigious event that will further cement its reputation as a regional leader in privacy governance.
Contributor: Natalia Kirichenko
In Ukraine, personal data protection is governed by the Law of Ukraine on Protection of Personal Data, primarily based on the EU Directive 95/46 and Convention 108+. Although a new draft law aligned with the EU GDPR has been under consideration by Parliament since 2022, its adoption is anticipated only after the lifting of martial law, which remains in place due to the ongoing war with Russia.
The existing law provides a foundational regulatory framework for data protection, covering essential aspects but lacking the comprehensive scope of the GDPR.
Parliament is also actively moving forward with Draft Law No. 14057, a substantial overhaul of the Civil Code of Ukraine concerning the personal rights of individuals, including privacy and data protection. Responding to the rapid digitalization of social relations, the draft law introduces several new rights aimed at safeguarding individuals in the online environment, such as the right to a digital image, to be forgotten, to protection from deepfakes, and to digital privacy — including the protection of a person's "digital footprint."
The draft law also significantly expands the regulatory framework for personal rights that ensure the individualization of a natural person by clarifying existing rights and introducing new ones, including rights to a name, signature, image and voice, as well as biometric and genetic data.
Contributor: Phillip Lee, AIGP, CIPP/E, CIPM, FIP
Implementing UK data protection reforms
The U.K. adopted the Data Use and Access Act in June 2025. This law, introduced by the Labour-led government, reflects a less ambitious set of data protection reforms than those proposed by the previous Conservative-led administration and was intended — among other things — to maintain the U.K.'s adequacy recognition from the EU.
Reforms include specifying "recognised legitimate interests;" relaxing automated decision-making rules; adding exemptions from cookie consent, such as for some types of analytics; and permitting marginally more data transfers — allowing transfers to countries whose data protection standards are "not materially lower" than those in the U.K.
The DUAA will take effect in four stages implemented through various commencement orders. Stage 3, expected by early 2026, implements the main changes to U.K. data protection laws. This stage will be of most direct relevance to privacy practitioners.
Maintaining UK adequacy recognition
The U.K.'s adequacy recognition by the EU was originally due to expire 27 June 2025. However, this deadline was extended to 27 Dec. 2025 to allow the EU time to review the changes to U.K. data protection law introduced by the DUAA.
The European Commission issued its draft adequacy decision in July 2025 and, despite suggesting further analysis and clarifications, the EDPB issued an opinion that "welcomes the continuing alignment" between the U.K. and EU data protection frameworks. As such, the DUAA appears likely to achieve its goal of introducing moderate data protection reforms that preserve U.K. adequacy, enabling the free flow of data from the EU to the U.K. throughout 2026 and beyond.
Cybersecurity developments
The U.K. government proposed a new Cyber Security and Resilience (Network and Information Systems) Bill in November 2025. The CSR Bill represents the U.K.'s answer to the EU's NIS2 Directive and, if adopted, would amend the U.K.'s existing Network and Information Systems Regulations 2018. The CSR Bill seeks to raise U.K. cybersecurity standards following a number of high-profile incidents, such as those suffered by Jaguar Land Rover, Heathrow Airport and Marks & Spencer.
The CSR Bill aims to expand the scope of the NIS Regulations to include data centers, managed service providers and supply chain providers designated as "critical suppliers." Similar to NIS2, the CSR Bill would require initial incident reporting within 24 hours and a full report within 72 hours; the bill would also expand the scope of reportable incidents to include those "capable of" having an adverse effect on network and information systems. Penalties for violations would run up to 4% of worldwide turnover.
UK AI legislation — at last?
Although previewed in the King's Speech in 2024, which set out the Labour party's legislative agenda, the U.K. has yet to see draft legislation "to place requirements on those working to develop the most powerful artificial intelligence models."
The U.K. has taken a hands-off approach to regulating AI thus far, preferring to rely on existing laws and regulatory bodies to govern AI risk. However, there is growing speculation that the government may introduce a "Frontier AI Bill" sometime in 2026. Unlike the EU AI Act, which regulates AI systems based on different tiers of risk, the bill is anticipated to only focus on so-called frontier AI models — the most advanced and powerful AI models — developed by large tech companies.
If and when introduced, the bill is expected to principally focus on AI safety, potentially placing existing voluntary commitments by major AI companies on a statutory footing and requiring prerelease model testing by the AI Security Institute or similar body. However, its precise scope and requirements will be known only once draft legislation is published.
Federal Law
Contributor: Joe Duball
Despite their party's full control of the 119th U.S. Congress, House and Senate Republicans were unable to find enough common ground to break through on privacy or AI legislation in 2025, aside from the TAKE IT DOWN Act.
There are no clear signs the situation will change in the new year, especially given the persistent limits on bipartisan cooperation. However, avenues to make waves exist nonetheless. Carry-over initiatives from 2025 and the midterm elections could both provide momentum to deliver results.
The House Committee on Energy and Commerce will be the linchpin to any action on federal privacy law. Energy and Commerce Chairman Brett Guthrie, R-Ky., used 2025 to reset the dialogue through committee fact-finding and intentional stakeholder consultation.
The committee is reportedly preparing a discussion draft on a comprehensive privacy bill for an early 2026 release. That bill is expected to carry a combination of lighter-touch provisions found in comprehensive state laws, including frameworks in Kentucky and Texas. Any Republican-led proposal will likely aim to keep industry compliance burdens to a minimum, and language for strong federal preemption will be the main vehicle to achieve that.
The House Committee on Energy and Commerce's ongoing work around children's online safety will also be worth monitoring throughout the year. Proposals for the Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, and the Kids Online Safety Act highlight a package of bills awaiting a full committee markup following subcommittee approval at the end of 2025. These bills face a tougher reconciliation path in the Senate, which passed its own bipartisan versions of COPPA 2.0 and KOSA in the prior congressional term.
With AI, Republican members of Congress face a fresh mandate from President Donald Trump to draft innovation-first legislation that still addresses consumer safety.
A core component of the White House's recent executive order to limit the impacts of state AI laws is to build a federal standard for AI development and use. That federal framework would supersede state laws and establish legal certainty, allowing the administration to back off the order's interim measures, including potential litigation over state-level laws.
The executive order may steer some state legislatures to shelve AI bills considered over prior years while other states may be emboldened to challenge the Trump administration's threats. For those states continuing to legislate, expect more targeted laws on specific AI use cases or model safety in lieu of comprehensive, cross-sector proposals.
Comprehensive state privacy legislation in Indiana, Kentucky and Rhode Island took effect 1 Jan. New comprehensive laws took a backseat to amendments to existing laws last year — a trend that could repeat itself more frequently moving forward. However, state lawmakers in Maine, Massachusetts, New Mexico, Pennsylvania and Vermont remain steadfast in their desire to pass comprehensive bills.
Federal Trade Commission
Contributor: Cobun Zweifel‑Keegan
The U.S. Federal Trade Commission is entering 2026 with a more defined posture after a year of unprecedented transition.
Under Chair Andrew Ferguson, the agency has for the first time embraced the unitary executive theory rather than asserting its independence, aligning with President Donald Trump and supporting his contested power to remove FTC commissioners without cause, as he purported to do early in 2025. In a pending case brought by one of the fired commissioners, Trump v. Slaughter, the Supreme Court will soon resolve this debate one way or the other. In the meantime, the agency is operating with only two Republican commissioners. Another is expected to be nominated soon.
Privacy and data security are not top-of-mind enforcement issues for the Trump-Vance FTC. That said, there will likely be more privacy enforcement in 2026 than in 2025. Public statements from the agency's leadership make clear that it will continue to favor targeted enforcement of black-letter legal restrictions, such as those within the Children's Online Privacy Protection Act, rather than rulemaking or even general consumer protection enforcement under Section 5 of the FTC Act. Make no mistake, the FTC will continue to make use of its authority under the principles of deception and unfairness, but Section 5's influence as a standalone enforcement tool is expected to wane in 2026, with it more often used in conjunction with other laws.
In keeping with the FTC's renewed focus on children's privacy, the most significant regulatory development of 2025 was the updated COPPA Rule — its first major overhaul since 2013. The revisions expand the definition of personal information, clarify mixed‑audience obligations and require overhauls of data retention and security programs. Compliance deadlines hit in April 2026, making this a top priority for child‑directed and mixed‑audience services alike. COPPA enforcement themes from 2025 included third-party diligence — especially when it comes to integrations like software development kits — and a clear emphasis on age‑verification technologies as evidenced by an upcoming workshop.
The FTC's dance card also includes a couple of brand-new laws that will soon become enforceable. The TAKE IT DOWN Act mandates new notice‑and‑removal processes for non‑consensual intimate images by May 2026, although the non-FTC criminal aspects of the law are already enforceable. Early cases will no doubt target major platforms. In addition, the Protecting Americans' Data from Foreign Adversaries Act now treats certain transfers of sensitive personal data by data brokers to adversarial countries as a violation of the FTC Act. The FTC is continuing to roll out enforcement of this law, likely in coordination with the Department of Justice, which is tasked with enforcing a related but distinct regulation known as the Data Security Program.
Enforcement related to AI technologies at the FTC is also going through major changes, driven in large part by the Trump administration's pivot toward an innovation-first strategy for AI. The FTC has already begun taking the unusual step of revisiting prior settlements that might "unduly burden AI innovation," as instructed by Executive Order 14179 and America's AI Action Plan. More could follow in 2026.
Nevertheless, the agency continues to scrutinize deceptive marketing claims in the AI space, especially when related to the accuracy of outputs. In addition, the agency issued 6(b) orders to AI chatbot "companion" providers late in 2025, showing how scrutiny continues to grow for safety testing, monetization and protections for minors.
In short, expect FTC enforcement to heat back up in 2026 while re-focusing on tried-and-true legal theories. Overall, the U.S. landscape will be quite active as state enforcers will also be doubling down on privacy scrutiny.
Health care
Contributor: Kirk Nahra, CIPP/US
Health care privacy continues to be a growing mess.
There are a variety of important developments to watch at the federal level. The Trump administration has been pushing back on certain changes from recent years with an eye toward altering the overall approach to the Health Insurance Portability and Accountability Act and its enforcement. Attention will focus on whether this includes the proposed changes to the HIPAA Security Rule that could lead to a much more prescriptive approach to health care security.
The administration also has some holdover HIPAA proposals from its first go-round relating to disclosures to social service organizations and disclosures related to opioid patients that it could choose to revisit.
Sen. Bill Cassidy, R-LA, recently proposed legislation to address the "non-HIPAA" aspects of health care privacy. While significant movement on this legislation is unlikely, it could prove to be a useful element if the debate over a national privacy law begins again in earnest.
Much of the relevant health care activity has actually been at the state level. New laws continue to emerge in various states that will impact how health care data — very broadly defined — will be used and disclosed. Some of these laws are modeled on Washington state's My Health My Data Act. A new "comprehensive" law in Maryland is creating broad compliance challenges. A law awaiting signature by the governor in New York will also create challenges.
More of these laws are expected in 2026 — targeting specific privacy issues but potentially raising other kinds of health care concerns at the same time.
Contributor: Martin Pesce
In 2026, Uruguay is expected to maintain a solid standing in data protection and cybersecurity, supported by a framework aligned with international standards. The country has a modern personal data protection regime in line with the EU GDPR, and in 2021 it became the first state in Latin America to approve Convention 108+.
In 2025, Uruguay adopted relevant cybersecurity regulation through Decree No. 66/025, which reorganizes national cybersecurity governance. The decree grants Uruguay's Agency for Electronic Government and Information and Knowledge Society powers to set policies, supervise compliance, and coordinate incident response, including coordination with the Personal Data Regulatory and Control Unit and the adoption of AGESIC's Cybersecurity Framework. The decree also reinforces the role of the National Computer Emergency Response Team as the national interlocutor. Notable obligations include the national cybersecurity incident registry and the 24-hour notification period for entities linked to critical services or sectors.
In 2026, Parliament could begin considering an AI law based on recommendations submitted by AGESIC — the government entity responsible for AI policy. There is currently no specific law, but horizontal rules — such as those governing data protection, consumer protection, intellectual property and civil liability — already apply to the design and deployment of AI systems. Any new law should be carefully crafted to avoid barriers to innovation or diminishing the country's competitiveness.
AGESIC's recommendations prioritize clear governance and a risk-based approach, with references to transparency, impact assessments for high-risk systems and human oversight, while emphasizing caution to avoid harming innovation and competitiveness.
Looking ahead to 2026, Uruguay is expected to consolidate its alignment on data protection and the operation of its recently updated cybersecurity framework. Any legislative progress in AI governance should balance the protection of rights with competitiveness and innovation, avoiding regulations that create disproportionate costs or reduce the country's attractiveness for technology investments.
Contributor: Eduardo Monteverde, CIPP/E, CIPP/US, CIPM
Venezuela maintains a scattered legal framework regarding data privacy, cybersecurity, digital responsibility and AI governance. These are potential areas for change in Venezuelan regulation, considering the country lacks a comprehensive legal framework to support the protection of individual rights.
AI governance legislation
During the first quarter of 2025, it was confirmed that the National Assembly was continuing the legislative process of the AI Bill, which had been approved in its first debate in November 2024. This bill aims to establish a national AI agency responsible for promoting ethical and transparent technological development.
However, there have been no advances or updates regarding the bill since the first quarter of 2025. It is not certain if priorities have changed or if there are other reasons for the absence of updates. Nevertheless, the legislation has already advanced significantly and could be approved by the end of 2026.
Comprehensive data privacy regulation
In its Decision 1318 from August 2011, the Supreme Court of Justice of Venezuela clarified and expanded the scope and applicability of the habeas data and privacy rights established by Articles 28 and 60 of the Constitution of Venezuela. However, there does not appear to be any momentum to introduce comprehensive data privacy legislation.
The latest regulations seem to only respond to specific needs. For example, in an October 2025 decision, the Social Cassation Chamber of the Supreme Court of Justice explained that under the Venezuelan Labor Act and related regulations, the same protections — like equal pay and benefits — should extend to both on-site work and online or remote work, aligning with the standards of the International Labor Organization. This matter was first covered during the COVID-19 pandemic; however, courts are now starting to recognize the new reality of a more expansive digital work economy.
Based on this, no comprehensive data privacy laws are expected to be approved in 2026.
Contributor: Thammapas Chanpanich, Nop Chitra, CIPP/A, and Waewpen Piemwichai
In 2026, privacy, AI governance and digital responsibility are expected to remain key priorities for regulators.
By the end of 2025, Vietnam had seen several major legislative developments. These include the 2025 Cybersecurity Law, which replaces both the current Cybersecurity Law and the Law on Network Information Security; the country's first comprehensive Law on AI; and a prospective new decree guiding the implementation of the Personal Data Protection Law.
As of late December, the final adopted version of the 2025 Cybersecurity Law had not yet been published. However, based on the publicly available draft, which was submitted to the legislature for final consideration, the 2025 Cybersecurity Law continues to emphasize the importance of strengthening cybersecurity, protecting children's data and safeguarding the rights and legitimate interests of agencies, organizations and individuals in cyberspace to address the rise of cybercrime regionally and globally.
The law expands the scope and enforcement powers of regulators, particularly the Ministry of Public Security, to strengthen cybersecurity and combat cybercrime. For example, enterprises providing services in cyberspace will be required to supply IP addresses of organizations and individuals using internet services to regulators for monitoring and cybersecurity assurance. The data localization requirements remain in place; further details are expected to be provided through lower-level legislation such as a decree.
Vietnam's first AI law is expected to promote AI development while ensuring developer and provider accountability and user safety. Similar to the 2025 Cybersecurity Law, the final adopted version of the AI Law has not yet been published. Based on the draft submitted to the legislature for final consideration, AI systems are classified by risk level: high risk, medium risk and low risk. The law also introduces provisions on ethics and responsibility in AI activities, including requirements for transparency, labeling and accountability.
Notably, the AI Law mentions extraterritorial applicability and requires foreign providers of AI services or products classified as high-risk AI systems to appoint a local point of contact in Vietnam. In addition, if such systems are subject to mandatory conformity certification prior to use, the provider must also establish a commercial presence or appoint an authorized representative in Vietnam.
The new PDPL takes effect 1 Jan. 2026; the government will issue a guiding decree to clarify several key provisions, including qualifications for data protection officers, DPO service requirements, filing obligations and consent formalities. Recognizing personal data as a valuable resource in the digital age, regulators are expected to remain active in this area.
Additionally, a new sanctioning decree is anticipated, paving the way for stronger enforcement. Monetary penalties for general violations may reach up to VND3 billion (approximately USD115,000). The fine for trading personal data can be up to 10 times the revenue generated from the transaction or VND3 billion, whichever is higher. Violations related to cross-border data transfers may result in fines of up to 5% of the violator's revenue from the previous year or VND3 billion, whichever is greater.
Given the rapid pace of legal developments, enterprises should closely monitor legislative changes to ensure timely and effective compliance.
Previous Editions
- 2025 Global Legislative Predictions
- 2024 Global Legislative Predictions
- 2023 Global Legislative Predictions
- 2022 Global Legislative Predictions
- 2020 Global Legislative Predictions
- 2019 Global Legislative Predictions
- 2018 Global Legislative Predictions
- 2017 Global Legislative Predictions
Note: The 2021 Global Legislative Predictions were not published due to impacts of the COVID-19 pandemic.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.
Contributors:
Jennifer Bryant
Associate Editor, IAPP
In parallel, the government advanced a digital competition proposal, Bill No. 4675/2025, which would create a Superintendency of Digital Markets within the Administrative Council for Economic Defense and establish criteria to identify platforms with systemic relevance, requiring local representation and allowing the authority to impose obligations such as transparency measures and interoperability. The bill aims to update competition rules in digital markets, focusing on economic structure rather than content moderation.
At the international level, Brazil and the European Commission advanced toward mutual adequacy. If the adequacy decision is finalized, data transfers between Brazil and the EU will no longer require additional safeguards, increasing legal certainty and facilitating cross-border AI development and deployment.
Contributor: Irena Koleva, CIPP/E
The evolution of digital legislation is poised to continue through 2026, bringing significant changes across multiple domains.
Following the May 2025 election of new members to Bulgaria's data protection authority, the Commission for Personal Data Protection, the direction of work remains to be defined for 2026. However, the finalization of the legal framework governing accreditation, particularly concerning the requirements for certification bodies and bodies monitoring codes of conduct, is expected to be an important area of focus.
Strategically, there is anticipation surrounding the CPDP's potential adoption of a comprehensive strategy, stretching into 2030, addressing personal data protection and whistleblower protection. Additionally, Bulgaria is on the brink of adopting the draft Digital Transformation Strategy for 2026 through 2030. Proposed by the Ministry of e-Government and pending adoption by the Council of Ministers, the strategy would signify alignment with the government's priorities for Bulgaria's strategic development at a national level.
Cybersecurity legislation is another area facing delays, particularly concerning the implementation of the NIS2 Directive and the Critical Entities Resilience Directive. A bill proposing amendments and supplements to the Cybersecurity Act passed its first reading in the National Assembly; a final reading is anticipated by early 2026. Following its adoption, the Council of Ministers will be responsible for amending and enacting secondary legislation, such as an ordinance on minimum requirements for network and information security and a methodology for identifying essential and important entities.
A new national cybersecurity strategy is also expected. Furthermore, a draft Law on the Resilience of Critical Entities, a law that transposes the CER Directive, has been released for public consultation and is expected to be adopted in 2026.
On the frontier of digital policy, a Bill on the Use and Development of AI was submitted to the National Assembly in October 2025. Its progress through the legislative process remains uncertain as it awaits parliamentary debate and votes.
Bulgaria is progressing in its efforts to transpose the EU Representative Actions Directive, which establishes requirements for collective actions across various fields, including data protection. Amendments to the Consumer Protection Act, crucial for implementing this directive, are in the legislative pipeline awaiting a final vote.
Approximately two years after the DSA took effect across the EU, Bulgaria has made strides with amendments to the Electronic Communications Act, promulgated 21 Nov. 2025. These amendments establish the administrative framework necessary for the DSA, introducing a multiauthority model. They also anticipate issuance of a cooperation instruction between the Communications Regulation Commission, the Council of Electronic Media and the CPDP to delineate responsibilities among the authorities involved.
Lastly, attention is also directed toward the anticipated progress in implementing the EU's Data Act and Data Governance Act, illustrating Bulgaria's ongoing commitment to aligning with EU regulations and advancing its digital governance infrastructure.
Contributor: Shaun Brown
Federal privacy reform is the biggest open question heading into 2026. With Bill C-27 having died on the Order Paper in early 2025, it is possible the federal government will introduce another bill in 2026 to replace the Personal Information Protection and Electronic Documents Act with new federal privacy legislation.
One of the biggest criticisms of Bill C-27 was its inclusion of the AI and Data Act. Tying two significant and largely unrelated legislative proposals together in one bill meant the passage of new privacy legislation also required federal lawmakers to agree on how to regulate AI. With this in mind, the government is expected to address privacy reform in a standalone bill.
Alberta recently overhauled its public sector privacy regime with the passage of the Protection of Privacy Act, which came into effect in 2025. The POPA includes several new provisions, including mandatory privacy management programs and automated decision-making transparency that, outside of Quebec, are unique to Canada's public sector. The POPA also has a broader scope, regulating "data derived from personal information" and certain "non-personal data."
Both New Brunswick and Prince Edward Island are currently reviewing their own public sector privacy laws, so it is possible some of these changes will be adopted as part of legislative reform in those jurisdictions in 2026.
With public sector reform complete, it seems likely Alberta will introduce amendments to its private sector legislation, the Personal Information Protection Act, in response to recommendations from the Standing Committee on Resource Stewardship published in early 2025. Among other things, the committee recommended the government introduce administrative monetary penalties, higher penalties for offenses and automated decision-making transparency requirements.
British Columbia may also reengage on private-sector privacy reform in 2026. Although nothing has been formally announced, the British Columbia Personal Information Protection Act is due for an update as it remains the only private sector privacy law in Canada without mandatory breach reporting.
Contributor: Jaime Urzua
Chile's privacy and AI governance will move from design to delivery in 2026.
The Senate will decide the oversight model and governance of Chile's bill regulating AI systems; the Agencia Nacional de Ciberseguridad will translate the cybersecurity framework into enforceable rules; and the new Agencia de Protección de Datos Personales will open its doors and kick off certification and supervision, setting a more complex and risk-based compliance landscape.
AI systems bill
In October 2025, the Chamber of Deputies approved Chile's AI bill, Bulletins 15.869-19 and 16.821-19, in its first constitutional stage and sent the bill to the Senate.
The second stage is expected to revisit core architecture issues. These include the bill's new transparency duty for synthetic audio, image, video and text, which imposes labeling obligations beyond "high-risk" uses, and how that duty will be operationalized, including through watermarking or disclosures in user interfaces. Senators are also expected to examine the carve-out for open-source components and how liability attaches if such components are commercialized or used in high-risk systems; the risk taxonomy, including the definition of "significant risk" and its alignment with global frameworks; incident-reporting channels and a specialized administrative procedure, which the lower house removed; and governance.
With the responsibilities centered in the Ministry of Science, Technology, Knowledge and Innovation, senators will likely test whether the APDP or another body should have supervisory powers and sanctioning procedure — including how economic benefit and recidivism aggravate fines.
Cybersecurity: ANCI's regulations, instructions and 2026 priorities
Throughout 2025, the ANCI focused on drafting and implementing the regulations and instructions needed for the entry into force of the Cybersecurity Framework Law. In 2026, the focus will shift to operationalizing the cybersecurity framework and defining priorities for essential services providers and operators of vital importance, reporting obligations and sector standards.
The ANCI is expected to focus on incident notification procedures and develop technical standards to address emerging security threats while continuing to designate operators of vital importance. These activities will shape compliance requirements for the public and private sectors, reinforcing Chile's alignment with global cybersecurity best practices and supporting its ambition to become a reference point in Latin American cybersecurity governance.
APDP set-up and first actions
With the new Personal Data Protection Law entering into force, the APDP should become operational and begin implementing its compliance infrastructure in 2026. Early focus will include certifying, registering and supervising voluntary data protection compliance programs, creating an auditable signal of accountability for adopters.
The PDPL is scheduled to take effect 1 Dec. 2026, so the year will be focused on capacity-building, procedures and tooling — for example, creating portals for certifications, addressing complaints and reporting breaches.
The APDP can be expected to issue practical guidance on designating data protection officers and the minimum contents of programs, including data inventories, internal reporting and whistleblowing channels, risk matrices and protocols. The APDP is also projected to coordinate with the ANCI on incident handling where security and privacy intersect.
Contributor: Barbara Li, CIPP/E
In 2026, China is expected to see major developments across privacy, AI and cybersecurity, accompanied by robust regulatory enforcement.
Amendments to China's Cybersecurity Law will take effect 1 Jan. 2026. While China already regulates specific AI issues, such as algorithms, deepfakes, generative AI, AI labeling and ethics, these amendments represent the first time AI governance falls within the scope of a major national law.
Beyond AI, the amendments also strengthen data protection obligations in critical information infrastructure sectors and impose enhanced cybersecurity requirements relating to supply chains and data breaches. Penalties for noncompliance have been significantly increased.
Since the second quarter of 2024, China has gradually relaxed compliance requirements for cross-border data transfers. The regulatory approval and filing processes have been simplified and accelerated, and certain transfer scenarios are now exempt from formal cross-border data transfer legal mechanisms.
In 2025, the Cyberspace Administration of China, the country's top data regulator, issued a series of question-and-answer guidance documents to address common challenges faced by businesses when conducting cross-border data transfer impact assessments and preparing regulatory approvals and filings.
At the local level, Free Trade Zones have been used as regulatory sandboxes to test innovative approaches aimed at promoting AI, advancing autonomous driving, improving logistics, developing pharmaceuticals and enhancing financial services. Local regulators in Beijing, Shanghai, Hainan, Tianjin and Hangzhou, among others, have released local rules, policies, whitelists and negative data catalogues to further ease cross-border data transfers. This trajectory is expected to continue in 2026.
Further collaboration is also anticipated among data regulators in Guangdong, Hong Kong and Macau to facilitate data transfers within the Greater Bay Area, which covers nine cities in Guangdong Province as well as the Hong Kong and Macau Special Administrative Regions. A model contract for data transfers within the Greater Bay Area has already been released, and five pilot projects were rolled out in 2025. More rules and policies are expected to further enable dataflows and explore building the Greater Bay Area into a "special data zone" in 2026.
Compliance obligations relating to "important data" remain a major challenge for businesses, largely due to the lack of clear and comprehensive rules. While regulators in certain sectors have begun to develop catalogues of important data, a regulatory vacuum persists for many industries. Industry authorities have signaled plans to issue further rules and guidance, making this an area businesses should closely monitor.
On the enforcement front, multiple Chinese regulators have overlapping authority and supervisory power over data protection, AI and cybersecurity; they have shown little hesitation in pursuing investigations and issuing penalties. In 2025, national and local authorities conducted multiple enforcement campaigns targeting mobile apps, e-commerce platforms, financial services, transportation, consumer-facing businesses, cross-border data transfers, data breaches and cyber incidents.
These areas, along with AI compliance, are expected to remain enforcement priorities in 2026. Regulators are likely to focus on the use of AI for misinformation and illegal purposes, intellectual property protection in training datasets for large language models, and compliance with mandatory algorithm and LLM review and filing requirements, where applicable.
According to China's AI Plus Action Plan issued by the State Council in August 2025, the country aims to deeply integrate AI across economic activity and daily life. China is expected to adopt an agile and pragmatic approach, balancing AI innovation with governance and risk management. In 2026, national, local and industry regulators are likely to issue further AI-related rules, standards and guidance, providing businesses with clearer benchmarks and best practices.
Contributor: Luis-Alberto Montezuma, CIPP/C, CIPP/E, CIPP/US, CIPM, FIP
In 2025, Congresswoman Maria Fernanda Carrascal re-introduced Bill 214/2025C to align Colombia's personal data protection system with the EU GDPR. Meanwhile, the Ministry of Commerce and the Ministry of Science, Technology and Innovation proposed Bill 274/2025 to amend the territorial scope of Law 1581 of 2012, Colombia's current personal data protection law.
The bills were approved 28 Oct. 2025 by the House Committee on Constitution and forwarded to the House floor. If passed, the bills, which have been consolidated into Bill 274/2025, will go to the Senate's Committee on Constitution and Justice and then to the Senate floor. Any differences between the House and Senate versions will be resolved by a conference committee, after which both chambers vote on the revised bill.
Before being signed by the president, the bill will be subject to prior review by the Constitutional Court and may be declared unconstitutional if found to conflict with provisions of the Constitution. If the bill is not passed by the end of the legislative session in June 2026, it dies.
Bill 274/2025 defines terms such as automated decision-making, neural and genetic data, large-scale data processing and biometric data. The bill establishes contracts, legal obligations and vital interests as grounds for processing personal data while excluding legitimate interests; it also updates rules for data transfers and children's data processing. Bill 274/2025 prohibits controllers from indirectly collecting personal data through computer programs or techniques, including AI, that infer or deduce information from other data. This would prohibit the practice of data scraping in Colombia.
Under the bill, the deputy superintendent for the protection of personal data will continue to be appointed by the Superintendent of Industry and Commerce. However, the position will now have a four-year term and be filled through a public process based on expertise. This aims to ensure independence, efficiency and objectivity, helping Colombia meet the European Commission's standards for adequate data protection.
In 2025, the SIC issued three circulars, including Circular 001/2025, which sets rules for personal data handling within the fintech ecosystem, requiring transparency, compliance with data transfer rules and data protection assessments. While Law 1581 of 2012 does not mandate disclosure about automated decisions, fintech firms must disclose what information is used, the reasons why and the effects on individuals. The circular limits the sharing of biometric data with third parties and prohibits data controllers and processors from contributing to centralized biometric databases, except when data processors access and process such data on behalf of the controller.
Circular 002/2025 sets standards for technology transfer agreements involving data processing.
The authority has also adopted the Ibero-American Data Protection Network's Model Contractual Clauses via a circular. Sharing personal data from a controller to a processor abroad is not considered an international transfer in Colombia. However, the circular covers transfers between controllers and to processors in countries lacking adequacy decisions. Use of the RIPD MCCs is optional.
Contributor: León Weinstok
Costa Rica will hold elections in February 2026, so the future composition of Congress remains uncertain. The proposed Personal Data Protection Law, Bill No. 23.097, is expected to be one of the main legislative items under discussion. However, it is unclear whether the new members of Congress will introduce modifications to the bill or draft an entirely new proposal.
If the bill is approved in its current form, organizations should prepare for new data subject rights, legal bases, reporting obligations and cross-border data transfer rules to be enforced by Costa Rica's data protection authority, the Agencia de Protección de Datos de los Habitantes, which is expected to increase its guidance and supervisory activity once the law enters into force.
AI regulation is likely to remain fragmented and highly technical in 2026. Competing bills and sectoral proposals — such as the proposed Law for the Regulation of AI and the Law for the Implementation of AI — are already under discussion, but none appear to have a strong chance of passage. Given the technical complexity of these initiatives and the arrival of a new congressional term, the approval of any AI-specific legislation in 2026 seems unlikely.
Contributor: Krunoslav Smolcic, AIGP, CIPP/E, CIPM, CIPT, FIP
2026 is set to be a defining year for the digital landscape in Croatia, marked by the full implementation of major EU legislative acts and significant national digital transformation initiatives. Driven by the dual pillars of AI governance and enhanced digital responsibility, the country anticipates a period of intense compliance focus across both public and private sectors, one that will inevitably reshape the local privacy framework.
Privacy and digital responsibility: A mandate for transparency
The mandatory adoption of new national e-governance standards will drive the most tangible shifts in digital responsibility. Starting 1 Jan. 2026, the introduction of Fiskalizacija 2.0, a comprehensive digital ledger system mandating real-time e-invoicing and digital archiving for all value-added tax payers, will require widespread use of digital certificates and electronic signatures.
This transition, though primarily tax driven, significantly elevates the baseline for data security, data integrity and business-to-business digital transparency across Croatia's economy. Companies will have to implement robust internal data handling procedures to manage this new level of digital security.
In parallel, the gradual rollout of the European Digital Identity Wallet based on the Electronic Identification, Authentication, and Trust Services 2.0 framework is expected. Croatia has already begun technical and organizational preparations by upgrading its National Identification and Authentication System infrastructure. Although the timeline for full availability depends on EU-level project schedules and national implementation decisions, the shift toward user-controlled identity attributes will place additional pressure on data minimization practices and transparency requirements across all digital services.
AI governance: The operational reality of the AI Act
In 2026, the overarching theme for AI governance will be the necessary preparation for and initial compliance with the EU AI Act, which enters full application 2 Aug. 2026.
Croatia is developing a multi-layer supervisory model for the AI Act. Draft national implementing legislation envisages a distribution of oversight responsibilities among several key players: the Personal Data Protection Agency, which will focus on the intersection of AI and personal data; general and specialized ombudsmen for children, gender equality and individuals with disabilities to ensure protection of vulnerable groups; the State Electoral Commission; and the Agency for Electronic Media.
This distributed enforcement model reflects a clear national intent to ensure comprehensive oversight across all high-risk sectors, including employment, health care, public services and elections.
Addressing the biggest regulatory challenges
Despite the clear regulatory framework, Croatian businesses face two significant hurdles in 2026 that will test the capabilities of both the private sector and regulators.
First, low institutional readiness is a significant challenge. Surveys indicate a vast majority of companies are not yet fully aware of the AI Act's specific classification and documentation requirements for high-risk AI systems. The primary challenge is translating the theoretical act into an operational governance framework, specifically developing internal AI use policies, conducting mandatory fundamental rights impact assessments, and securing technical competencies for oversight of high-risk AI systems. The knowledge gap could lead to early compliance failures.
A further challenge lies in the unresolved question of liability for harm caused by autonomous systems. Discussions are underway regarding potential adjustments to criminal and civil liability frameworks, particularly in sectors such as autonomous mobility, but concrete legislative solutions have not yet been finalized. The key issue for 2026 will be clarifying how responsibility is allocated between developers, importers, deployers and AI system operators.
For Croatia, 2026 will be defined by the simultaneous push for national digital efficiency and the intense, foundational work required to establish a secure, rights-respecting AI regulatory environment as mandated by the EU. The focus will be on transitioning from legislative adoption to the operational reality of compliance across the economy.
Contributor: František Nonnemann
In the Czech Republic, the end of 2025 was marked by major developments in the field of cybersecurity. On 1 Nov. 2025, the Cybersecurity Act entered into force, transposing the NIS2 Directive. In parallel, the Critical Infrastructure Act, which implements the Critical Entities Resilience Directive, is taking effect in stages. These new laws significantly expand the scope of obligated entities required to ensure the continuous availability of their services and systematically manage cybersecurity, information security and resilience.
All implementing regulations under these laws are expected to be adopted in 2026. Regulated subjects will therefore have to fully apply and update their internal frameworks, aligning cybersecurity and operational resilience systems with existing data protection processes. Ensuring coherence between personal data protection, cybersecurity and resilience obligations will become a key compliance priority.
In 2026, a new national law implementing the EU AI Act is also expected to be adopted. The Czech Republic is likely to follow a minimalist approach, introducing no additional obligations for developers, providers or deployers of AI beyond those required under the EU regulation.
The Act on the Digital Economy is another important legislative initiative in progress. This law will regulate the dissemination of commercial communications, the activities of information society service providers, and the recognition of organizations for data altruism. The act will also serve to implement provisions of the EU's Data Governance Act and Digital Services Act.
Among other changes, the proposal introduces new rules for sending electronic commercial communications. For instance, it will limit such communications to a two-year period following the last interaction with a customer. The draft also defines mechanisms for content removal and data disclosure requests addressed to digital service providers, specifies certain aspects of dispute resolution procedures, and regulates access to data held by very large online platforms and very large online search engines. Finally, the act will create a public register of recognized data altruism organizations.
Taken together, these legislative changes mark a continued integration of EU digital, cybersecurity and AI regulatory frameworks into law. 2026 is expected to be a year focused on implementation, compliance and alignment across the key domains of data protection, cybersecurity and digital governance.
Contributor: Niels Torm, AIGP, CIPP/E, CIPM, FIP
Throughout 2025, numerous AI applications emerged, raising significant questions regarding privacy, ethics and intellectual property rights.
In Denmark, particular emphasis has been placed on addressing the challenges and risks associated with AI adoption. These developments have heightened public awareness and led to the launch of several initiatives, which may result in increased regulation for certain sensitive sectors.
Legislators recognize that the rapid advancement of AI presents unique ethical, IP and privacy challenges to both lawmakers and societies. Lawmakers are looking for ways to balance AI's development against the public's rising concern. According to many experts and legislators, 2026 will be the year when Denmark must decide, as a country, how it wants to handle AI.
The growing amount of AI-generated content across social media platforms has amplified worries about diminishing trust in authorities and, ultimately, in societal cohesion. Addressing these concerns is crucial for politicians as declining confidence in official news sources can foster unrest and societal division. Leaders remain vigilant, observing similar trends in other countries where distrust in media has catalyzed societal fragmentation.
In response to growing public concern, companies that are undertaking or advancing their AI initiatives are expected to place greater emphasis on the ethical and responsible use of AI. This increased attention will inevitably drive a stronger focus on AI governance as a strategic means of managing these challenges.
For privacy professionals, concerns related to AI use are expected to drive privacy initiatives in both the private and public sectors. As a result, there will likely be heightened demand for professionals with expertise in both privacy and AI governance, making certifications in these areas increasingly valuable throughout 2026.
Contributor: Pedro Cordova
Ecuador is headed toward approving legislative advances in personal data protection and AI regulations in 2026.
Throughout 2025, the Superintendency of Personal Data Protection intensified its regulatory activities. Having approved its institutional regulatory plan, the authority issued several key instruments, including the Risk Management and Impact Assessment Guide, standard contractual clauses, the regulation on imposing sanctions for personal data protection infringements and the statute detailing data protection officer appointments and roles.
For 2026, the issuance of a new regulatory plan is expected to address matters still pending development through secondary regulations and resolutions, such as establishing technical security standards, issuing guidelines for sectors processing sensitive data, determining retention periods, establishing the National Personal Data Registry for international transfers, and designating countries with an adequate level of data protection. These measures will enable the SPDP to provide clearer operational guidance and facilitate organizational compliance.
In parallel, the normative development of AI governance and digital responsibility has progressed at a slower but deliberate pace. In 2024, three legislative proposals addressing AI regulation were submitted to the National Assembly. Among these, the bill for the Organic Law for the Regulation and Promotion of AI was formally admitted for consideration and is currently undergoing technical analysis through expert roundtables. The bill is expected to advance toward its final debate and potential approval in 2026, thereby establishing Ecuador's first comprehensive legal framework for the development and responsible use of AI.
The proposal seeks to define AI systems in a manner consistent with the EU's AI Act and to adopt a risk-based approach that balances innovation with the protection of fundamental rights.
2026 is projected to be a year of regulatory consolidation, characterized by increased activity from the SPDP and the potential enactment of a law that lays the foundation for the development of AI in Ecuador.
Contributor: Isabelle Roccia, CIPP/E
2026 will see a continued focus on competitiveness and sovereignty in the EU. The European Commission is expected to deliver tangible results on the Draghi report's recommendations for improving European competitiveness. The simplification efforts launched in 2025 are part of this approach, aimed at alleviating the compliance burden, particularly for small and medium-sized enterprises, and streamlining the EU digital rulebook, among other goals.
The impact of these proposals will depend on two factors: how Brussels calibrates competitiveness and simplification objectives while safeguarding its value-based system, and how it navigates global geopolitics regarding AI, digital trade, dataflows and platform regulation.
Simplification will not mean heavy deregulation. The Commission's objective will be to simplify the application of rules in practice and streamline their interplay rather than drastically lowering the volume of rules. Its proposals touch on data, cybersecurity, privacy and AI but will be fiercely negotiated for months. Some member states will also continue to push issues on the agenda to trigger European action — for example, children's online safety.
Despite a simplification agenda, Brussels will continue to initiate policy and laws impacting digital governance. Several laws will undergo a review — for instance, the Cybersecurity Act and Digital Markets Act — which could introduce sovereignty requirements and expand the initial scope of application.
New initiatives are also expected to trickle down to professionals' desks, including data sharing and access, cloud service provisions, consumer online protection and dark patterns with the Digital Fairness Act; data retention; cross-border enforcement of the EU GDPR; child sexual abuse material; and product liability.
The Commission will continue to use its enforcement competencies under the DMA, DSA, and AI Act. In parallel, the patchwork of agencies involved at the national level will only be tightened, creating an urgency for clear and predictable cooperation mechanisms.
The implementation of existing laws will remain a Gordian knot for organizations and regulators alike. The landscape will remain complex. Some laws still require full implementation, including the AI Act and the NIS2 Directive, designation of authorities under the AI Act, and additional guidelines and standards for the DMA, DSA, and AI Act. Organizations, authorities and courts will increasingly grapple with questions that require them to be more knowledgeable, skilled and thoughtful than ever.
Contributor: Eija Warma-Lehtinen, CIPP/E
Finland's data protection authority, the Office of the Data Protection Ombudsman, has been granted additional funding to significantly increase its resources. This is partly due to criticism regarding the DPA's lengthy processing times, caused by the increased number of notifications and complaints following the EU GDPR's entry into force.
In September 2025, the DPA issued two decisions imposing administrative fines on financial sector operators for various data security deficiencies identified in their operations. In the coming year, the level of adequate data security will be thoroughly evaluated and examined in light of these two cases. Both cases have been appealed to the Helsinki Administrative Court.
For the first time, the DPA published an inspection plan for the next three years. Inspections are particularly targeted at personal data processing that individuals cannot directly monitor themselves. Organizations that process this kind of data are primarily found in the public administration sector.
The Office of the Data Protection Ombudsman is handling its first so-called "consent or pay" case, in which a media company has started applying this practice on its websites. It will be interesting to see how the case progresses, and when and what kind of decision will be issued.
As regulation has increased, the DPA has received several new types of supervisory tasks. In the coming years, it will have to further adapt staffing and learn the effects of new regulation on personal data processing activities.
Data Protection Ombudsman Anu Talus was elected for a new five-year term. In addition to this role, Talus is the Chair of the EDPB.
Contributor: Cécile Martin
In 2026, France is expected to accelerate the regulation of AI use within companies.
Indeed, as employees increasingly use AI, courts are issuing a growing number of decisions on the conditions for implementing this new technology in the workplace.
In this regard, as announced with its 2025-28 strategic plan, France's data protection authority, the Commission Nationale de l'Informatique et des Libertés, intends to continue its work to provide legal certainty for stakeholders and promote AI that respects individuals' rights.
To this end, in addition to the education and health care sectors, the CNIL intends to continue its studies and establish recommendations on the use of this new technology in the workplace.
While the EU AI Act has already established safeguards in this area, the CNIL is expected to provide additional guidance on the use of AI in the workplace soon.
Contributor: Ulrich Baumgartner
The wheels continue to turn in Germany with notable developments in data protection law expected to carry on in 2026.
First and foremost, Germany will likely see a change in the structure of data protection law oversight.
Currently, the country's supervisory system is structured federally; each of the 16 German states has its own data protection authority. There is also a Federal Commissioner for Data Protection and Freedom of Information, the Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, which is competent for supervising public authorities of the federal government as well as providers of postal or telecommunications services.
Following a decades-long debate over the centralization of the system of data protection supervision in Germany, the current government seems poised to follow words with action. In its coalition agreement, the federal government — which took office in March 2025 — set the goal of centralizing data protection supervision at the BfDI. It is still unclear whether and to what extent the state supervisory authorities will become obsolete or what competences they may retain.
While there is no draft law amending the German national data protection laws in that regard yet, 2026 will likely bring some material changes.
As a harbinger of broader centralization efforts, the government advanced a draft law to implement the EU Data Act into national law. The draft law centralizes supervision over any processing of personal data in the context of the Data Act at the BfDI, thereby depriving the state supervisory authorities of the competence provided for in Article 37(3) of the Data Act.
That said and although neither the national implementing law for the Data Act nor the corresponding national law for the EU AI Act have been formally adopted, it is already clear that the Federal Network Agency, the Bundesnetzagentur, will be the primary supervisory authority for both regulations. Back in 2024, the agency was appointed as the primary supervisory authority for the EU DSA.
This will propel the BNetzA — so far primarily responsible for telecommunication, postal, energy and rail services — to become the new "super authority" for supervision over both the EU Data Act and AI Act, as well as the DSA.
On the cybersecurity side, Germany will finally adopt a national NIS2 implementation law. Among other things, the law will introduce a three-stage reporting system for security incidents.
Regarding substantive law, the government's ambition to adopt a new dedicated Employee Data Protection Act seems to have lost momentum. While the prior government adopted a draft law on employee data protection, the new government has so far shown little desire to move the draft forward.
Contributor: Alexandra Athanasious, CIPP/E
In 2026, Greece is expected to implement a number of legislative acts, including its supplementary law to the EU Data Governance Act. Law 5188/2025 will establish the framework and set out conditions for the reuse of legally protected data held by public sector bodies — particularly personal data or data protected by intellectual property rights — by regulating the applicable procedure through a single information point.
Further, the expected supplementary law will establish data intermediation services and data altruism organizations to support and facilitate the provision of data by the data subject or holder.
The supplementary law for the EU Data Act will include the designation of the competent supervisory authority, outline its procedures and operational framework, and set the administrative fines that may be imposed in cases of violations.
Other legislative developments will concern the field of AI. These specifically include the establishment of the competent supervisory authority for the EU AI Act and the operationalization of the newly established Special Secretariat for AI and Data Governance. The special secretariat will be responsible for opening public-sector data, which, if nonprotected, will be provided automatically. If protected, the public-sector data will be provided through application programming interfaces upon demand.
In addition, the secretariat will be responsible for designing and implementing the national strategy for AI and data, coordinating involved stakeholders, serving as a hub of expertise and capacity-building in AI and data, developing AI projects and providing expertise to bodies wishing to develop AI projects where needed.
At the same time, the National Cybersecurity Authority was established under Law 5086/2024 and is expected to gradually unfold its work and respond to requirements by making decisions related to risk management and ensuring the smooth continuity of the market.
In terms of data protection supervision and enforcement, the head of the Hellenic Data Protection Authority stepped down in the summer of 2025 and has been temporarily replaced by the authority's deputy president. For this reason, the authority has not defined its strategy or action plan for 2026; it is unlikely to undertake major initiatives for market regulation or supervision.
Overall, significant developments are expected in legislation, especially in terms of AI and data regulation.
Contributor: Daniela Meza and León Weinstok
Guatemala continues to lack a comprehensive data protection framework, relying instead on constitutional guarantees, freedom of information rules, and scattered sector-specific provisions. These instruments provide only limited safeguards for individuals and do not define clear obligations for data controllers or mechanisms for cross-border dataflows.
Historically, data protection efforts in Guatemala have mainly focused on personal data held and processed by the public sector. The Guatemala Constitution and the Law on Access to Public Information set out basic principles for how public institutions collect and disclose personal information. However, these provisions were not designed to address the complexities of today's digital environment where both public and private actors process increasing volumes of personal data. As a result, the private sector operates in a regulatory vacuum, creating legal uncertainty and fragmented compliance practices.
Looking ahead to 2026, data privacy and emerging technologies are expected to return to the legislative agenda. Several new data protection bills have recently been introduced before Congress. These bills aim to recognize fundamental rights of data subjects — including the right to informational self-determination — and to establish rules on consent, impose administrative fines for misuse of personal data, define cross-border data transfer mechanisms and regulate automated decision-making that affects individuals.
In parallel, discussions on AI governance are gaining traction. Although Guatemala does not yet have an AI-specific law, recent policy dialogues emphasize the need to regulate AI systems' transparency, accountability and ethical use, particularly within the public sector and digital financial services. However, a regulatory framework in this regard is not expected to be established soon as the protection of personal data and cybersecurity-related issues are being prioritized in the legislative agenda.
Contributor: Ali Ordonez and León Weinstok
Honduras still does not have a specific law dedicated to regulating privacy matters. Legislation such as the Consumer Protection Law and the Law on Transparency and Access to Public Information provides only a limited regulatory framework.
In practice, companies tend to apply international data privacy standards as best they can in order to commercialize and operate in regional markets. These practices should be assessed against the Constitution of the Republic of Honduras, which also protects the right to honor, intimacy and secrecy of communications.
In 2013, a draft Personal Data Protection Law sought to guarantee privacy and the right to informational self-determination for individuals. However, this draft was never approved by the National Congress of Honduras. Rights derived from the processing of personal data are mainly protected by the Constitution of the Republic of Honduras.
To date, the National Congress has not officially announced any discussion of a new law.
In 2018, there was an attempt to pass a law dealing with cybersecurity and protective measures against acts of hate and discrimination on the internet. However, the proposal did not pass the draft stage in the legislative chamber and remains on hold.
The use of AI is not regulated in Honduras, and no possible regulation has been considered yet.
Contributor: Wilfred Ng
When Hong Kong's first cybersecurity legislation, the Protection of Critical Infrastructures (Computer Systems) Ordinance, took effect 1 Jan. 2026, it imposed organizational, preventive and incident reporting and response obligations on designated critical infrastructure operators across different essential service sectors and infrastructures.
The newly established Office of the Commissioner of Critical Infrastructure is the primary regulatory and enforcement authority. Working alongside sectoral regulators — such as banking and telecommunications regulatory authorities — the commissioner has the power to designate critical infrastructure operators, monitor compliance with the ordinance, issue codes of practice, and serve as the primary authority to which security incidents should be reported.
In the concluding months of 2025, the government finalized the much-anticipated codes of practice for the relevant essential services sectors that will serve as practical guidance for compliance.
Regarding data protection, no timeframe has been reported for amending the Personal Data (Privacy) Ordinance to bring it closer in line with international standards. Previously, the Office of the Privacy Commissioner for Personal Data proposed amendments such as establishing a mandatory data breach notification mechanism, granting the PCPD authority to impose administrative fines and introducing direct regulation on data processors.
To the extent the PDPO introduces a mandatory breach notification regime in the future, critical infrastructure operators regulated under the new cybersecurity law should take note of the overlapping reporting requirements that may be triggered by data breaches and security incidents under the two regulatory frameworks.
The PCPD has had an eventful year implementing a series of AI-related initiatives since the publication of the "Artificial Intelligence: Model Personal Data Protection Framework" in June 2024. This publication was followed by a compliance check, conducted between February and May 2025, concerning 60 local organizations' implementation of the framework.
In March 2025, the PCPD published the "Checklist on Guidelines for the Use of Generative AI by Employees" which aims to assist organizations in developing internal generative AI use policies in compliance with the PDPO. The PCPD joined 20 other data protection authorities in signing the "Joint Statement on Building Trustworthy Data Governance Frameworks to Encourage Development of Innovative and Privacy-protecting AI" in September 2025 at the 47th Global Privacy Assembly.
With the continuing prevalence of generative AI technologies, the PCPD is expected to provide more sectoral or use-case-specific guidance for carrying out AI-assisted data processing activities, particularly in areas concerning data analytics and profiling.
Contributor: Tamas Bereczki, CIPP/E, and Ádám Liber, CIPP/E, CIPM, FIP
Ongoing EU AI Act implementation
In late 2025, Hungary's Parliament and government adopted the implementing legislation of the EU AI Act, officially known as Act LXXV, designating the competent authorities pursuant to Article 70 of the AI Act.
With the adoption of the legislation, the National Accreditation Authority will act as the notifying authority; the Ministry for National Economy will serve as the sole market surveillance authority and single point of contact in Hungary. The ministry will also implement the national AI regulatory sandbox, with an expected deadline of August 2026.
The AI Act implementation legislation will also establish detailed procedural rules for supervisory authorities and will establish the Hungarian AI Council, which will be tasked with issuing guidelines and opinions regarding the implementation of the EU AI Act.
NIS2 cybersecurity audits
Due to regulatory delays, the first NIS2-related cybersecurity audit deadline was extended to 30 June 2026 in Hungary. However, with over 3,000 entities affected and few accredited auditors available, audit capacity may be insufficient, raising concerns that many organizations could miss the deadline and face potential action from Hungary's cybersecurity supervisory authority, the Supervisory Authority for Regulated Activities.
Delayed Data Act implementation
Although the EU Data Act became applicable 12 Sept. 2025, the legislature has yet to announce the designation of the competent authority and data coordinator in Hungary, pursuant to Article 37 of the act. With no draft proposals currently available, Parliament is expected to adopt the implementing legislation sometime in 2026.
New rules for online financial services contracts
New legislation aiming to strengthen consumer protection in the case of financial contracts concluded at a distance is expected to become applicable in June 2026. The new act is the implementing legislation of Directive (EU) 2023/2673 of the European Parliament and Council; it repeals the current Act XXV of 2005, which governs distance selling contracts for services in the financial sector.
The new legislation aims to strengthen consumer rights in the digital space, update the legislative framework, remove unnecessary burdens on service providers and consumers, and ensure a level playing field for competition among financial services providers.
Contributor: Pranav Rai
India's tech legislation is set for major shifts in 2026. After years of debate, the country's first comprehensive privacy law — the Digital Personal Data Protection Act — is moving from policy design to active enforcement.
The detailed Digital Personal Data Protection Rules, released in November 2025, lay out a phased rollout — over the next 12–18 months — of strict compliance requirements. Regulators are empowered with hefty penalties — up to INR2.5 billion (approximately USD27 million) per breach — to ensure firms toe the line.
In 2026, businesses will focus on compliance, appointing data protection officers, auditing data practices and deploying certified consent manager services. The newly created Data Protection Board of India is taking shape to investigate complaints and penalize violations, signaling that India's era of lax oversight is ending and digital privacy rights, affirmed by the Supreme Court in 2017, are now backed by real consequences.
AI governance will follow a cautious, "light-touch" path. India has no immediate plan for an AI-specific law, leaning instead on guidelines and existing laws. In November 2025, the government released India AI Governance Guidelines — a comprehensive but non-binding framework promoting ethical, transparent AI use while explicitly aiming not to throttle innovation. The guidelines lay out principles like fairness, accountability and "do no harm" but do not impose new legal mandates.
Officials argue that many AI-related risks can be handled under current statutes. For example, a malicious deepfake is essentially misrepresentation, an infraction already punishable under the Information Technology Act's fraud provisions. To tackle emerging concerns, the government is tweaking policies rather than writing new legislation. Notably, in late 2025 the IT Ministry proposed mandatory labeling of AI-generated content on social media — requiring platforms to clearly flag deepfake images, videos and audio so users know when content is synthetic.
This targeted draft rule to curb misinformation exemplifies India's pragmatic approach. Unless a major AI incident forces a rethink, 2026 will see regulators continuing to favor soft governance and industry self-regulation for AI rather than sweeping new laws — a contrast to the more heavy-handed AI regulations in the EU.
Cybersecurity and tech regulation remain a work in progress. The much-anticipated Digital India Act, a proposed overhaul of the country's 23-year-old IT Act, is still in draft form and wasn't passed in 2025. Officials have indicated there is no urgency to rush this omnibus law, noting that existing frameworks address many issues for now.
Instead, the government is pursuing piecemeal measures to strengthen cyber governance in the interim. In practice, this means leaning on and refining current laws: tightening intermediary rules, issuing sector-specific security directives, and using the new DPDPA and updated penal code to punish cyber offenses. The recently proposed deepfake labeling rules are a prime example, leveraging an amendment to the IT Rules, 2021 rather than waiting for a big, new act.
Likewise, other cyber initiatives are likely to advance within the present legal setup. A long-delayed National Cybersecurity Strategy is also expected to finally be unveiled in 2026, setting a policy blueprint for critical infrastructure protection and coordinated responses to threats.
India's legal trajectory in 2026 balances bold implementation with cautious policymaking. Data protection will dominate the agenda as the DPDPA's stringent norms kick in, forcing a cultural shift in how organizations handle personal data. AI governance will be guided by principles and light oversight, signaling an intent to foster innovation under watchful eyes rather than impose premature restrictions. The absence of an immediate, all-in-one cybersecurity law will be offset by incremental reforms and proactive policies to address pressing risks.
It will be a year of consolidation and measured progress — one in which India begins putting its digital laws into action, even as it fine-tunes the rules for the next chapter of its tech-driven future.
Contributor: Glenn Wijaya
Indonesia's personal data protection framework has been overseen by the Directorate General of Digital Space Supervision, an agency established under the Ministry of Communication and Digital Affairs. Pursuant to Presidential Regulation No. 174 of 2024, the DG-DSS formulated and implemented policies on digital space supervision and personal data protection. The DG-DSS has served as the interim authority enforcing the Personal Data Protection Law until the independent Personal Data Protection Agency is established.
A key legal milestone was reached in 2025 through Constitutional Court Decision No. 151/PUU-XXII/2024, which reaffirmed personal data as a constitutional right requiring maximum state protection. The court reviewed Article 53(1) of the PDPL concerning the obligation to appoint a personal data protection officer and ruled that the conjunction "and" must be read as "and/or." This interpretation ensures that the listed criteria — public service processing, large-scale monitoring, or large-scale processing of specific data — operate as independent and alternative grounds, strengthening compliance obligations and legal certainty.
The court emphasized that data protection is integral to the constitutional right to personal security and privacy under Article 28G(1) of the 1945 Constitution. The court's clarification aligns statutory language with constitutional guarantees and reinforces Indonesia's evolving personal data protection framework.
As of November 2025, the government entered the final stage of preparing the implementing regulation of the PDPL, the Government Regulation. According to Executive Secretary of the DG-DSS Mediodecci Lustarini, the draft completed interministerial harmonization and was submitted for presidential approval by Komdigi through the Minister of State Secretariat. Enactment is expected soon, marking a critical step toward full enforcement.
Once established, the independent PDPA will assume supervisory functions of the DG-DSS, focusing on compliance, enforcement and the protection of digital rights. These developments indicate an accelerated transition from legislative groundwork to institutional implementation.
Further digital governance reform continues through the inclusion of the Bill on Amendments to the PDPL and the Cybersecurity and Resilience Bill in the 2026 Priority National Legislation Program, signaling continued modernization of Indonesia's data protection regime.
On the AI front, Komdigi finalized the national AI roadmap, which will soon be enacted through a presidential regulation. The roadmap provides guidance for safe, ethical and sustainable AI development supported by a forthcoming regulation on AI security and safety. Vice Minister of Communication and Informatics Nezar Patria stated the policy aims to balance innovation with risk mitigation. Developed through 21 public consultations involving over 400 stakeholders, the roadmap emphasizes accountability, transparency and respect for creators' rights across sectors such as health care, education, finance and transportation.
Contributor: Dan Or-Hof, CIPP/E, CIPP/US, CIPM, FIP
On 14 Aug. 2025, Israel entered a new era in data protection. The coming into force of Amendment 13 to the Protection of Privacy Law marked a decisive shift, modernizing the country's privacy framework, partially aligning it with global standards while continuing its strong focus on data security. The reform not only updated statutory obligations but also granted the Protection of Privacy Authority sweeping enforcement powers, setting the stage for a year of assertive regulatory activity.
2026 will be the year when the PPA's enhanced authority is felt across Israel's landscape. The regulator now holds the ability to impose substantial financial penalties, issue administrative orders, and initiate criminal investigations for breaches of the law. Organizations can expect rigorous supervision, especially in areas involving large-scale or sensitive data processing, and the use of advanced technologies, including AI.
A key feature of Amendment 13 is the mandatory appointment of data protection officers. The requirement, clarified in draft guidelines published just weeks before the amendment took effect, applies broadly to public bodies, data brokers and any entity whose core activities involve systematic monitoring or processing of sensitive data on a large scale.
The PPA's interpretation, as set out in the guidelines, expands the scope of organizations required to appoint a DPO, emphasizing independence, expertise in Israel's privacy laws, knowledge in information technology and cybersecurity and direct access to senior management. In 2026, a surge in DPO appointments is expected as organizations adapt their governance structures to meet these new expectations.
The PPA's guidelines also signal a stricter approach to compliance. Consent mechanisms must be granular, explicit and tailored to the context with layered notices and clear options for individuals to control their data. The regulator's broad interpretation of the law means organizations must reassess their privacy practices not only to avoid sanctions and class actions, but to align with evolving standards of transparency and accountability.
AI regulation is developing in parallel. While Israel has no intention of enacting a horizontal AI statute in the foreseeable future, the PPA has taken the lead by publishing draft guidance on personal data processing in AI systems. These guidelines clarify that privacy law applies throughout the AI lifecycle — from data collection and scraping to model training and deployment — requiring organizations to conduct risk assessments, ensure transparency and implement safeguards against bias and discrimination. Sectoral regulators, such as the Supervisor of Banks and other governmental entities like the Israel National Digital Agency, have also issued their own positions, contributing to a multi-layered regulatory environment.
Looking ahead, 2026 will be defined by active enforcement; more litigation, including class actions; organizational adaptation; and the gradual emergence of a coordinated approach to privacy, cybersecurity and AI governance.
Contributor: Poojan Bulani, CIPP/E, and Kate Colleary, CIPP/E, CIPM, FIP
Ireland's Data Protection Commission reached its full complement of three commissioners in 2025 with the appointment of Niamh Sweeney, who joined commissioners Des Hogan and Dale Sunderland. Sweeney's appointment coincided with the DPC's expectation of expanded responsibilities, including significant market surveillance authority for certain high-risk AI systems, particularly those used in law enforcement and biometric applications, beginning in 2026.
The DPC will assume a more prominent role in AI regulation in 2026. In September 2025, Ireland designated 15 National Competent Authorities under the EU AI Act, becoming one of the first six member states to reach this milestone. The DPC is expected to play a significant role as one of these designated authorities, particularly where AI systems intersect with data protection and fundamental rights.
In 2025, the DPC continued its pattern of significant enforcement actions. The most substantial fine was imposed in February 2025 on TikTok — totaling 530 million euros — for violations related to the lawfulness of transfers of European user data to China and transparency requirements. The decision addressed TikTok's failure to adequately assess applicable law in China and to verify that supplementary measures and standard contractual clauses were effective in ensuring data protection equivalent to EU standards.
The DPC's approach to AI reveals the most about its evolving strategy. In March 2025, LinkedIn informed Ireland's DPA of its intent to train proprietary generative AI models using the personal data of LinkedIn members in the EU and European Economic Area. Following detailed review and extensive engagement, the DPC identified multiple risks and issues, leading LinkedIn to adopt significant changes including improved transparency notices, reduced data processing scope and enhanced protections for users under the age of 18.
This interventionist approach signals a shift from reactive enforcement to collaborative pre-deployment engagement with AI developers. This is a model the DPC has used with Meta and other large language model providers. The question for 2026 is whether this model will become the norm. Organizations can expect continued early-stage scrutiny of AI systems.
Cybersecurity regulation will also advance significantly in 2026. While Ireland missed the October 2024 transposition deadline for the NIS2 Directive, implementation is expected in early 2026, with enforcement anticipated to follow later in the year.
As Ireland positions itself at the forefront of responsible AI governance in Europe, organizations that proactively engage with the DPC's pre-deployment framework may find themselves better positioned to not only meet regulatory requirements, but also lead in an era where privacy-respecting innovation becomes a competitive advantage. Against this, providers of LLMs may be wary of voluntarily attracting regulatory scrutiny.
The question isn't whether 2026 will bring heightened scrutiny but whether organizations will seize this moment to demonstrate that cutting-edge technology and robust data protection can advance hand in hand.
Contributor: Rocco Panetta, CIPP/E
For Italy, 2026 will likely unfold with enthusiasm and responsibility, but also uncertainty in the realm of the data economy and AI.
Enthusiasm
Italy is entering a new era of compliance. There are new interpretations of the EU GDPR to be implemented and tested, both in relation to well-established concepts — such as that of personal data — and concerning the application of the legal framework to AI systems.
Moreover, the AI Act's provisions will become largely applicable in 2026. Organizations operating in Italy will need to be adequately prepared, ensuring timely — and, where possible, proactive — compliance. Italy has also enacted its own national legislation on AI, Law No. 132/2025, which establishes a framework of key guiding principles and addresses several relevant sectoral issues through dedicated provisions. The law also identifies new national regulatory authorities for AI, the Agency for Digital Italy and the National Cybersecurity Agency, without prejudice to the powers of Italy's DPA, the Garante.
The law entered into force 10 Oct. 2025; organizations operating in Italy will be required to comply with its new obligations and regulatory requirements. In addition, the national AI law will be further refined and supplemented throughout 2026 via the adoption of implementing decrees and secondary legislation.
As a result, entities active in the economic ecosystem will face a multilayered and evolving regulatory environment in the fields of the data economy and AI — an undoubtedly exciting challenge.
Responsibility
Both private and public entities will need to confront these new challenges with a strategic and operational vision. The most effective approach will be one that recognizes the competitive potential of regulatory compliance, striking a balance between technological innovation and the protection of individuals' fundamental rights and freedoms.
In this context, the role of data economy professionals and data protection officers will be even more crucial.
Uncertainty
In 2026, it will become apparent whether — and to what extent — the anticipated regulatory simplification promoted at the EU level will produce effects at the national level. This is a development worth closely monitoring because it may have significant implications for legal certainty, market dynamics and national competitiveness.
Both the AI Act and the GDPR are at stake. Every step towards simplification is welcome, provided it does not lead to deregulation devoid of proportionality.
Contributor: Hiroyuki Tanaka
It is highly likely a bill amending Japan's data privacy law, the Act on the Protection of Personal Information, will be proposed in 2026 since one was not submitted in 2025.
On 9 Jan. 2026, the Personal Information Protection Commission published "Policy for System Reform Regarding the So-Called Triennial Review of the APPI," which includes the following points:
- Relaxing certain formal consent requirements. For example, under the current law, provision of personal data to third parties and the collection of sensitive personal information from data subjects generally requires the data subjects' consent — even when the purpose is to create statistical information. The proposed amendment seeks to relax these rules. When data is used solely for creating statistical information — including AI development that can reasonably be regarded as a form of statistical creation — and it is ensured the data will be exclusively used for the creation of such information, data subjects' consent would no longer be required for the provision of personal data to third parties or for collection of publicly available sensitive personal information.
- Relaxing, in some cases, the notification obligation regarding personal data breaches. The current law, which requires notification to the data subject when reporting to the PPC is necessary, would be revised so that notification to the data subject would not be required when there is little likelihood the protection of their rights and interests would be impaired, even without notification.
- Introducing regulations regarding children's personal information. For example, when the data subject is under 16, obtaining consent and providing required notices must, in principle, be directed to the minor's legal guardians. In addition, a request for the suspension of use of retained personal data may be made without the need to demonstrate any unlawful conduct concerning personal data processing of data subjects under 16. A provision will be introduced mandating that priority consideration be given to the best interests of children in the handling of their personal information.
- Revising obligations relating to the proper handling of personal data entrusted to contractors. Specifically, an explicit statutory obligation shall be imposed on contractors not to handle personal data beyond the scope necessary to perform the entrusted services. If a contractor does not determine the means of handling personal information, and the entrusting party and contractor agree on all aspects of the means of handling and on the measures necessary for the entrusting party to ascertain the status of handling by the contractor, the contractor shall, in principle, be exempt from this obligation.
- Prohibiting the improper collection and use of personally referrable information — such as a cookie identifier — that enables communication with a specific individual.
- Strengthening regulation concerning facial feature data and similar data.
- Mandating verification of the identity of the recipient and the purpose of use when personal data is provided to a third party under the opt-out scheme.
- Introducing an administrative fine system for certain APPI violations. The administrative fine amount shall be equivalent to the monetary or other financial benefit the business operator gained from the relevant act or would gain by discontinuing it. The applicable acts are limited to certain specified obligations, and the conditions for imposing administrative fines are also prescribed.
- Enhancing the effectiveness of recommendations and orders.
- Expanding the scope of criminal penalties and implementing harsher penalties.
A bill to amend the APPI based on this policy was expected to be promptly addressed in the ordinary Diet session beginning in January, but the Diet's schedule has become uncertain with the prime minister's plan to dissolve the House of Representatives and call a snap general election.
If an amended law were to be enacted in 2026, it is highly likely it would come into effect in 2027 or 2028.
Contributor: Mugambi Laibuta, CIPM
Kenya's 2026 data and technology governance landscape will be defined by transition, regulation and heightened electoral sensitivity. The aftereffects of the Worldcoin judgment, the implementation of the Virtual Asset Service Providers Act, 2025, and the end of the current data commissioner's term will converge just as Kenya gears up for the 2027 general elections.
The Worldcoin decision will continue shaping Kenya's position on high-risk biometric and crypto-linked data operations. The High Court at Nairobi's findings against large-scale data harvesting without proper data protection impact assessments, valid consent or transparent data flows will set the compliance tone for 2026. Any new digital identity or AI-based system introduced ahead of the elections will be measured against the Worldcoin precedent, especially regarding voter profiling, digital registration and surveillance.
Simultaneously, the Virtual Asset Service Providers Act will begin full enforcement. Crypto exchanges, blockchain firms and token platforms will need to demonstrate data protection, cybersecurity and anti-money laundering compliance before licensing. This will mark Kenya's emergence as a leading jurisdiction in Africa, integrating data privacy with financial technology regulation and aligning its framework with global virtual asset governance standards.
To close compliance gaps highlighted by the courts, the Office of the Data Protection Commissioner will likely issue new guidance notes on biometric data, data protection officers, closed-circuit television, AI and cross-border data transfers. The anticipated review of the Data Protection Act could introduce a restructured ODPC governance model. With a new data commissioner expected at year-end, continuity and institutional identity will be under scrutiny.
The Computer Misuse and Cybercrimes (Amendment) Act, enacted in 2025, gives the government and courts more power to block or remove websites and online content linked to illegal or extremist activity. While this may improve cybersecurity, it may affect privacy and data protection. Authorities could demand access to user data, communication records or platform logs to identify offenders or verify harmful content; this potential action raises concerns about surveillance and the misuse of personal information. Internet platforms and service providers will need to share or process user data more frequently to comply with court orders. Without strong data-handling safeguards, these measures could expose users to privacy violations and erode trust in Kenya's digital space.
As Kenya enters the 2027 election cycle, issues such as voter data use, political profiling, targeted advertising, misinformation tracking and the protection of journalists' and activists' data will dominate the privacy agenda. Thus, 2026 will be a pivotal year, balancing regulatory maturity, political transition and the protection of democratic digital rights.
Contributor: Vincent Wellens, Yoann E. A. Le Bihan, CIPP/E, CIPM, FIP
2026 will mark a turning point for Luxembourg's data regulatory landscape, as most of the EU AI Act's provisions become fully applicable 2 Aug. 2026 and data-related acts — such as the Data Act and the Data Governance Act — are expected to be further implemented at the national level.
The EU AI Act will reshape the governance of AI systems, introducing strict obligations for "high-risk" applications in sectors such as finance, health care and public administration. Luxembourg has already designated its national data protection authority, the National Commission for Data Protection, as the main supervisory authority for AI Act enforcement — thereby significantly expanding its role.
Under its broader mandate, the CNPD will act as Luxembourg's market surveillance authority, operate AI regulatory sandboxes and serve as the national point of contact for AI-related oversight. Its responsibilities will include assessing high-risk AI systems, ensuring compliance with transparency and human oversight obligations, and monitoring potential violations of rights. The CNPD's enforcement activity is therefore expected to rise sharply, especially in the financial sector where the use of AI tools for risk and fraud management is already very common.
Other converging frameworks, such as the Data Act, NIS2 Directive and DGA, are expected to further complicate compliance efforts, requiring an integrated approach across data governance, cybersecurity and AI ethics. The DGA has especially fueled discussion about the transfer of data between public authorities and entities and the implementation of the so-called "once only" principle in the public sector, one of the key data topics in the governmental coalition program. Under this principle, instead of asking the citizen concerned, an administration should first check whether the requested data can be obtained from another administration that already holds it.
In parallel, the CNPD is expected to pursue its existing domestic agenda, including ensuring compliance with video surveillance rules, safeguarding the independence of data protection officers, and maintaining registers of processing activities. These priorities reflect ongoing challenges with the EU GDPR and the preparation of Luxembourg's landscape for the forthcoming AI regulatory regime.
Overall, 2026 is expected to bring a more holistic approach to regulatory compliance and the transformation of the CNPD from a traditional data protection authority to a dual data and AI regulator with a broader scope of operation.
Contributor: Poh Teck Lim, AIGP, CIPP/A, CIPP/E, CIPM, CIPT, FIP
The Personal Data Protection (Amendment) Act 2024 took effect in 2025, marking a significant milestone in Malaysia's efforts to enhance its privacy landscape. Looking ahead to 2026, the country's privacy scene will likely undergo further consolidation and refinement, driven by the need to align with international standards.
Enhanced enforcement and breach reporting
The revised breach penalty and mandatory breach reporting requirements under the PDPA Amendment are likely to result in increased scrutiny and accountability for organizations handling personal data. The first enforcement actions under these revised requirements may occur, serving as a deterrent to non-compliant businesses.
AI and data protection: A new era of collaboration
In 2026, the Department of Personal Data Protection is expected to collaborate closely with the National AI Office. The office was established to spearhead AI development and integration across various sectors in Malaysia and to develop AI frameworks and guidance for key industry sectors that prioritize data protection and privacy.
A pivotal year
In conclusion, 2026 will be a pivotal year for Malaysia's privacy landscape. With the enforcement of new regulations, collaboration on AI initiatives and a focus on data protection, the country is poised to make significant strides in protecting personal data and promoting digital trust.
Contributor: Gabriela Espinosa Cantú, CIPP/US, CIPM
Mexico's privacy regulatory landscape underwent a significant transformation in 2025.
A new data protection authority, the Secretariat of Anti-Corruption and Good Governance, was established due to the dissolution of the National Institute for Transparency, Access to Information and Protection of Personal Data. In March, new data protection laws for both the private and public sectors were enacted: the General Law on Transparency and Access to Public Information, the General Law on the Protection of Personal Data Held by Public Sector Entities and the Federal Law on the Protection of Personal Data Held by Private Parties.
While the core principles and obligations remain largely consistent with the previous legal framework, it is expected that the regulations may introduce additional or expanded requirements. There is also a possibility that the new authority will review and propose amendments to the current laws once it has fully settled in its responsibilities.
Several new legal initiatives in Congress intend to advance a legal framework to regulate the use of AI, strengthen privacy and data protection and address digital rights challenges in the context of technological developments and their risks. The proposed reforms reflect a growing concern with the potential misuse of AI and the intent to incorporate ethical principles around its use.
Integration of AI in public security
The proposal introduces the use of AI within public security institutions to support the investigation and analysis of organized crime. Applications would include biometric recognition, financial network analysis, predictive risk assessment and automated criminal investigation systems.
Legal framework for AI governance
Several initiatives would grant Congress the authority to enact new federal laws to regulate the design, development and use of AI systems. These initiatives include ensuring that AI operates under principles of fairness, transparency and privacy, as well as establishing a regulatory authority for its implementation.
Digital rights, children's protection and cybersecurity
One initiative would classify cyber harassment as a felony and proposes criminal penalties for the unauthorized creation or distribution of AI-manipulated content. Another proposal seeks to establish the government's obligation to ensure secure internet access and to prevent AI-driven digital violence against minors, protecting their privacy, dignity and security.
Copyright
An initiative proposes recognizing the validity of works created with the assistance of AI systems, provided there is substantial and creative human participation in their development. Works created exclusively by AI without meaningful human input or those that replicate or imitate preexisting works would be excluded from such protection.
The Senate created a Commission for Analysis, Monitoring and Evaluation on the Application and Development of AI in Mexico, which has held several sessions to review different initiatives. However, there's still no clear direction on which of these will move forward through the legislative process and become effective laws.
Overall, Mexico's legislative agenda shows a growing interest in regulating AI and digital rights, but progress remains slow and uncertain. The current political agenda in Mexico is largely driven by the ruling party, so these topics will be pushed into law when they become a priority.
Contributor: Abraham Mouritz, AIGP, CIPP/E, CIPP/US, CIPM, CIPT, FIP
2025 saw many developments that will further shape the legal data landscape in the Netherlands for 2026.
TikTok decision
On 7 Oct. 2025, the Amsterdam Court of Appeal rendered a key class action decision against TikTok under the Dutch Act on the Settlement of Mass Damages in Collective Actions, also known as the WAMCA regime.
The court set aside the District Court's prior finding that claims for non-material damages were unsuitable for a collective action due to their individual nature. The Court of Appeal ruled that the claims for non-material damages are, in fact, based on the same factual violations and legal standards and therefore can be assessed collectively.
It also affirmed court jurisdiction over all non-GDPR-based claims and ruled that the lead claimant could represent minors who would join the action later. This ruling is highly relevant for other collective actions in the Netherlands, particularly those concerning privacy and consumer rights against Big Tech companies.
The Appellate Court's decision to allow collective claims for non-material damages will significantly lower the admissibility hurdle for future data privacy WAMCA cases. This is consistent with the Court of Appeal's September 2024 ruling in the joint case against Oracle and Salesforce. The ability to claim all damages — both material and non-material — in a single collective procedure is seen as a way to promote judicial efficiency; it will likely lead to an increase in WAMCA class actions seeking compensation for privacy violations.
NIS2 and e-Evidence Directives
EU directives require implementation in national laws and, therefore, leave more room for country-to-country differences than EU regulations do. The transposition of the e-Evidence Directive and the NIS2 Directive into law will present distinct challenges and nuances.
The Netherlands will not meet the original 2024 transposition deadline; the new Netherlands Cybersecurity Act is now expected to enter into force in the second quarter of 2026. This delay places the country among a group of slower-moving EU states, offering a temporary grace period for companies newly within scope.
In 2026, the Cybersecurity Act's supporting legislation will likely mandate stricter, sector-specific security standards in areas like health care and life sciences. This focus on sector-specific stringency will result in higher compliance costs for certain entities in the Netherlands compared to other EU countries opting for minimum compliance.
The e-Evidence Regulation, related to the e-Evidence Directive, is set to apply 18 Aug. 2026, with the directive transposition deadline in February 2026.
The transposition of the e-Evidence Directive will heavily emphasize procedural safeguards in 2026. A more rigorous national approach to protecting legal professional privilege, driven by strong domestic concerns and court rulings on covert production orders, is expected. This emphasis on legal professional privilege protection will make the judicial process a stronger guarantor of fundamental rights in cross-border evidence requests, uniquely positioning the country against EU member states with less rigorous or delayed filtering mechanisms.
Contributor: Daimhin Warner, CIPP/E
Biometrics, or, more specifically, the regulation of biometric processing, will be the key theme for 2026 in New Zealand. While the rest of the world grappled with how best to regulate emerging technology, New Zealand quietly took concrete steps to do just that.
On 21 July 2025, the Office of the Privacy Commissioner issued the Biometric Processing Privacy Code 2025, which regulates how organizations use biometric technologies to collect and process biometric information.
The code is law, issued by the privacy commissioner under Part 3 of the Privacy Act 2020. It came into force 3 Nov. 2025, but organizations already using biometric systems have a nine-month grace period to comply with the new rules. That period ends 3 Aug. 2026.
The code broadly applies to all organizations in the public and private sector that collect biometric information for biometric processing. It adapts the Privacy Act's existing 13 information privacy principles to specifically apply to biometric processing. The code alters the IPPs in some very material ways, intended to address the specific risks of biometric processing, including the introduction of necessity and proportionality assessments.
The OPC has published extensive guidance on the code and, specifically, the completion of proportionality assessments, which it accepts will be new territory for many organizations. However, the OPC has indicated it will be closely watching compliance, using its inquiry powers where necessary to investigate systemic non-compliance with the code. Privacy Commissioner Michael Webster has stated he will be looking for evidence and data to back up proportionality assessments.
It will be worth watching New Zealand in the coming year as both organizations and the OPC grow accustomed to the regulation of biometrics, offering a potential source of guidance, precedent and expertise in this area.
On 1 May 2026, a new IPP3A, inserted by the Privacy Amendment Act 2025, will come into force. Information Privacy Principle 3 of the Privacy Act currently applies only when an organization collects personal information directly from the person concerned. Information Privacy Principle 3A expands the transparency obligation to also apply to the collection of personal information from third parties, referred to as "indirect collection."
While not as consequential as the Biometric Processing Privacy Code, this amendment — required in part to ensure New Zealand could maintain its EU adequacy status — is important to the OPC, which will be monitoring compliance in 2026 and beyond.
It is unlikely 2026 will see any further major legislative changes, which is fortunate, as organizations will have more than enough to contend with managing the new transparency obligations and getting biometrics use in line with the new rules.
It is possible, however, that the coming months will bring some policy maneuvering — a necessary precursor to law change — on the contentious topic of child access to social media, informed by Australia's experience.
Contributor: Juan Castillo and León Weinstok
In 2026, Nicaragua is expected to continue developing its personal data protection framework, primarily grounded in Law No. 787, the Law on the Protection of Personal Data, which was enacted in 2012.
The law serves as the principal legal framework regulating the processing of personal data in the country. Despite its comprehensive provisions, practical enforcement remains limited, as no national data protection authority has yet been established under the law. Sanctions for non-compliance are administrative in nature and mainly address violations such as data falsification and security breaches.
Complementary sectoral and constitutional provisions support this framework but remain fragmented, leading to incomplete and inconsistent protection — particularly regarding cross-border data transfers and the clear delineation of data controller responsibilities. Ongoing discussions within the telecommunications, banking and health care sectors reflect a growing awareness of data governance challenges.
Looking ahead, Nicaragua's digital transformation — driven by the expansion of digital financial services and e-government initiatives — is accelerating the need for coherent public policy on privacy and cybersecurity. Although comprehensive legislative reform is not anticipated in 2026, the continued expansion of these digital services may catalyze the creation of a national data protection authority and the adoption of stronger compliance mechanisms.
Collectively, these developments would enhance Nicaragua's alignment with modern data protection principles and better position the country for more effective privacy regulation in the near future.
Contributor: Ridwan Oloyede, CIPP/E, CIPM, FIP, Precious Nwadike, CIPP/E, CIPM, and Victoria Adaramola, CIPP/E
After a year defined by record enforcement and the landmark Nigeria Data Protection Act General Application and Implementation Directive, the country's data protection landscape in 2026 will be characterized by a maturation of oversight and a flurry of complex legislative activity. The era of passive compliance is ending as organizations shift toward operationalizing rules and navigating a multiregulator environment.
The Nigeria Data Protection Commission is set to intensify its enforcement, leveraging the GAID. Its actions suggest a move toward a more risk-based strategy, prioritizing high-risk sectors and imposing stricter measures on the private sector, potentially leveraging "naming and shaming" tactics. Businesses also anticipate crucial guidance on ambiguous areas, particularly international data transfers. This clarity is expected as Nigeria operationalizes its recent admission as an associate member of the Global Cross-Border Privacy Rules Framework.
Legislative activity will be a critical focal point. Finalization is expected of the long-touted National AI Strategy; progress is anticipated on various AI-specific bills such as the Digital Sovereignty and Fair Data Compensation Bill, National Digital Economy and E-Governance Bill, National AI Commission (Establishment) Bill and four consolidated AI bills.
Concurrently, two proposals to amend the NDPA — targeting foreign social media platforms and app developers — are expected to advance alongside a new proposal for a cybersecurity law and the potential reintroduction of the Digital Rights and Freedom Bill.
A significant push for child online safety is anticipated with expected progress on the Child Online Access Protection Bill, amendments to the Cybercrimes Act and new rules from the Nigerian Communications Commission, including its draft Internet Code of Practice and Standard Operating Procedure for Child Online Protection.
Additional layers are expected as domain and sector-specific regulators and court interpretations contribute to the framework. 2026 will challenge organizations to move beyond check-box compliance and embed data governance as a core strategic function.
Contributor: Lia P. Hernández Pérez
In 2025, the use and regulation of AI was the topic dominating legislative discussion on data protection and privacy in the Republic of Panama.
Three draft bills have been presented by three congressmen from three different parliamentary groups and are awaiting discussion in the National Assembly. None of the drafts have been approved or reached committee stage for debate.
The first bill was presented in August 2024 by a group of deputies led by Deputy Manuel Cheng; it establishes the legal framework, promotion and development of AI. Presented in early July 2025 by Deputy Ernesto Cedeño, the second bill seeks to regulate AI. The third bill, presented through the citizen participation procedure, establishes a framework law for the protection of citizens' digital rights in relation to AI.
For its part, the National Secretariat for Science, Technology and Innovation, with the support of the Georgia Institute of Technology, has initiated a process to develop the first National AI Strategy. This initiative involves training various sectors of the country and developing the strategy through multisectoral working groups to address the needs of all segments of society in relation to the use of this emerging technology.
It should be noted that a group of university students has presented a bill to establish a digital protection system and restrictions on access to social media information for children and adolescents. The National Assembly's Commission on Women, Children, and the Family has begun to debate the bill, and it is currently before an ad hoc commission for changes before continuing the legislative process.
Four years after it entered into force, Law 81, also known as Panama's Personal Data Protection Law, continues to be passively applied by some sectors. The insurance and reinsurance sector has adapted its data protection regulations after the Superintendence of Insurance and Reinsurance issued Agreement No. 5 in August 2025, establishing privacy guidelines.
As predicted in the IAPP Global Legislative Predictions 2025, Panama finally adopted Law 478 with its approval and publication in the Official Gazette. This law brings national legislation into alignment with the Budapest Convention on Cybercrime, addressing offenses such as online extortion, the dissemination of non-consensual intimate images and online identity theft.
Here's hoping that Panama will finally have some legal text, a national strategy or law on AI by 2026.
Contributor: Cecilia Abente Stewart
In November 2025, Congress finally passed the long-awaited Law No. 7593/2025 on Personal Data Protection in Paraguay, a milestone in the country's efforts to build a modern privacy framework. The law was subsequently promulgated by the Executive Branch, completing the legislative process; the law will enter into force in 2027.
The new law draws inspiration from the EU GDPR and establishes fundamental principles for data protection, including a comprehensive set of rights for data subjects, obligations for data controllers and processors, rules governing international data transfers and the establishment of an independent supervisory authority.
Law No. 7593/2025 comes into force in two years, giving both the public and private sectors time to prepare for compliance. The coming months will be crucial for establishing the new authority, drafting the regulatory decree and helping organizations gradually adapt to the new requirements. Awareness and training initiatives are expected to expand significantly throughout 2026.
After several years of debate, Paraguay is on track to join the growing group of countries in Latin America with comprehensive data protection laws, reinforcing the country's commitment to accountability and good governance.
Contributor: Catherine Escobedo Paredes
Peru enters 2026 amid significant political and security turbulence. In October 2025, Congress unanimously impeached the sitting president following a severe crisis marked by rising crime and gang violence. General elections are scheduled for 12 April 2026.
Compounding these challenges, a series of major data breaches in 2025 exposed critical weaknesses in Peru's digital infrastructure. The April 2025 breach of the National Registry of Identification and Civil Status compromised 146,199 biometric facial recognition images and was followed months later by the publication of the initial electoral roll containing the home addresses of more than 27 million Peruvians — an alarming incident in a context of rising extortion and organized crime.
Earlier breaches affecting a major bank and a telecommunications provider intensified pressure on lawmakers. Combined with the electoral cycle, these failures are driving renewed legislative urgency around data protection, cybersecurity and emerging technology governance.
The new Regulations of the Personal Data Protection Law took effect in early 2025, introducing the obligation to designate data protection officers with phased deadlines based on company size — the first of which fell on 30 Nov. 2025. However, the government only published detailed directives on DPO designation — including risk-based frameworks and phased timelines — on 31 Dec. 2025, leaving early adopters to designate data protection officers without complete guidance. Significant gaps remain around breach notification procedures and the status of existing DPO designations.
Meanwhile, Ministerial Resolution No. 342-2025-JUS, published in October 2025, should become binding in early 2026, finally providing the National Authority for Personal Data Protection with a predictable framework for calculating fines related to violations. As organizations face their first compliance audits under these new rules, regulatory clarifications addressing implementation challenges are likely to emerge throughout 2026.
Institutional reform debates are also intensifying. Draft Bill 12926/2025-CR, which would create a powerful Digital Transformation Superintendency, absorbing the ANPD and other digital governance functions, is widely viewed by stakeholders as overly broad and institutionally overreaching. The proposal is expected to face significant debate and is unlikely to advance as drafted.
On the cybersecurity front, Peru's long-delayed proposed cybersecurity law, Bill 9906/2024-CR, faces an uncertain path despite being mandated by the 2026–28 National Cybersecurity Strategy. Poor coordination with recently enacted regulations — the PDPL Regulation and the Digital Trust Framework Regulation — creates overlapping incident-notification and governance requirements. Significant amendments are likely before any plausible adoption in 2026.
AI-related legislative activity remains intense, with more than 35 bills under discussion, placing Peru among the regional leaders in AI legislation.
A highly controversial proposal to establish a national DNA database for all criminal suspects and prisoners appears poised for passage despite weak safeguards and international criticism. Similarly, bills being discussed by the Justice and Human Rights Commission of the Congress would modify the Criminal Procedural Code to incorporate biometric fingerprint and facial verification as police identity control tools. One proposal includes a modification to Article 13 of the PDPL allowing such processing without consent. Both measures reflect growing public pressure for security-focused solutions that override privacy safeguards.
Other proposals, such as a draft bill on AI supervision and transparency, remain stalled and are unlikely to advance in 2026 amid legislative overload and overlapping initiatives.
Overall, legislative activity in 2026 is expected to focus on implementing the newly clarified PDPL regulations.
Contributor: Irish Salandanan-Almeida, CIPM
The Philippines' data protection authority, the National Privacy Commission, welcomed a new privacy commissioner and chairman, Johann Carlos S. Barcena, and a new deputy commissioner, Jose Sutton Belarmino II, in 2025. Barcena is a career public servant and governance professional, having previously served as executive director at the Governance Commission for Government-Owned or Controlled Corporations. His extensive work in institutional reform and oversight offers a strategic foundation for advancing the country's data protection framework.
Meanwhile, Belarmino brings extensive expertise and deep institutional knowledge in data privacy and security to the NPC. He previously served in the public sector in the Philippines and the private sector in Europe, focusing on global privacy compliance.
In October 2025, the NPC issued a cease-and-desist order against a technology company for collecting biometric data in exchange for monetary incentives. While the NPC supports technological innovations, it emphasized that, "The promise of technology must never come at the expense of human dignity and data protection."
As AI continues to emerge as a top priority across industries, the NPC reminded organizations "that data privacy is a fundamental right, and that innovations involving personal data such as biometric data, must operate within the bounds of lawful, fair, and transparent processing."
Cybersecurity remains a key priority with the NPC urging heightened public vigilance following reports that user data from a financial technology company was exposed on the dark web in the fall of 2025. While the NPC's investigation concluded there was no data breach and that user data remains secure, it issued a warning to individuals and groups engaging in the unauthorized access, sale or distribution of personal data. The NPC stated such acts constitute clear violations of the Data Privacy Act of 2012 and are punishable under the law.
With the proliferation of SMS scams, particularly those using international mobile subscriber identity catchers — fake base stations that mimic legitimate cell towers — the government and the private sector continue to intensify protection efforts. Organizations have begun leveraging AI, analytics and automation to detect and block scam activities. Amplified awareness campaigns remain key in successfully informing the public about how to protect themselves from scams. A regional approach to intelligence sharing and incident reporting is likewise being considered.
Contributor: Piotr Lada, CIPP/E, CIPP/US, CIPM, CIPT, FIP
In 2026, Poland is expected to see notable developments in data protection, cybersecurity and AI regulations, both through national updates and the implementation of new European frameworks.
In terms of national regulations, Poland's data protection authority, Urząd Ochrony Danych Osobowych, took steps to initiate legislative changes that may materialize in 2026. This applies to regulations relating to, among other things:
- The role of the DPO in law enforcement agencies.
- The functioning of the National Police Information System and the use of operational control by the police.
- Responsibility for the dissemination of false content in the form of deepfakes and ensuring effective protection of personal data.
- Disclosure of medical records for scientific purposes and access to the personal data required to be entered in the Register of Entities Performing Medical Activities.
- Law enforcement access to data covered by telecommunications protections in the Electronic Communications Law to safeguard privacy and personal data protections.
- A model of universal access to entrepreneurs' personal data stored in the Central Register and Information on Economic Activity.
- Reservation of a minor's national identification number by parents or legal guardians to prevent unauthorized use, such as fraud or identity theft.
- Increasing the level of protection of personal data contained in electronic land and mortgage registers.
In 2026, the Supreme Administrative Court of Poland is expected to rule on a precedent-setting case concerning whether national statutes of limitations — under the Polish Code of Administrative Procedure — can be applied to administrative fines resulting from GDPR provisions. Poland's DPA argues the GDPR does not provide for a statute of limitations on administrative fines and does not delegate the power to introduce one to member states, thus applying national statutes of limitations may violate the principles of uniformity and effectiveness of EU law.
The DPA also requested that the Supreme Administrative Court consider referring a preliminary ruling to the Court of Justice of the EU to ensure a uniform interpretation of EU law.
In 2026, Poland's DPA is expected to publish updated guidelines on data processing in employment, which will help employers ensure compliance with the GDPR and labor laws related to personal data processing during recruitment and employment. The guidelines will consider technological changes, including those resulting from AI.
The planned employment guidelines reflect a broader trend of heightened DPA activity in this area. In 2025, it published guidelines on reporting personal data breaches and data protection impact assessments.
In the European context, Poland is still awaiting adoption of a national law implementing the NIS2 Directive. The government published the ninth version of the draft law amendment in October 2025.
Legislative work also continues on the adoption of national regulations implementing the EU Data Act. The Data Act was directly applicable in all EU member states beginning 12 Sept. 2025, with some exceptions. However, the adoption of the national law will enable its full and effective application in Poland, primarily by creating an institutional framework and a system of sanctions.
In August 2025, the government published a draft law on AI systems, implementing the AI Act, which is already in force in the EU. The draft law specifies, among other provisions, the organization and oversight of the market for AI systems, as well as the procedures addressing non-compliance with national and EU AI regulations.
In summary, Poland is actively working on national legislation to complement and align with the GDPR and other applicable EU legislation and frameworks. The results of these legislative processes are expected to materialize in 2026 — creating new challenges for private and public entities while appropriately addressing personal data protection and compliance in respective areas.
Contributor: Pedro Marques Gaspar, AIGP, CIPP/A, CIPP/C, CIPP/E, CIPP/US, CIPM, CDPO/BR, FIP
Portugal is expected to take a deliberate, forward-looking approach to implementing the EU AI Act. The country will likely observe regulatory developments in other EU markets before consolidating its national framework — much as it did during the early years of EU GDPR enforcement. While the AI Act entered into force 1 Aug. 2024 and will be largely applicable by 2 Aug. 2026, the coming year will be decisive for Portugal's institutional and regulatory readiness.
Throughout 2026, national efforts are expected to focus on building the administrative capacity and interagency coordination required for the AI Act's operational phase. The National Communications Authority, ANACOM, has been designated as one of the authorities responsible for AI supervision. Oversight, however, will be shared among several entities, including Portugal's data protection authority, the National Data Protection Commission, and other sectoral regulators designated under Article 77 of the AI Act.
The CNPD is expected to retain a leading role where AI systems involve the processing of personal data or the protection of fundamental rights. Coordination between ANACOM, the CNPD and other regulators will be essential to manage overlaps between the AI Act, GDPR, NIS2 Directive and Digital Services Act.
In parallel, 2026 is likely to mark the first phase of national guidance and enforcement preparedness. Regulators will hopefully publish interpretative materials, sectoral guidelines and possibly launch regulatory sandboxes to support compliant innovation. The National Communications Authority and the CNPD may also collaborate with the National Cybersecurity Centre to harmonize approaches to AI-related risk management, incident reporting and digital resilience. These efforts will shape how Portugal operationalizes the AI Act's risk-based framework and ensure consistent supervision across sectors.
Data protection impact assessments will remain central to compliance for high-risk AI systems involving personal data, embedding privacy-by-design principles into technical development. In parallel, providers and deployers will be required to carry out conformity assessments before market placement, including comprehensive documentation such as model cards, decision logs and risk registers. These mechanisms are expected to become best practices demonstrating accountability, transparency and human oversight in AI governance.
Moreover, Portugal's strategic vision for ethical and human-centric AI, as articulated in the AI Portugal 2030 Vision strategy, is likely to advance through sector-specific initiatives. Key industries such as health care, financial services, urban transformation and sustainable energy are expected to feature prominently, reflecting the government's ambition to position the country as a testbed for responsible AI experimentation under EU-wide standards.
The revised EU Product Liability Directive, already in force, must be transposed by 9 Dec. 2026, reinforcing accountability and consumer protection in AI-driven products and services. The EU's proposed AI Liability Directive, still under negotiation, is expected to complement this framework in the years ahead.
Together, these developments will define Portugal's regulatory scene for AI and digital services as the country transitions from strategic planning to operational enforcement in 2026.
Contributor: Adriana Neagu, AIGP, CIPP/E, CIPP/US, CIPM, FIP
Romania has undertaken several legislative initiatives to regulate areas that have gained traction on the public agenda, including establishing guidelines for AI, preventing the dissemination of harmful or toxic content on social media platforms and regulating children's access to digital content and services.
Among these initiatives, those with the strongest likelihood of becoming law are proposals aiming to enforce stricter regulations for children online, following models in countries such as Germany and the U.K. Two proposals — a draft age of consent law that addresses online service providers and web pages and a draft law on the protection of minors under 18 from harmful content distributed on very large online platforms — are progressing through the legislative process.
The first law grants parents or guardians the right to request that online service providers or website administrators suspend or delete accounts of children under 16 or restrict their access to web pages with harmful content. Harmful content includes commonly recognized categories such as tobacco, alcohol, hate speech and self-harm, but it also encompasses some areas interpreted as health-damaging behaviors or any other forms of content that may pose a significant risk to physical or emotional development.
The proposal targeting VLOPs introduces additional specific obligations, such as implementing active parental control measures, establishing age verification protocols and ensuring transparency in the Romanian language. Furthermore, the law proposes prohibiting the monetization of content featuring minors without the consent of parents or guardians.
An additional proposal has been put forward regarding VLOPs to address the dissemination of illegal content, including material that incites hatred or manipulates major national issues through malicious technological means intended to mislead public opinion. The proposed obligation requires the removal of any illegal content within 15 minutes of its publication by employing specialized identification algorithms. Critics have argued this requirement is nearly impossible to fulfill in practice. Further provisions seek to limit the reach of such posts.
Contributor: Fabiola Uwera
Rwanda continues to cement its status as a leading technology hub in Africa. Following a year defined by the launch of the continent's first AI Scaling Hub and the publication of the National Data Sharing Policy, 2026 will see the government pivot from foundational initiatives to robust operationalization. The focus now turns to embedding AI and data governance across the public and private sectors.
Legislative reform will be a primary driver of this shift. Completion of a comprehensive review of the information and communication technology legal framework is expected to update provisions that currently lack specific oversight of emerging technologies. This overhaul is anticipated to bring significant changes to legislation governing electronic technologies, cybercrimes and cybersecurity, among others. These updates will likely introduce dedicated provisions for AI and other emerging technologies, ensuring the regulatory environment keeps pace with the digital landscape.
The Data Protection and Privacy Office, housed within the National Cyber Security Authority, is expected to transition from an awareness-building phase to active enforcement. After a year of extensive public consultation, new sector-specific guidance notes are expected to be released. Crucially, the office is poised to enforce strict compliance obligations, including mandatory registration and authorization for cross-border data transfers. It is also expected to clarify data subject rights and set clear breach response protocols.
The establishment of the Responsible AI Office under the National AI Policy signals a move toward agile regulation. Aligned with the 2023 policy, the government has been revising its legal framework and holding consultations to gather firsthand input on ethical AI guidelines under development. In 2026, a new law is expected to be introduced that creates a coordinating authority to oversee AI deployment and align national standards with international best practices. This domestic effort will be bolstered by Rwanda's continued global leadership, building on the success of the Global AI Summit, Africa Declaration on AI, and AI Playbook for Small States developed in partnership with Singapore.
Finally, the continued rollout of the national digital identity project will remain a priority with expected guidance on safeguards to ensure privacy and security.
Overall, 2026 promises to be a year in which Rwanda balances rapid innovation with accountability. By integrating updated regulations with robust ethical frameworks, the country is set to influence not only national policy but also the broader continental approach to responsible digital governance.
Contributor: Muneeb Imran Shaikh
Data privacy
Saudi Arabia issued its Personal Data Protection Law and Implementing Regulations in 2023, and the law was fully enforceable by September 2024. Recent years have been marked by continuous enhancement and issuance of regulatory guidelines to help organizations understand and embed data privacy within their operations.
2026 will be characterized by increased enforcement of the PDPL by the Saudi Data and AI Authority. This includes increased registration of controllers, particularly public bodies and entities processing personal data.
In recent years, the SDAIA has also launched services on its national data governance platform. Entities processing personally identifiable information will likely be required to submit privacy impact assessments and report data breaches as part of the data protection authority's enforcement function.
With the increased adoption of cloud applications, platforms and infrastructure globally, companies are often compelled to host their data outside their jurisdiction. The SDAIA issued regulations and risk-assessment guidelines on PII transfer outside Saudi Arabia. It is likely the DPA will issue a list of countries and jurisdictions deemed adequate, supporting economic growth and facilitating organizations' ability to conduct a sound transfer impact assessment.
AI governance
Under its Public Investment Fund, Saudi Arabia launched the AI company Humain to develop AI solutions and technologies and to invest in the broader AI ecosystem. This initiative will give a huge boost to AI growth and adoption within the country, meaning associated standards and regulations regarding AI ethics and governance are likely to be developed and made enforceable in the coming years.
It is important to mention that the SDAIA has already published AI ethics principles and controls. With the increased adoption of AI within Saudi Arabia, the SDAIA and other regulators are likely to enhance requirements around transparency, recordkeeping, explainability and auditability of AI systems.
This is expected to prompt various organizations to either create independent functions for AI ethics and governance or empower existing functional units within organizations to establish and implement AI governance.
Contributor: Petar Mijatović
Building on the strategic frameworks adopted in recent years, Serbia is expected to make further progress in the areas of privacy protection, AI governance and digital responsibility throughout 2026.
The Personal Data Protection Strategy for 2023–30 outlines key goals, including the regulation of automated processing of genetic and biometric data and oversight of audio and video surveillance. While substantial implementation of these measures has not yet occurred, more concrete actions are expected in 2026, such as amendments to the Law on Personal Data Protection or the introduction of special regulations addressing emerging technologies.
Public authorities, including the Commissioner for Information of Public Importance and Personal Data Protection, may further engage citizens through media, social networks and open data platforms to promote awareness and compliance.
The AI Development Strategy for 2025–30 establishes goals for responsible AI use, focusing on strengthening institutional and legal frameworks, investing in human capital and supporting technological innovation.
Based on these strategies, 2026 is likely to be a foundational year for AI governance in Serbia. Preparatory work for drafting a dedicated AI law and forming specialized working groups may commence, guided by alignment with EU regulations and United Nations Educational, Scientific and Cultural Organization guidelines. These steps aim to regulate market applications, prevent misuse and promote the ethical development of generative AI.
Digital responsibility is increasingly emphasized in both the AI and personal data protection strategies with human-centric principles — including ethics, human rights, non-discrimination and proactive risk mitigation — guiding AI deployment. In 2026, it is anticipated that Serbia may begin initiatives such as public awareness campaigns, industry guidelines and monitoring tools to foster responsible digital practices. Investments in education and training may further support the development of skills needed to responsibly navigate AI and digital technologies.
Overall, while the exact outcomes cannot be guaranteed, 2026 is projected to be a critical year in moving from strategic planning toward tangible governance actions. By reinforcing privacy protections, establishing robust AI governance structures and promoting digital responsibility, Serbia is likely to advance its commitment to safe, ethical and human-centric technological development.
Contributor: Pranav Rai
Singapore's 2026 digital policy agenda signals a calibrated shift toward trust, accountability and global interoperability while preserving its innovation-driven reputation. Expect refinements across privacy, AI governance, and cybersecurity, with legislation introduced only where essential.
The Personal Data Protection Act will continue to be the cornerstone of the country's privacy regime; enforcement powers now allow penalties of up to SGD1 million (approximately USD776,602) or 10% of annual turnover. Although data portability has long remained dormant, 2026 could mark its activation, bringing it in line with global standards.
The elevation of the Data Protection Trustmark to a national standard underscores certification-based accountability. Cross-border data transfer frameworks will deepen, building on the Association of Southeast Asian Nations-European Union cross-border dataflows initiative. Rather than introducing sweeping new laws, Singapore's Personal Data Protection Commission is expected to issue targeted guidance — such as the Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems — to clarify obligations in emerging contexts like generative AI.
In AI governance, Singapore will maintain its light-yet-vigilant approach. The Model AI Governance Framework for Generative AI and the AI Verify sandbox allow companies to test systems against 11 trustworthy principles mapped to global standards. These voluntary programs are expected to inform technical standards and sectoral guidelines. Legislative intervention will remain narrow, exemplified by the Elections (Integrity of Online Advertising) (Amendment) Act 2024, which bans deepfakes during campaigns.
In 2026, anticipate co-regulatory models that blend voluntary compliance with enforceable norms for high-risk AI applications.
Amendments to the Cybersecurity Act expand oversight beyond critical information infrastructure to include cloud and digital service providers, introducing mandatory incident reporting and codes of practice. A civil penalty regime — with fines up to 10% of turnover — is now operational. The Operational Technology Cybersecurity Masterplan 2024 extends resilience to non-CII sectors such as health care and utilities, signaling broader systemic safeguards.
Singapore is expected to double down on principled regulation. Privacy enforcement will sharpen; AI governance will evolve through co-regulation; and cybersecurity will broaden in scope. The city-state will continue to legislate only where necessary, reinforcing its global reputation as a trusted, innovation-ready digital hub.
Contributor: Lukáš Mrázik, CIPP/E, CIPP/US, CIPT, and Lucia Korba
Slovakia is preparing for a year of significant legislative developments, particularly in the areas of data protection, AI, cybersecurity and related criminal law. These changes reflect both the country's internal priorities and its efforts to align with evolving EU regulations.
While Slovakia's Personal Data Protection Act is expected to be replaced, the impact of the changes is anticipated to be mostly minor and technical. More importantly, a considerable shift is predicted in enforcement practices. The Office for Personal Data Protection has become more active, especially in imposing higher fines. While penalties have traditionally ranged in the lower thousands, a substantial increase in the amounts imposed is expected, signaling a stricter regulatory approach.
2025 saw the implementation of the NIS2 Directive into Slovakia's Cybersecurity Act. This expanded the scope of regulated entities and introduced stricter obligations. In 2026, the National Security Authority is expected to intensify inspections and oversight, focusing on compliance with the cybersecurity law.
AI is another area undergoing rapid legislative evolution. The country is working on a national framework to support the adoption of AI in both public and private organizations. Alongside this, initiatives related to the AI Act, Data Act and DGA are progressing.
Criminal law is evolving to address emerging technological threats. The criminalization of harmful deepfakes is a key development; the legislation targets cases where such content causes substantial damage. In addition, new legislation is being introduced to reshape data-sharing practices with law enforcement in the digital space. These proposals include expanded obligations for sharing data during criminal investigations; implementation is expected within the next year.
Overall, 2026 promises to be a transformative year for Slovakia's digital and legal landscape in the areas of privacy, digital responsibility and AI governance.
Contributor: Armand Swart, CIPP/E, CIPM
South Africa's privacy landscape is set for a pivotal year in 2026 as initiatives from 2024–25 crystallize into binding obligations, sector-specific rules and more stringent enforcement.
Direct marketing reset
South Africa's Information Regulator has issued a guidance note and amended regulations addressing direct electronic marketing, expressly including voice calls. These instruments tighten consent standards and strengthen opt-out mechanisms. They also clarify the respective rights and obligations of organizations and data subjects and signal how the IR intends to enforce these requirements.
Increased access and monitoring
Newly amended regulations to South Africa's data privacy law, the Protection of Personal Information Act, simplify and clarify the exercise of data subject rights. For example, objections to personal data processing may now be lodged free of charge via multiple channels, including hand delivery, post, fax, email, SMS, WhatsApp or telephone.
The IR's eServices portal, launched in 2024, will underpin more visible compliance monitoring into 2026, particularly around information officer registration, breach reporting and access to information obligations.
A prevalent theme: Data breaches
While 2025 saw fewer published enforcement notices for breaches, data breach enforcement is expected to remain prominent in 2026 given the risks involved. The IR's communication executive reported that the IR received 1,607 breach reports between April and September 2025, a 60% increase from 2024. This increase may partly be attributed to the mandatory use of the eServices portal for breach reporting.
Cybersecurity for financial institutions
The financial regulators' joint standard on cybersecurity and cyber resilience — effective June 2025 — applies to banks, insurers, exchanges, retirement funds and investment administrators. Institutions have until Dec. 2026 to implement enhanced cybersecurity measures, including breach notification, incident response, risk assessment and security controls. Increased enforcement and potentially larger fines are anticipated.
Health care data: Critical care required
Draft regulations on processing health care data published in 2025 would apply to insurers, medical schemes, employers and pension fund administrators. A final version is expected in 2026. The draft imposes stricter requirements on the retention and destruction of personal data, cross-border transfers and consent. Critics raised concerns about burdensome consent provisions and the reliance on legitimate interests for processing health care data, which the POPIA does not permit.
Cloud computing
While the government's National Policy on Data and Cloud 2024 primarily reinforces principles in existing legislation — such as the POPIA and the patchwork of cybersecurity laws and directives — its implementation is likely to drive more structured approaches to data residency, sovereignty and public sector procurement. Organizations should expect closer alignment of cloud deployments with the POPIA's cross-border transfer rules, clearer shared-responsibility expectations between customers and providers, and potential requirements for transparency, auditability and incident reporting.
Lacunae: AI and the protection of children
Despite the publication of the National AI Policy Framework in 2024, there was no material movement on AI in 2025. While the absence of regulation may foster innovation, clarity would be welcome. Similarly, domestic efforts to protect children's data and technology use have not kept pace with global advances.
Contributor: Kyoungjin Choi
In 2025, South Korea experienced a period of political upheaval following a presidential impeachment. However, with the inauguration of the new administration, stability was restored and the government placed AI at the top of its national policy priorities, mobilizing all resources to achieve the goal of earning "AI G3 status."
South Korea enacted its own comprehensive AI Framework Act, making it the second jurisdiction in the world — after the EU with its AI Act — to enact a comprehensive regulatory law on AI.
In the field of data privacy, the Personal Information Protection Commission released the "Guidelines on the Processing of Personal Information for the Development and Use of Generative AI" and introduced a "Preliminary Adequacy Review System" to ensure safe data utilization at the planning and development stages of new technologies and services, including AI. This action demonstrates a strong commitment to fostering AI innovation.
At the same time, a series of hacking and personal data breach incidents affected not only credit card companies and major telecommunications companies, but also public institutions. These events brought data security and privacy protection to the forefront as national issues, highlighting their role as the foundation of safety and trust in the AI era.
In response, the government announced a comprehensive pan-ministerial information security strategy outlining key initiatives: Conducting a large-scale security audit of critical IT systems closely linked to citizens' daily lives; strengthening the effectiveness of consumer-centered incident response systems and recurrence prevention measures; enhancing information protection capabilities across both public and private sectors; building an information security environment aligned with international standards while fostering the information security industry, workforce and technologies; and reinforcing a nationwide cybersecurity cooperation framework.
In alignment with this policy direction, the PIPC also announced a strategic shift from post-incident sanctions to preemptive risk prevention. Key plans include strengthening the responsibilities of CEOs and expanding the authority of CPOs, establishing legal grounds for regular inspections of large-scale personal data processing sectors, providing incentives for companies' data protection efforts such as encryption and authentication, and operating the adequacy review system to help data controllers prevent data breaches in advance.
From the perspective of enhancing individuals' data autonomy, the PIPC also plans to expand the data portability right to 10 major sectors, enact a legal framework for the "right to be forgotten," and establish a comprehensive regulatory framework for personal video information.
Regarding enforcement, the commission intends to impose higher fines — potentially of up to 10% of a company's total turnover — on companies with repeated or serious data breaches. The PIPC also plans to introduce criminal penalties for the illegal distribution of personal data online.
Based on developments in 2025, it is anticipated that two potentially conflicting legislative directions will advance simultaneously in 2026: data regulatory innovation laws that support the national G3 initiative and regulatory reinforcement legislation aimed at preventing hacking and data breaches by strengthening accountability and compliance obligations for data controllers.
From the innovation perspective, legislative efforts such as the AI data regulatory sandbox — envisaged to be implemented through amendments to the Personal Information Protection Act, Data Industry Act or AI Framework Act — are expected to be actively pursued. This initiative is designed to permit the use of original personal data in secure environments for AI innovation. In addition, the institutionalization of the Preliminary Adequacy Review System for broader use of pseudonymized data is also anticipated.
Conversely, regulatory tightening is expected to focus on strengthening CEO and CPO accountability, introducing punitive fines and establishing criminal penalties for illegal data trading. Moreover, to encourage corporate investment in data protection, amendments to allow reductions in fines or penalties as incentives may also be introduced.
Ultimately, 2026 will be a pivotal year for South Korea — a year in which the nation must accomplish what might seem impossible: harmonizing its top national goal of achieving AI G3 status with the equally critical task of ensuring the protection of citizens' personal data and privacy as the fundamental safety net of the AI era.
Contributor: Joanna Rozanska, CIPP/E, CIPP/US
Spain's data protection and digital enforcement landscape will remain highly active in 2026. Under its newly appointed president, the data protection authority, the Agencia Española de Protección de Datos, is expected to continue prioritizing cases involving emerging technologies. In 2025, the agency issued extensive guidance on topics such as the European Digital Identity Wallet under the European Digital Identity Regulation and the intersection between AI and data protection. The DPA is now preparing further materials on the use of neurotechnology and neurological data.
The AEPD plans to carry out proactive analyses of high-impact sectors — including health care, digital education, platform marketing to minors, biometric and neurotechnology systems, workplace monitoring tools and digitalized public administrations — to identify risks early and issue preventive recommendations or audits. The DPA has also made clear that it can, and will, act against unlawful AI-related data processing even before the EU AI Act becomes fully applicable. Enforcement in 2025, particularly in deepfake-related cases, already set a precedent for intervention where personal data is manipulated or generated without a lawful basis.
Child and youth protection in the digital environment remains another key area. Spain is advancing a draft bill for the protection of minors in digital environments, requiring online platforms to implement age-verification systems, establishing reporting channels for harmful content, and potentially raising the minimum age to access social media. In parallel, a "sharenting" bill will regulate how parents and guardians share images of minors online. Both initiatives align with the EU DSA framework and reinforce accountability for how platforms handle children's data and digital well-being.
On the legislative side, Spain is lagging behind other countries in transposing the NIS2 Directive. Although the bill was granted urgent status to accelerate adoption — now expected in early 2026 — political fragmentation in Parliament has slowed progress and further delays cannot be excluded.
Likewise, while the draft national AI law has been approved by the government, its final passage through Parliament may face similar timing challenges. At the same time — and in a move that quietly places Spain at the forefront of EU AI governance — the Spanish Agency for the Supervision of AI published 16 practical guidelines on the EU AI Act, establishing some of the earliest interpretative standards for key aspects of the regulation and potentially influencing future supervisory and enforcement approaches across the EU.
Internally, the AEPD intends to adopt an "AI-first" policy across its operations to automate internal processes, improve case management and detect risks earlier. It will also deploy automated tools to analyze privacy policies, cookie configurations and public code or breach databases — making public-facing documentation and technical disclosures increasingly critical from a compliance perspective.
Finally, although not formally endorsed by the AEPD, Spain is expected to align with the European Commission's call for a more flexible interpretation of privacy obligations. This will potentially ease compliance for small- to medium-sized enterprises and signal a shift toward a more balanced, innovation-friendly privacy landscape.
Contributor: Ashwini Natesan
Sri Lanka's comprehensive Personal Data Protection Act, which passed in 2022, was initially expected to come into operation in March 2025. Administrative challenges surrounding its implementation — including appointing the data protection authority's staff and other preparatory requirements — resulted in a subsequent gazette notification rescinding the earlier operational date.
On 30 Oct. 2025, Parliament passed several amendments to the PDPA. Notably, these removed the requirement for an adequacy decision in relation to cross-border data transfers. Under the revised framework, personal data may be transferred outside Sri Lanka where "appropriate safeguards" are met or where any of the other conditions for lawful processing under the PDPA are fulfilled. The amendments also clarify the definition of "public authority," closing previous gaps that left room for interpretive ambiguity.
As per the amendment, the PDPA will come into operation on a date to be announced by the minister in charge; no indication has yet been given of when that will be. The operationalization and implementation of the PDPA could be phased.
The previously appointed Data Protection Authority of Sri Lanka has a substantial mandate ahead. Public advertisements have been issued to fill key vacancies at the DPA; these appointments are anticipated to accelerate the process of operationalizing the act. The PDPA's operationalization comes at a critical juncture as Sri Lanka begins rolling out several major digital initiatives under the Ministry of Digital Economy, most prominently the Unique Digital Identity project.
Once in force, the PDPA will provide citizens with enforceable rights over their personal data and require the state, as a controller, to adopt responsible and rights-respecting data processing practices.
The ministry is also advancing other digital transformation efforts, including GovTech — the government's implementation agency for national digital transformation that replaces the Information Communication Technology Agency of Sri Lanka — and GovPay, a platform that enables online payment for various government-related transactions. In 2026, the ministry can be expected to further advance measures in this regard.
Sri Lanka's National Strategy on AI was released in 2025. Among other things, the strategy places strong emphasis on data governance, committing to the development of a comprehensive national data strategy to strengthen the availability, integrity, quality and protection of AI-ready data. It proposes new data-sharing mechanisms, revitalizes open data platforms and establishes clear guidelines for data management and protection to support evidence-based policymaking and drive private-sector innovation. It remains to be seen how the AI Strategy will move forward.
The ministry is also responsible for the Online Safety Act 2024 and the Online Safety Commission established under it. A public call for comments was issued in 2025 and reforms or amendments to the Online Safety Act can be expected in 2026. The act remains central to ongoing debates on online expression, penalties and intermediary liability. Any reforms will have significant implications for digital rights and platform governance in Sri Lanka.
Contributor: Sofia Edvardsen, AIGP
Sweden enters 2026 with an almost complete regulatory architecture for the EU's digital and AI frameworks. Under a coordinated but decentralized model led by the Swedish Post and Telecom Authority and the Civil Contingencies Agency, the country continues to rely on sectoral expertise, interagency cooperation and institutional independence. The year ahead will test how this tradition of trust-based governance scales across AI, cybersecurity, data and financial regulation.
AI Act — A multiagency model
Following SOU 2025:101, Anpassningar till AI-förordningen (Adaptations to the AI Regulation), and the government's 6 Oct. 2025 press conference, Sweden has a proposed national framework under Article 70 of the EU AI Act. The framework designates the PTS as the single point of contact and lead market-surveillance authority, the Swedish Board for Accreditation and Conformity Assessment as notifying authority, and a network of sectoral regulators — including the Financial Supervisory Authority, the Work Environment Authority, the Transport Agency, the Consumer Agency and Sweden's data protection authority, the Integritetsskyddsmyndigheten, and others — coordinated by PTS. A government bill is expected in the spring of 2026 to enable formal designations by 2 Aug. 2026.
Sweden's decentralized model may also set the tone for its Nordic neighbors. Denmark and Finland are expected to adopt similar sector-based structures, signaling an emerging "Nordic model" of AI governance — coordinated, expertise-driven and rooted in agency independence.
This approach mirrors coordination frameworks under the NIS2 Directive and the DSA. Enforcement will rely on collective institutional responsibility rather than individual liability, reflecting Sweden's constitutional preference for autonomy and proportionality in supervision.
Cybersecurity — NIS2 implementation
The Swedish Cybersecurity Act enters into force 15 Jan. 2026, expanding supervision to 18 sectors. The Swedish Civil Contingencies Agency is the national coordinator and EU contact point while sector regulators retain domain oversight. Sweden thus continues its multiagency approach, emphasizing sectoral expertise and institutional independence over centralization.
Cyber Resilience Act
A government directive proposes adjustments to the implementation of the EU Cyber Resilience Act, including small and medium-sized enterprise support. Implementation is expected to mirror Sweden's NIS2 implementation with national coordination and sector regulators enforcing compliance within their respective domains. A government bill will follow in 2026.
Data and platforms
The EU Data Act became applicable 12 Sept. 2025. Sweden has not yet designated a competent authority, but PTS is the likely candidate given its digital infrastructure and coordination roles. A government ordinance assigning enforcement responsibilities is expected in early 2026.
Under the DSA, PTS serves as Digital Services Coordinator alongside the Swedish Consumer Agency and Swedish Agency for the Media. This cooperative, nonhierarchical setup reflects Sweden's constitutional limits on ministerial control.
Financial sector — DORA
The Financial Supervisory Authority is the competent authority under the national Digital Operational Resilience Act. Sweden's central bank, Riksbanken, and FI coordinate on threat-led penetration testing and oversight of critical information and communications technology providers. This activity will intensify through 2026.
Contributor: Stéphane Droxler, AIGP, CIPP/E, CIPM
Switzerland enters 2026 with an agenda shaped by pragmatism and public skepticism. Although highly connected and technologically advanced, the country continues to move cautiously when it comes to state-led digital transformation and data-driven regulation. Three areas are expected to dominate the national debate: the implementation of the new electronic identity, the regulation of digital platforms and the government's measured approach to AI.
A second chance for the e-ID
After a first failed attempt in 2021, the electorate has now narrowly approved the revised Federal Act on Electronic Identification Services, also known as the e-ID Act. This electronic identity, expected to be available in 2026, is intended to provide citizens with a secure, state-issued digital credential to access online public and private services. However, many observers remain doubtful about the federal administration's ability to deliver a reliable and user-friendly system.
The shadow of the electronic patient record — plagued by delays, usability issues and lack of public confidence — still looms large. Whether the e-ID will become a milestone in digital trust or another example of misplaced ambition will depend on the government's capacity to combine transparency, cybersecurity and tangible public value.
Platform regulation on the horizon
The Federal Council submitted a draft law, the Federal Law on Communication Platforms and Search Engines, aimed at strengthening the rights of citizens using digital platforms. This initiative seeks to improve transparency, accountability and fairness in the operations of global online intermediaries.
The challenge is formidable: Switzerland is trying to protect its citizens without antagonizing the dominant U.S. tech players or replicating the EU's DSA — a regulatory framework that would be difficult to enforce considering the domestic market's size.
AI governance through incremental adaptation
Unlike the EU, Switzerland does not intend to adopt a comprehensive AI Act. Instead, the Federal Council favors a sector-specific approach, adapting existing laws where necessary.
Several departments have been tasked with assessing the need for regulatory adjustments within their domains and to report back by the end of 2026. This deliberate and decentralized strategy reflects Switzerland's traditional reliance on evidence-based policymaking and consensus. It also underscores a belief that overregulation could stifle innovation and weaken the country's competitive advantage in research and applied technologies.
Taken together, these initiatives illustrate Switzerland's enduring preference for balance over speed. Trust, not regulation, remains the cornerstone of its digital policy. Yet in a world where digital infrastructures increasingly determine economic resilience and public confidence, maintaining this equilibrium will be a delicate exercise.
Contributor: Ken-Ying Tseng
Taiwan's privacy framework is entering a new phase. On 11 Nov. 2025, President Lai Ching-te publicly announced a partial amendment to the Personal Data Protection Act; the effective date will be determined later. This marks the first major reform pursuant to the 2022 Constitutional Court decision on health insurance data that requires the government to establish an independent supervision mechanism of its data privacy practices.
The amendment strengthens data governance rules for government authorities and data security obligations for private and public sectors. It requires government agencies to appoint data protection officers, imposes mandatory breach-reporting and documentation duties on both public and private sectors, and introduces fines for non-compliance with the reporting obligations.
The 2023 amendments to the PDPA prescribed that the Personal Data Protection Commission will be the regulator in charge of the PDPA. The PDPC has yet to be established, however. The 2025 amendment emphasizes a transition mechanism: once the PDPC is officially formed in the near future, supervision of private-sector data practices will gradually shift from sectoral regulators to a unified authority within six years. The PDPC will be empowered to issue technical standards, conduct inspections and coordinate data protection oversight across all sectors.
However, the Organization Act of the PDPC — the legal foundation for the commission's establishment — has not been passed yet. This could occur anytime in 2026. Until then, the current multiregulator structure remains in place.
Meanwhile, the PDPA amendment will take effect only upon the government's further announcement, which is anticipated after the Organization Act is enacted and promulgated and the PDPC is formally established.
Once operational, the PDPC is expected to launch a comprehensive overhaul of the PDPA. The following areas may be subject to future reforms: anonymization and pseudonymization — defining their legal effects and clarifying when data is no longer personal; AI and privacy protection — balancing AI development with privacy protection and introducing accountability and transparency rules for AI data processing, etc.; cross-border data transfers — possibly setting stricter limits and proposing relevant requirements and exemptions; and extraterritorial application — possibly extending PDPA obligations to foreign entities processing data of individuals in Taiwan.
In the meantime, the Taiwan Food and Drug Administration, the sectoral regulator for the pharmaceutical industry, has finalized its draft order to prohibit pharmaceutical companies from transferring personal data from Taiwan to China, Hong Kong and Macau. It will take effect 1 Oct. 2026.
According to the order, there are several exceptions to the restriction, one being that a pharmaceutical company would be allowed to transfer personal data to China if, among other requirements, a standard contract — akin to those in the EU — is entered into between the transferring and receiving parties.
While Taiwan's PDPA amendment is an important milestone, the real transformation will unfold after the PDPC's establishment, expected to come in 2026. Companies should anticipate more comprehensive, AI-aware and globally harmonized data protection rules in the years ahead.
Contributor: Nop Chitranukroh, CIPP/A, Thammapas Chanpanich
Thailand's Personal Data Protection Act came into full effect 1 June 2022. The Personal Data Protection Committee has since issued various subordinate regulations.
These include regulations on security measures to be implemented by data controllers and data breach notification requirements. They also cover a mandatory obligation to appoint a data protection officer where processing activities require regular monitoring of personal data or systems due to their large scale; administrative measures; data processors' records of processing activities; data breach notifications; cross-border transfers of personal data, including criteria for adopting binding corporate rules and contractual clauses; and the handling of data subject requests to exercise the right of erasure.
In August 2025, the PDPC announced it had issued eight administrative fine orders across five cases against government bodies, private entities and data processors for inadequate personal data security. Since the PDPA took effect, there have been six cases and nine orders totaling over THB21.5 million (approximately USD684,811). In line with the "zero data leakage" goal of the Office of the Personal Data Protection Committee, all penalties stemmed from formal fact-finding and expert committee deliberations.
Key cases included a breach of a government agency's web app that exposed the personal data of over 200,000 individuals on the dark web. Both the agency and its developer lacked robust security, risk assessments and a data processing agreement. The agency and its developer were fined THB153,120 (approximately USD4,873) each.
A large hospital leaked more than 1,000 medical records during destruction due to poor vendor oversight. Sensitive health data was exposed, and retention rules were not followed. The contractor took records home and failed to report the breach to the appropriate authorities. The hospital and processor were fined THB1.2 million (approximately USD38,184) and THB16,940 (approximately USD539), respectively.
Three private-sector cases in wholesale, retail and online businesses led to fines of THB7 million (approximately USD222,740), THB2.5 million (approximately USD79,575) and THB3.5 million (approximately USD111,370) for inadequate security, failure to notify the PDPC, and, in one case, failure to appoint a DPO. The PDPC has not published full case details or orders.
Furthermore, the PDPC is revisiting the current PDPA and is in an initial stage of proposing amendments that cover various areas such as exempted activities, status and duties of local representatives, interpretation of data controller and data processor, consent and lawful basis principles, criminal data processing, categories of sensitive personal data, and power and duties of the PDPC and the PDPC office.
Enforcement in 2026 is expected to become more active and potentially more serious, meaning organizations should pay closer attention to ensuring compliance with the PDPA. Amendments to the PDPA are also anticipated.
Similar to the EU GDPR, the PDPA also has extraterritorial effect. The subordinate regulation on international cooperation to be issued by the PDPC should clarify how PDPA enforcement against organizations located outside of Thailand will be conducted by regulators.
Silence surrounds the development of sector-specific data protection laws.
Contributor: Jason Grant, CIPM
As a result of Trinidad and Tobago's April 2025 general elections, a new government that campaigned on the promise of stimulating economic growth through AI, digital technology and new media took office.
The government established the Ministry of Public Administration and AI in May 2025. The new ministry has embarked on several initiatives under the National Governance AI Framework to modernize and enhance the delivery of public services and promote responsible AI adoption across the country.
In July 2025, the ministry entered into a strategic partnership with the United Nations Development Programme and the United Nations Educational, Scientific and Cultural Organization to develop and implement UNESCO's Readiness Assessment Methodology and the UNDP's AI Landscape Assessment. A partnership with the UNDP and the Inter-American Development Bank Group created the country's first Open-Source Programme Office.
The Recommendation on the Ethics of AI was adopted by UNESCO member states in 2021, including countries in the Caribbean such as Trinidad and Tobago. The Readiness Assessment Methodology will be completed in conjunction with the UNDP's AI Landscape Assessment, a diagnostic tool that will evaluate Trinidad and Tobago's strategic and technical readiness to adopt AI.
The assessments will result in a national report establishing recommendations, data visualizations and a roadmap to support AI integration across sectors such as health care, education and public services. The ministry will also conduct an AI Landscape Survey among policymakers, AI experts, academics and private sector stakeholders to inform the findings of the assessments.
In August 2025, the ministry, through iGovTT — the National Information and Communication Technology Company Limited — launched "Anansi," a national AI chatbot to assist citizens in navigating access to public services.
It is important to note that these AI-related initiatives are occurring against the backdrop of the Data Protection Act, Chapter 22:04, which, although passed in 2011, remains only partially in effect. Additionally, the legislation is not aligned with data privacy best practices on enforcement. For example, the act does not contain a requirement for a data controller to report a personal data breach to the Office of the Information Commissioner.
As in most data privacy and data protection legislation models, the role of the information commissioner forms the nucleus of the legislation's operational, compliance, regulatory and enforcement provisions. With Chapter 22:04 of the Data Protection Act not fully in effect, the act remains largely inoperable, notwithstanding the few provisions that have been proclaimed.
That situation, however, is expected to change in 2026 given the ministry's advertisement for the information commissioner position this past year. This anticipated appointment, coupled with the AI assessment initiatives, signals that moves are underway to fully proclaim the legislation — all of which augurs well for the development of a robust governance framework within which AI adoption can advance.
Contributor: Furkan Güven Taştan
Following a 2025 that saw the establishment of a new, centralized cybersecurity framework, Turkey is set to enter 2026 with a broader legislative agenda for its digital landscape. While key priorities include completing its long-awaited data protection reform and making its new cybersecurity apparatus operational, attention is also expanding to encompass the governance of AI and the ongoing regulation of online content. These multifaceted efforts point toward an increasingly comprehensive and regulated digital environment.
A key legislative goal for 2026 is the finalization of the second reform package to harmonize the Personal Data Protection Law with the EU's GDPR. According to the presidency's objectives for 2026, the Ministry of Justice is tasked with completing this alignment. This package will address core GDPR principles, such as a risk-based approach and accountability, which were not fully covered in the initial 2024 amendments to the PDPL. The enactment of this package would conclude a multiyear effort to bring Turkey's data protection standards in line with those of the EU.
On cybersecurity, 2025 saw the enactment of the Cybersecurity Law and the establishment of the primary regulator, the Cybersecurity Presidency. With the agency's head appointed in late 2025, the focus in 2026 will shift to institutional capacity building. This includes finalizing the agency's organizational structure and issuing secondary legislation to implement the new law. It will be important to observe how the Cybersecurity Law is enforced as its specific provisions and potentially stricter penalties may position it as a primary legal instrument for IT-related offenses and breaches.
Meanwhile, other digital policy areas are also developing. After several individual proposals were introduced in 2024 and 2025, AI is expected to become a more concrete legislative topic. A parliamentary research commission on AI, established in 2025, is tasked with a comprehensive mandate: to define the steps for harnessing the benefits of AI, create the necessary legal infrastructure and determine measures to mitigate the risks associated with its use. The commission's report, expected in 2026, will likely provide a foundational analysis of AI's impact, addressing key issues such as liability and data governance.
Additionally, the regulation of online content remains a prominent issue. The long-standing Internet Law, in effect for nearly two decades, continues to be a subject of debate. As part of the "11th Judicial Package," there is a renewed effort to amend its controversial Article 9, which concerns the procedure for individuals to request the removal of online content that violates their personal rights. The proposed changes aim to create a framework that can withstand judicial scrutiny, following a previous annulment of the article by the Constitutional Court.
Contributor: Motunrayo Ope-Ogunseitan, CIPP/E, CIPM, FIP
The United Arab Emirates continues to assert its position at the forefront of global technological innovation and disruption, leading the charge in AI adoption across the Middle East.
As the nation awaits issuance of the Executive Regulations under the UAE Personal Data Protection Law, organizations are proactively working to strengthen privacy compliance frameworks and foster deeper consumer trust. Within the UAE's free zones, the Dubai International Financial Centre and Abu Dhabi Global Market have remained steadfast in advancing and enforcing their respective data protection regimes.
In 2025, the DIFC introduced pivotal amendments to its Data Protection Law, including higher administrative fines, a private right of action enabling data subjects to seek redress directly through the DIFC Courts and expanded applicability to include both DIFC and non-DIFC entities processing data within the jurisdiction, irrespective of physical establishment. The amendments also clarified data sharing obligations and introduced other substantive changes aimed at strengthening accountability.
Meanwhile, ADGM enacted the Data Protection Regulations (Substantial Public Interest Conditions) Rules 2025, which permit the processing of personal data without consent for insurance purposes and safeguarding vulnerable individuals. It also introduced updated administrative enforcement procedures to enhance regulatory efficiency.
The UAE made notable progress on AI governance in 2025 by embedding the principles of its Charter for the Development and Use of AI into national policy frameworks and aligning public and private-sector AI initiatives with the ethical values it espouses — such as human oversight, transparency and bias mitigation. These efforts reflect the country's commitment to responsible AI deployment. Notably, the Abu Dhabi government announced its ambition to become the world's first fully AI-powered government by 2027, marking a bold step toward integrating AI into public administration.
Looking ahead to 2026, it is anticipated that the long-awaited Executive Regulations under the UAE PDPL may be issued, activating full enforcement of the law. Should this occur, significant improvements in data processing practices are expected across industries, particularly in high-risk sectors such as telecommunications, health care, hospitality and real estate, which routinely handle large volumes of personal data. Key compliance priorities will include consent management, purpose limitation, cross-border data transfer restrictions and marketing governance.
The UAE Data Office, tasked with overseeing enforcement, is likely to issue sector-specific guidance, initiate compliance audits, and enforce data transfer protocols. These developments are expected to drive increased demand for skilled data privacy professionals and robust compliance infrastructure across the region.
On the AI front, the UAE is poised to continue operationalizing ethical AI principles across both public and private sector projects, with regulatory developments likely to emerge in response to growing adoption. It is also noteworthy that the DIFC has been selected to host the Global Privacy Assembly in 2026 — a prestigious event that will further cement its reputation as a regional leader in privacy governance.
Contributor: Natalia Kirichenko
In Ukraine, personal data protection is governed by the Law of Ukraine on Protection of Personal Data, primarily based on the EU Directive 95/46 and Convention 108+. Although a new draft law aligned with the EU GDPR has been under consideration by Parliament since 2022, its adoption is anticipated only after the martial law imposed due to the ongoing war with Russia is lifted.
The existing law provides a foundational regulatory framework for data protection, covering essential aspects but lacking the comprehensive scope of the GDPR.
Parliament is also actively moving forward with Draft Law No. 14057, a substantial overhaul of the Civil Code of Ukraine concerning the personal rights of individuals, including privacy and data protection. Responding to the rapid digitalization of social relations, the draft law introduces several new rights aimed at safeguarding individuals in the online environment, such as the right to a digital image, to be forgotten, to protection from deepfakes, and to digital privacy — including the protection of a person's "digital footprint."
The draft law also significantly expands the regulatory framework for personal rights that ensure the individualization of a natural person by clarifying existing rights and introducing new ones, including rights to a name, signature, image and voice, as well as biometric and genetic data.
Contributor: Phillip Lee, AIGP, CIPP/E, CIPM, FIP
Implementing UK data protection reforms
The U.K. adopted the Data Use and Access Act in June 2025. This law, introduced by the Labour-led government, reflects a less ambitious set of data protection reforms than those proposed by the previous Conservative-led administration and was intended — among other things — to maintain the U.K.'s adequacy recognition from the EU.
Reforms include specifying "recognised legitimate interests;" relaxing automated decision-making rules; adding exemptions from cookie consent, such as for some types of analytics; and permitting marginally more data transfers — allowing transfers to countries whose data protection standards are "not materially lower" than those in the U.K.
The DUAA will take effect in four stages implemented through various commencement orders. Stage 3, expected by early 2026, implements the main changes to U.K. data protection laws. This stage will be of most direct relevance to privacy practitioners.
Maintaining UK adequacy recognition
The U.K.'s adequacy recognition by the EU was originally due to expire 27 June 2025. However, this deadline was extended to 27 Dec. 2025 to allow the EU time to review the changes to U.K. data protection law introduced by the DUAA.
The European Commission issued its draft adequacy decision in July 2025 and, despite suggesting further analysis and clarifications, the European Data Protection Board issued an opinion that "welcomes the continuing alignment" between the U.K. and EU data protection frameworks. As such, the DUAA appears likely to achieve its goal of introducing moderate data protection reforms that preserve U.K. adequacy, enabling the free flow of data from the EU to the U.K. throughout 2026 and beyond.
Cybersecurity developments
The U.K. government proposed a new Cyber Security and Resilience (Network and Information Systems) Bill in November 2025. The CSR Bill represents the U.K.'s answer to the EU's NIS2 Directive and, if adopted, would amend the U.K.'s existing Network and Information Systems Regulations 2018. The CSR Bill seeks to raise U.K. cybersecurity standards following a number of high-profile incidents, such as those suffered by Jaguar Land Rover, Heathrow Airport and Marks & Spencer.
The CSR Bill aims to expand the scope of the NIS Regulations to include data centers, managed service providers and supply chain providers designated as "critical suppliers." Similar to NIS2, the CSR Bill would require initial incident reporting within 24 hours and a full report within 72 hours; the bill would also expand the scope of reportable incidents to include those "capable of" having an adverse effect on network and information systems. Penalties for violations would run up to 4% of worldwide turnover.
UK AI legislation — at last?
Although previewed in the King's Speech in 2024, which set out the Labour party's legislative agenda, the U.K. has yet to see draft legislation "to place requirements on those working to develop the most powerful artificial intelligence models."
The U.K. has taken a hands-off approach to regulating AI thus far, preferring to rely on existing laws and regulatory bodies to govern AI risk. However, there is growing speculation that the government may introduce a "Frontier AI Bill" sometime in 2026. Unlike the EU AI Act, which regulates AI systems based on different tiers of risk, the bill is anticipated to only focus on so-called frontier AI models — the most advanced and powerful AI models — developed by large tech companies.
If and when introduced, the bill is expected to principally focus on AI safety, potentially placing existing voluntary commitments by major AI companies on a statutory footing and requiring prerelease model testing by the AI Security Institute or similar body. However, its precise scope and requirements will be known only once draft legislation is published.
Federal Law
Contributor: Joe Duball
Despite full control of the 119th U.S. Congress, House and Senate Republicans were unable to find enough common ground to break through on privacy or AI legislation in 2025 aside from the TAKE IT DOWN Act.
There are no clear signs the situation will change in the new year, especially given the limits of bipartisan cooperation. However, avenues to make waves exist nonetheless: carry-over initiatives from 2025 and the midterm elections could both provide momentum to deliver results.
The House Committee on Energy and Commerce will be the linchpin to any action on federal privacy law. Energy and Commerce Chairman Brett Guthrie, R-Ky., used 2025 to reset the dialogue through committee fact-finding and intentional stakeholder consultation.
The committee is reportedly preparing a discussion draft on a comprehensive privacy bill for an early 2026 release. That bill is expected to carry a combination of lighter-touch provisions found in comprehensive state laws, including frameworks in Kentucky and Texas. Any Republican-led proposal will likely aim to keep industry compliance burdens to a minimum, and language for strong federal preemption will be the main vehicle to achieve that.
The House Committee on Energy and Commerce's ongoing work around children's online safety will also be worth monitoring throughout the year. Proposals for the Children and Teens' Online Privacy Protection Act, also known as COPPA 2.0, and the Kids Online Safety Act highlight a package of bills awaiting a full committee markup following subcommittee approval at the end of 2025. These bills face a tougher reconciliation path in the Senate, which passed its own bipartisan versions of COPPA 2.0 and KOSA in the prior congressional term.
With AI, Republican members of Congress face a fresh mandate from President Donald Trump to draft innovation-first legislation that still addresses consumer safety.
A core component of the White House's recent executive order to limit the impacts of state AI laws is to build a federal standard for AI development and use. That federal framework would supersede state laws and establish legal certainty, allowing the administration to back off the order's interim measures, including potential litigation over state-level laws.
The executive order may steer some state legislatures to shelve AI bills considered over prior years while other states may be emboldened to challenge the Trump administration's threats. For those states continuing to legislate, expect more targeted laws on specific AI use cases or model safety in lieu of comprehensive, cross-sector proposals.
Comprehensive state privacy legislation in Indiana, Kentucky and Rhode Island took effect 1 Jan. New comprehensive laws took a backseat to amendments to existing laws last year — a trend that could repeat itself more frequently moving forward. However, state lawmakers in Maine, Massachusetts, New Mexico, Pennsylvania and Vermont remain steadfast with their desires to pass comprehensive bills.
Federal Trade Commission
Contributor: Cobun Zweifel-Keegan
The U.S. Federal Trade Commission is entering 2026 with a more defined posture after a year of unprecedented transition.
Under Chair Andrew Ferguson, the agency has for the first time embraced the unitary executive theory rather than asserting its independence, aligning with President Donald Trump and supporting his contested power to remove FTC commissioners without cause, as he purported to do early in 2025. In a pending case brought by one of the fired commissioners, Trump v. Slaughter, the Supreme Court will soon resolve this debate one way or the other. In the meantime, the agency is operating with only two Republican commissioners. Another is expected to be nominated soon.
Privacy and data security are not top-of-mind enforcement issues for the Trump-Vance FTC. That said, there will likely be more privacy enforcement in 2026 than in 2025. Public statements from the agency's leadership make clear that it will continue to favor targeted enforcement of black-letter legal restrictions, such as those within the Children's Online Privacy Protection Act, rather than rulemaking or even general consumer protection enforcement under Section 5 of the FTC Act. Make no mistake, the FTC will continue to make use of its authority under the principles of deception and unfairness, but its influence as a standalone enforcement tool is expected to wane in 2026, often being used in conjunction with other laws.
In keeping with the FTC's renewed focus on children's privacy, the most significant regulatory development of 2025 was the updated COPPA Rule — its first major overhaul since 2013. The revisions expand the definition of personal information, clarify mixed-audience obligations and require overhauls of data retention and security programs. Compliance deadlines hit in April 2026, making this a top priority for child-directed and mixed-audience services alike. COPPA enforcement themes from 2025 included third-party diligence — especially when it comes to integrations like software development kits — and a clear emphasis on age-verification technologies, as evidenced by an upcoming workshop.
The FTC's dance card also includes a couple of brand-new laws that will soon become enforceable. The TAKE IT DOWN Act mandates new notice-and-removal processes for non-consensual intimate images by May 2026, although the non-FTC criminal aspects of the law are already enforceable. Early cases will no doubt target major platforms. In addition, the Protecting Americans' Data from Foreign Adversaries Act now treats certain transfers of sensitive personal data by data brokers to adversarial countries as a violation of the FTC Act. The FTC is continuing to roll out enforcement of this law, likely in coordination with the Department of Justice, which is tasked with enforcing a related but distinct regulation known as the Data Security Program.
Enforcement related to AI technologies at the FTC is also going through major changes, driven in large part by the Trump administration's pivot toward an innovation-first strategy for AI. The FTC has already begun taking the unusual step of revisiting prior settlements that might "unduly burden AI innovation," as instructed by Executive Order 14179 and America's AI Action Plan. More could follow in 2026.
Nevertheless, the agency continues to scrutinize deceptive marketing claims in the AI space, especially when related to the accuracy of outputs. In addition, the agency issued 6(b) orders to AI chatbot "companion" providers late in 2025, showing how scrutiny continues to grow for safety testing, monetization and protections for minors.
In short, expect FTC enforcement to heat back up in 2026 while re-focusing on tried-and-true legal theories. Overall, the U.S. landscape will be quite active as state enforcers will also be doubling down on privacy scrutiny.
Health care
Contributor: Kirk Nahra, CIPP/US
Health care privacy continues to be a growing mess.
There are a variety of important developments to watch at the federal level. The Trump administration has been pushing back on certain changes from recent years with an eye toward altering the overall approach of the Health Insurance Portability and Accountability Act and enforcement. Attention will focus on whether this includes the proposed changes to the HIPAA Security Rule that could lead to a much more prescriptive approach to health care security.
The administration also has some holdover HIPAA proposals from its first go-round relating to disclosures to social service organizations and disclosures related to opioid patients that it could choose to revisit.
Sen. Bill Cassidy, R-La., recently proposed legislation to address the "non-HIPAA" aspects of health care privacy. While significant movement on this legislation is unlikely, it could prove to be a useful element if the debate over a national privacy law begins again in earnest.
Much of the relevant health care activity has actually been at the state level. New laws continue to emerge in various states that will impact how health care data — very broadly defined — will be used and disclosed. Some of these laws are modeled on Washington state's My Health My Data Act. A new "comprehensive" law in Maryland is creating broad compliance challenges. A law awaiting signature by the governor in New York will also create challenges.
More of these laws are expected in 2026 — targeting specific privacy issues but potentially raising other kinds of health care concerns at the same time.
Contributor: Martin Pesce
In 2026, Uruguay is expected to maintain a solid standing in data protection and cybersecurity, supported by a modern framework aligned with international standards. The country has a modern personal data protection regime in line with the EU GDPR; it was the first state in Latin America to approve, in 2021, Convention 108+.
In 2025, Uruguay adopted relevant cybersecurity regulation through Decree No. 66/025, which reorganizes national cybersecurity governance. The decree grants Uruguay's Agency for Electronic Government and Information and Knowledge Society powers to set policies, supervise compliance, and coordinate incident response, including coordination with the Personal Data Regulatory and Control Unit and the adoption of AGESIC's Cybersecurity Framework. The decree also reinforces the role of the National Computer Emergency Response Team as the national interlocutor. Notable obligations include the national cybersecurity incident registry and the 24-hour notification period for entities linked to critical services or sectors.
In 2026, Parliament could begin considering an AI law based on recommendations submitted by AGESIC — the government entity responsible for AI policy. There is currently no specific law, but horizontal rules — such as those governing data protection, consumer protection, intellectual property and civil liability — already apply to the design and deployment of AI systems. Any new law should be carefully crafted to avoid creating barriers to innovation or diminishing the country's competitiveness.
AGESIC's recommendations prioritize clear governance and a risk-based approach, with references to transparency, impact assessments for high-risk systems and human oversight, while emphasizing caution to avoid harming innovation and competitiveness.
Looking ahead to 2026, Uruguay is expected to consolidate its alignment on data protection and the operation of its recently updated cybersecurity framework. Any legislative progress in AI governance should balance the protection of rights with competitiveness and innovation, avoiding regulations that create disproportionate costs or reduce the country's attractiveness for technology investments.
Contributor: Eduardo Monteverde, CIPP/E, CIPP/US, CIPM
Venezuela maintains a scattered legal framework regarding data privacy, cybersecurity, digital responsibility and AI governance. These are potential areas for change in Venezuelan regulation, considering the country lacks a comprehensive legal framework to support the protection of individual rights.
AI governance legislation
During the first quarter of 2025, it was confirmed that the National Assembly was continuing the legislative process of the AI Bill, which had been approved in its first debate in November 2024. This bill aims to establish a national AI agency responsible for promoting ethical and transparent technological development.
However, there have been no advances or updates regarding the bill since the first quarter of 2025. It is not certain if priorities have changed or if there are other reasons for the absence of updates. Nevertheless, the legislation has already advanced significantly and could be approved by the end of 2026.
Comprehensive data privacy regulation
In its Decision 1318 from August 2011, the Supreme Court of Justice of Venezuela clarified and expanded the scope and applicability of the rights to habeas data and privacy, as established by Articles 28 and 60 of the Constitution of Venezuela. However, there doesn't seem to be any motivation to introduce new regulations that would create a comprehensive data privacy regime.
The latest regulations seem to only respond to specific needs. For example, in an October 2025 decision, the Social Cassation Chamber of the Supreme Court of Justice explained that under the Venezuelan Labor Act and related regulations, the same protections — like equal pay and benefits — should extend to both on-site work and online or remote work, aligning with the standards of the International Labor Organization. This matter was first covered during the COVID-19 pandemic; however, courts are now starting to recognize the new reality of a more expansive digital work economy.
Based on this, no comprehensive data privacy laws are expected to be approved in 2026.
Contributor: Thammapas Chanpanich, Nop Chitra, CIPP/A, and Waewpen Piemwichai
In 2026, privacy, AI governance and digital responsibility are expected to remain key priorities for regulators.
As of the end of 2025, Vietnam saw several major legislative developments. These include the 2025 Cybersecurity Law, which replaces both the current Cybersecurity Law and the Law on Network Information Security; the country's first comprehensive Law on AI; and a prospective new decree guiding the implementation of the Personal Data Protection Law.
As of late December, the final adopted version of the 2025 Cybersecurity Law had not yet been published. However, based on the publicly available draft, which was submitted to the legislature for final consideration, the 2025 Cybersecurity Law continues to emphasize the importance of strengthening cybersecurity, protecting children's data and safeguarding the rights and legitimate interests of agencies, organizations and individuals in cyberspace to address the rise of cybercrime regionally and globally.
The law expands the scope and enforcement powers of regulators, particularly the Ministry of Public Security, to strengthen cybersecurity and combat cybercrime. For example, enterprises providing services in cyberspace will be required to supply IP addresses of organizations and individuals using internet services to regulators for monitoring and cybersecurity assurance. The data localization requirements remain in place; further details are expected to be provided through lower-level legislation such as a decree.
Vietnam's first AI law is expected to promote AI development while ensuring developer and provider accountability and user safety. Similar to the 2025 Cybersecurity Law, the final adopted version of the AI Law has not yet been published. Based on the draft submitted to the legislature for final consideration, AI systems are classified by risk level: high risk, medium risk and low risk. The law also introduces provisions on ethics and responsibility in AI activities, including requirements for transparency, labeling and accountability.
Notably, the AI Law mentions extraterritorial applicability and requires foreign providers of AI services or products that are classified as high-risk AI systems to appoint a local point of contact in Vietnam. In addition, if such systems are subject to mandatory conformity certification prior to use, the provider must also establish a commercial presence or appoint an authorized representative in Vietnam.
The new PDPL takes effect 1 Jan. 2026; the government will issue a guiding decree to clarify several key provisions, including qualifications for data protection officers, DPO service requirements, filing obligations and consent formalities. Recognizing personal data as a valuable resource in the digital age, regulators are expected to remain active in this area.
Additionally, a new sanctioning decree is anticipated, paving the way for stronger enforcement. Monetary penalties for general violations may reach up to VND3 billion (approximately USD115,000). The fine for trading personal data can be up to 10 times the revenue generated from the transaction or VND3 billion, whichever is higher. Violations related to cross-border data transfers may result in fines of up to 5% of the violator's revenue from the previous year or VND3 billion, whichever is greater.
Given the rapid pace of legal developments, enterprises should closely monitor legislative changes to ensure timely and effective compliance.
Previous Editions
- 2025 Global Legislative Predictions
- 2024 Global Legislative Predictions
- 2023 Global Legislative Predictions
- 2022 Global Legislative Predictions
- 2020 Global Legislative Predictions
- 2019 Global Legislative Predictions
- 2018 Global Legislative Predictions
- 2017 Global Legislative Predictions
Note: The 2021 Global Legislative Predictions were not published due to impacts of the COVID-19 pandemic.

This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.