Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Not only has compliance become more precarious as privacy laws recognize internet protocol addresses as personal data, but the growing use of IP obfuscation tools is also undermining privacy compliance strategies and making opt-out mechanisms and geoblocking nearly obsolete.
IP obfuscation entails concealing a user's IP address and associated geographic location. It can be done through three basic techniques: proxy servers, Tor networks and virtual private networks.
The solution may be implementing user verification controls before serving geo-based privacy notices and consent mechanisms — similar to age verification controls used to comply with laws to protect children.
However, while user verification controls may support privacy compliance, they could also create a privacy paradox, shifting the blame for noncompliance from website operators, online service providers or even artificial intelligence developers to users, while also undermining accountability for dangerous content and technology.
How IP obfuscation technologies work
Proxy servers are intermediaries between a device and the internet, which forward requests and receive responses on behalf of the user, effectively hiding their true IP. Tor is a free, open-source, decentralized network that routes internet traffic through a global network of volunteer-operated servers, making it extremely difficult to trace back to the actual user. VPNs create a secure, encrypted connection between a device and VPN server and reroute internet traffic through that server.
VPNs are the primary IP obfuscation service, owing not only to their price and availability but also to their advantages over proxies and Tor networks. Proxies generally do not encrypt data, leaving activity visible to an internet service provider or anyone else monitoring a connection, while Tor can be slower than a VPN due to its multiple layers of encryption and routing through numerous nodes. Tor is also primarily associated with the dark web.
Because a VPN service reroutes internet traffic, it hides the geolocation associated with the user's IP address and makes browsing appear to originate from the server's location rather than the VPN user's physical location. As a result, it can bypass geography-based restrictions such as bans and censorship. For example, the U.S. TikTok ban increased VPN use by 74%, and free VPN services launched to support journalists and the free press.
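The effect described above can be sketched in a few lines. This is a toy illustration, not any real geolocation service: the IP addresses come from documentation ranges and the lookup table is hypothetical. The point is simply that a website geolocates whichever address appears on the connection, so a VPN user is located at the exit server.

```python
# Toy sketch: a site geolocates whichever IP it sees on the connection.
# The lookup table and both addresses are hypothetical illustrations
# drawn from IP documentation ranges.
GEO_DB = {
    "203.0.113.7": "DE",   # user's real ISP-assigned address
    "198.51.100.9": "US",  # VPN provider's exit server
}

def apparent_country(connection_ip: str) -> str:
    """Return the country a site would infer from the connecting IP."""
    return GEO_DB.get(connection_ip, "unknown")

# Direct connection: the site sees the user's real address.
direct = apparent_country("203.0.113.7")
# Through a VPN: the site only ever sees the exit server's address.
via_vpn = apparent_country("198.51.100.9")
```

Under this sketch, the same user appears to be in Germany when connecting directly and in the United States when tunneling through the VPN, which is exactly what defeats geography-based compliance logic.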
More recently, growing public awareness of data breaches, online surveillance and privacy concerns has led more people to seek solutions like VPNs. VPN service Windscribe reported 50% of all VPN users globally use the networks to bypass content restrictions, while 66% use VPNs to protect personal information and 40% use VPNs to prevent online tracking.
As of June 2025, Demandsage reported more than 1.75 billion people worldwide use VPNs and of the 5.3 billion people worldwide who use the internet daily, 31% use VPNs — meaning just over 1.6 billion people.
How IP obfuscation corresponds with privacy law developments
The EU General Data Protection Regulation automatically treats IP addresses as personal data, while the California Consumer Privacy Act, as amended by the California Privacy Rights Act, classifies IP addresses as personal information if they can be "reasonably linked, directly or indirectly, with a particular consumer or household."
There has also been notable expansion of this point of view in other countries that have explicitly recognized IP addresses as personal data. For example, the Canadian Supreme Court in R. v. Bykovets determined individuals have a reasonable expectation of privacy to their IP addresses under Section 8 of the Canadian Charter of Rights and Freedoms. While this ruling specifically affects law enforcement's ability to obtain IP addresses without a warrant, it also has implications for private organizations under laws like the Personal Information Protection and Electronic Documents Act, as they may need to treat IP addresses as personal information.
More recently, Brazil's General Personal Data Protection Law defined personal data as any information relating to an identified or identifiable natural person. If the same interpretation of an IP address under the CCPA/CPRA is applied to the LGPD, then IP addresses would be considered "personal data" because they can be used, often in combination with other data, to identify a specific individual or household. This is true for both static and dynamic IP addresses.
Even if an IP address is anonymized, it would likely still be considered personal data if the anonymization process can be reversed with reasonable effort.
How IP obfuscation undermines privacy
However, IP addresses and the associated geolocation are often used by websites and services to comply with legal and regulatory privacy requirements that vary across regions. Laws like the GDPR and CCPA require services to tailor privacy policies and consent forms to a user's physical location, which IP geolocation helps to establish. Based on this, some, but not all, consent management platforms use IP addresses as the primary identifier to determine a user's geographic location and then to tailor the consent requests to display an opt-in or opt-out mechanism.
Other consent management platforms record IP addresses as part of documenting consent decisions. Additionally, while websites are generally available worldwide, some publishers — particularly news sites — chose to restrict access from EU countries rather than comply with the GDPR by relying on geoblocking based on IP addresses, domain name system lookups, and other location-based data points.
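The regime-selection logic described above can be sketched as follows. This is a hypothetical simplification of what a consent management platform might do once it has inferred a region from an IP address; the region codes, labels and blocked-region set are all illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: map an inferred region to a consent regime.
# Region codes and labels are illustrative, not a real CMP's schema.
OPT_IN_REGIONS = {"EU", "UK"}    # GDPR-style prior consent required
OPT_OUT_REGIONS = {"US-CA"}      # CCPA/CPRA-style opt-out suffices

def consent_mechanism(region: str) -> str:
    """Choose which consent UI to serve for the inferred region."""
    if region in OPT_IN_REGIONS:
        return "opt-in consent banner"
    if region in OPT_OUT_REGIONS:
        return "opt-out mechanism"
    return "notice only"

def geoblock(region: str, blocked: frozenset = frozenset({"EU"})) -> bool:
    """Publishers who choose geoblocking simply refuse these regions."""
    return region in blocked
```

Both functions take the inferred region as ground truth, which is precisely the assumption a VPN breaks: a user in the EU whose exit server sits in the U.S. would be served the opt-out mechanism, or would slip past the geoblock entirely.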
Both the recognition of IP addresses as personal data and the popularity of VPNs undermine privacy compliance strategies. Website operators can no longer reliably serve the correct opt-in or opt-out consent mechanism by inferring applicable law from a user's geography, nor can they reliably geoblock content to avoid jurisdictional reach.
Addressing challenges to privacy compliance strategies
While some organizations can block VPN connections, for example by blocking network ports that commonly carry VPN traffic, most cannot block VPNs altogether, since organizations such as software-as-a-service providers often deploy VPNs themselves to securely enable access to applications hosted on an organization's network.
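Port-based blocking, mentioned above, can be sketched very simply, and the sketch also shows why it is unreliable. The port list covers the well-known defaults for common VPN protocols, but many VPNs deliberately tunnel over 443/TCP, where they are indistinguishable from ordinary HTTPS; this is an illustration of the technique's limits, not a workable detection method.

```python
# Simplistic sketch of port-based VPN blocking. Ports are the
# well-known defaults for common VPN protocols; real detection is
# unreliable because VPNs can tunnel over 443 like ordinary HTTPS.
KNOWN_VPN_PORTS = {
    1194,   # OpenVPN default
    51820,  # WireGuard default
    500,    # IPsec/IKE
    4500,   # IPsec NAT traversal
    1723,   # PPTP
}

def should_block(dest_port: int) -> bool:
    """Flag a connection whose destination port matches a VPN default."""
    return dest_port in KNOWN_VPN_PORTS
```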
To avoid the complications of data subjects using VPNs to hide their IP address and geolocation information, organizations may need to forgo differentiated opt-in/opt-out mechanisms, or even any reliance on geography-based compliance strategies for their websites, altogether.
Instead of automatically collecting IP addresses and other personal information for dynamic content management, website operators and service providers could launch a user verification system, like launching an initial landing page. This would first inform users of personal data collection and allow them to actively and selectively share data, such as location or age before being able to access the appropriate website or content, including a geo-specific privacy notice and applicable opt-in/opt-out mechanism.
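The landing-page flow proposed above might look like the following. Everything here is a hypothetical sketch of the author's suggestion: the user discloses a region instead of being geolocated by IP, and the site then serves the matching notice and consent mechanism. Region codes and notice names are illustrative assumptions.

```python
# Hypothetical landing-page flow: rather than geolocating the IP,
# ask the user to disclose a region, then serve the matching notice.
# Region codes and notice labels are illustrative only.
NOTICES = {
    "EU": ("GDPR privacy notice", "opt-in"),
    "US-CA": ("CCPA/CPRA privacy notice", "opt-out"),
}
DEFAULT = ("generic privacy notice", "notice only")

def landing_page(disclosed_region):
    """Serve the landing page until the user discloses a region,
    then return the (notice, consent mechanism) pair for it."""
    if disclosed_region is None:
        return ("landing page: please select your region", None)
    return NOTICES.get(disclosed_region, DEFAULT)
```

Note that the flow collects no personal data before the user's active disclosure, which is the compliance advantage the article describes, but it also takes the disclosure at face value, which is the liability shift the next section examines.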
Notably, user verification methods are a common and mandatory control for certain website operators and service providers. For example, to comply with laws requiring special protections for children, such as the U.S. Children's Online Privacy Protection Act and Article 8 of the EU's General Data Protection Regulation, website operators and online services already verify parental consent before collecting personal data of children, as defined under the laws.
In recent years, a wave of U.S. state legislation, starting with Louisiana's Act 440 in 2022, has shifted the burden of age verification onto adult websites and service providers. In June 2025, the U.S. Supreme Court upheld a Texas age-verification law in Free Speech Coalition, Inc. v. Paxton, a decision that set precedent for similar legislation across the country. As of August 2025, over 20 states have passed laws requiring age verification.
Problems with verifications that rely on user disclosure
Implementing a form of user verification in which a user decides what to disclose before being served content not only undermines a seamless or optimal user experience, but it can also be used to shift liability from the web operator and service provider to the users themselves.
Organizations could argue they relied on what users disclosed about themselves, ultimately blaming the user's disclosure as the direct cause of serving inappropriate content. The same defense could likely be used when the "wrong" privacy notice and consent form is served to users who deployed IP obfuscation.
Moreover, and of considerable significance during a period of wrongful death lawsuits against AI deployers that have prompted demands from attorneys general in California and Delaware to improve AI safety, is the defensibility this would bring to AI developers. Specifically, companies could claim these controls were not activated, or were otherwise undermined, because of what the user disclosed as part of the verification process.
Lisa Nee, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is senior counsel, privacy at Lenovo U.S.