Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Twenty-first century technologies have reinvigorated the frontiers of discovery, allowing people to not just explore, but also to create entire new worlds.
These emerging digital realms, however, remain subject to existing and rapidly evolving laws and political realities within our physical world.
Therefore, companies developing or implementing these revolutionary technologies need to pursue compliance in parallel with development, particularly with laws concerning privacy, cybersecurity and artificial intelligence. The best way for companies to do so is to promote close collaboration among development, operations and legal teams throughout the entire product life cycle. This compliance-by-design approach better ensures the long-term success and profitability of these innovative products and experiences.
The challenge, however, lies in applying traditional laws to a highly integrated virtual world. This pragmatic guide can help navigate the compliance challenges of immersive experiences and products.
Jurisdiction: Navigating the borderless terrain
As a threshold matter, it is important to first identify which U.S. states' or countries' laws apply in a world that exists in the ether.
To be clear, cyberspace is not a global commons. State laws generally apply based on where the organization, or its servers, are physically located, or on the residency or location of the affected individuals. In the decentralized and increasingly interoperable metaverse, however, it is often difficult to determine where, and who, an individual is: virtual private networks can obscure a user's location or place them in a different jurisdiction, and avatars can increasingly step across platforms into worlds controlled by different organizations.
Amid this difficulty, it is often best to address and mitigate challenges at the earliest phase. Product and legal teams should sit side by side to establish an agreed-upon, reasonable approach for determining and accommodating the legal jurisdiction applicable to an individual or an interaction.
One option is to create jurisdiction-specific variations. Automakers can leverage software-defined systems to create jurisdiction-specific vehicle variations, adapting technologies to local laws based on the user's location. For instance, they already tailor vehicles for language and speedometer units — mph or kph — and can similarly adjust tech features or implement targeted disclosure and consent protocols before activation.
Another option is to develop contractual disclaimers — for example, "by participating in this immersive experience you agree that California law applies" — or to develop contractual rules-of-the-road among developers within a virtual world on how personal data is to be governed, leveraging joint controller agreements and data processing addendums.
Technical controls can also be leveraged, like requiring individuals to indicate their jurisdiction when registering their avatar or otherwise entering a metaverse, augmented or virtual reality experience. Consider, for example, a virtual passport. Technical controls can also be helpful in maintaining an accurate map of the data flows to better identify relevant jurisdictions.
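As an illustrative sketch only, with all names, signals and rules hypothetical rather than any platform's actual mechanism, a registration flow might combine a user's self-declared jurisdiction (the "virtual passport") with network signals, flagging unexplained mismatches for human review:

```python
from dataclasses import dataclass


@dataclass
class RegistrationSignals:
    declared_jurisdiction: str  # user-selected, e.g., via a "virtual passport"
    ip_country: str             # coarse geolocation of the connecting IP
    vpn_suspected: bool         # heuristic flag from network analysis


def resolve_jurisdiction(signals: RegistrationSignals) -> tuple[str, bool]:
    """Return (jurisdiction, needs_review).

    Hypothetical policy: honor the declared jurisdiction, but flag the
    record for review when network signals disagree and no VPN explains
    the mismatch.
    """
    mismatch = signals.declared_jurisdiction != signals.ip_country
    needs_review = mismatch and not signals.vpn_suspected
    return signals.declared_jurisdiction, needs_review


# Example: a user declares Germany but connects from a U.S. IP with no VPN
jurisdiction, review = resolve_jurisdiction(
    RegistrationSignals("DE", "US", vpn_suspected=False)
)
```

The resolved jurisdiction can then drive downstream choices, such as which disclosures and consent protocols to present, while the review flag feeds the documented process the legal team agreed upon.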
Privacy: Bridging real-world privacy laws and virtual world data collection
Once the applicable jurisdictions are determined, the next step is to accommodate the relevant privacy laws by first identifying what data constitutes personal data.
Immersive experiences run on personal data, whether individuals present as avatars or as themselves. Consider, for example, the eye or motion tracking and spatial mapping required for various virtual games or avatar performances. Consider also where personal information is legally required or otherwise enhances safety, as with technologies designed to prevent impaired or distracted driving.
To comply with leading privacy laws like the California Consumer Privacy Act, the EU or U.K. General Data Protection Regulation, or Brazil's General Data Protection Law, metaverse companies may want to start by developing a reasonable process to differentiate between digital identifiers and digital identities.
The former is information that can reasonably be linked to a natural person, such as names, ID numbers and location data. The latter is data tied to a digital identity that cannot reasonably be traced back to a person, like an avatar's birthdate or physical traits.
It is an increasingly fine line between the two, but developers and implementers should consider a deliberate, documented approach to identifying what qualifies as personal data, perhaps by continuously asking questions like: "What could I do to a person with this data? How could I sell something to this individual? What can I find out about them?" So, if, for example, you are looking at wingspan data from handsets: "Can I infer gender or disability data?"
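A documented triage of this kind could be sketched as follows. The questions, field names and criteria below are hypothetical illustrations, not a legal test; the point is that each classification decision is recorded with its rationale:

```python
# Hypothetical sketch of a documented personal-data triage step.
# The linkability questions mirror the kind of questions suggested above.
LINKABILITY_QUESTIONS = (
    "Can this data locate or contact a real person?",
    "Can it be combined with other held data to identify someone?",
    "Can sensitive traits (health, gender, disability) be inferred from it?",
)


def classify_field(field_name: str, answers: dict[str, bool]) -> dict:
    """Record a documented decision: treat the field as a 'digital
    identifier' if any linkability question is answered yes; otherwise
    treat it as 'digital identity' data."""
    linked = any(answers.get(q, False) for q in LINKABILITY_QUESTIONS)
    return {
        "field": field_name,
        "classification": "digital identifier" if linked else "digital identity",
        "rationale": {q: answers.get(q, False) for q in LINKABILITY_QUESTIONS},
    }


# Wingspan data from which disability might be inferred -> identifier
decision = classify_field(
    "controller_wingspan",
    {"Can sensitive traits (health, gender, disability) be inferred from it?": True},
)
```

Keeping the rationale alongside the classification produces exactly the kind of documented, repeatable process regulators expect to see.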
When it comes to data breach notification laws, digital identifiers are more likely to trigger regulatory and individual notifications, often depending on the risk of harm to the individual. For example, a compromise of an avatar's "personal details," as a digital identity, is not likely to trigger breach notification laws — but compromised voice recognition data or biometrics used within a virtual game, as a digital identifier, would.
As another example, a conversation between avatars may trigger privacy laws in Japan under the Telecommunications Business Act, and in other jurisdictions, depending on what the conversation reveals, such as that an individual is not home or the individual's payment card details, and on how easily a threat actor could tie that virtual conversation to real people. Surreptitious recording of those conversations, or simply recording without all parties' consent, can also trigger privacy violations.
With an established process to identify what constitutes personal data, organizations can then integrate necessary disclosures, consent-collection, safeguards and other jurisdiction-specific requirements over that data, while also considering leveraging advanced privacy enhancing technologies.
Cyber and physical security: Securing immersive experiences
Cybersecurity and safety by design are also critical to the sustainable development of immersive experiences. The more personal data these experiences require, the more they attract attackers and nation-state competitors. Personal data, after all, is the gold bullion of today, over which nations increasingly compete.
Despite rising trade and data barriers, supply chains remain long and complex, increasing their vulnerability. Companies building virtual worlds must assess their cybersecurity posture by mapping and understanding the full life cycle and downstream impacts of their data and ensure reasonable protection while complying with laws like the GDPR's cross-border transfer restrictions and U.S. limits on bulk data transfers to high-risk countries. Web 2.0 was not built with security foremost in mind, and retrofitting has proven a herculean and enormously expensive task. Better to start at the design phase for Web 3.0.
Physical safety is also critical to get right early, especially when immersive technologies can be used to operate robotics or detect when infants are left alone in cars, or when threat actors can cause damaging real-world harms through virtual worlds, such as cyberbullying, financial fraud or exploitation of minors.
Physical town squares have police forces, countries have military forces, and social norms exist to provide safety and security. Similarly, designers and implementers of virtual worlds need to think about how to govern their worlds to promote security and safety, beyond mere "Acceptable Use Policies" and "Terms of Service." Of course, as in the physical world, developers and operators will need reasonable processes to balance privacy with safety.
AI and automated decision-making: Rules of virtual reality
Related to privacy, and with additional rules and regulations, are immersive experiences and worlds that make decisions impacting individuals using AI. Consider a virtual bank that decides credit lines for virtual currencies in a virtual game, or virtual insurance companies underwriting activities in an immersive experience, or even the precautions a vehicle may take prior to an expected collision. Additionally, devices can grow increasingly tailored by responding automatically to various inputs from the senses or even emotions.
While U.S. President Donald Trump is easing direct AI regulation at the federal level to promote innovation over security-by-design, some U.S. states like California and New York continue to strike a more active regulatory stance. Colorado's AI Act also mandates safeguards for companies using "high-risk" AI in areas like employment and health care to prevent algorithmic discrimination. On the other hand, Utah is trying a middle approach, pairing innovation with safety through AI sandboxes.
While the U.S. may deprioritize federal regulation of advanced technologies, existing laws' statutes of limitations outlast any administration, and selective prosecutions remain a risk. In addition, older federal laws like the Fair Credit Reporting Act, alongside newer ones like the EU AI Act, also remain very much at play.
Meanwhile, there is pressure for newer comprehensive privacy laws to expand the definition of personal information to include data within large language models — California is adopting a broad approach, for example, while the Hamburg Data Protection Authority is taking a narrower stance. Developers, implementers and legal partners must monitor these shifts, identify significant automated decisions, and ensure compliance by providing appeal options or human oversight.
Conclusion
The great frontier of virtual worlds and immersive experiences is still governed by the physical world, especially laws governing privacy, cybersecurity and AI. Even new laws specifically designed for virtual worlds remain grounded in real-world impacts, enforced by real-world regulators, and present real-world jurisdictional challenges.
The best approach is to acknowledge that reality and engage in a close partnership with legal representatives to create documented, reasonable processes to promote governance by design that companies would be proud to show regulators. That requires lawyers who are comfortable with technology and provide risk-based advice, and developers and operators that are comfortable receiving advice and trimming sails accordingly.
Michael Cole, AIGP, CIPP/C, CIPP/E, CIPP/US, CIPM, CIPT, FIP, PLS, is managing counsel for Mercedes-Benz and Michael Bahar is partner and co-lead of global cybersecurity and data privacy at Eversheds Sutherland.
Editor's note: Cole and Bahar, with Meta Policy Manager AR/VR Anne Hobson, will hold a panel discussion on immersive experiences at the IAPP Global Privacy Summit 2025.