Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Technology is breaching the final frontier of privacy: our innermost thoughts and feelings.

We will soon live in a world where technology can access the mind. In the near future, this technology could be a part of mainstream consumer products. In the distant future, it might not be optional.

Businesses developing neurotechnologies are at a pivotal moment and must decide how to design their products with mental privacy concerns in mind, to bring about the best possible outcome for themselves, consumers and society.

Neurotechnology: Access to the mind

Today, there are devices in development — or already available in some cases — that can be implanted in the brain or worn externally to detect electrical activity and other brain signals, essentially reading the mind. Aided by artificial intelligence, these devices can detect whether you're paying attention, whether you like what you're seeing, your political leanings and even whether you've seen something before.

Brain signals can reveal a great deal about a person: their truthfulness, personal feelings, propensity to spend money and risk tolerance.

But it doesn't stop there. These devices not only read the mind — they can "write" to it. They can stimulate the brain to cause a reaction.

This technology holds remarkable promise. It could help ALS patients and individuals with paralysis use their minds to control their environment — keyboards, exoskeletons, wheelchairs. It could allow people who are unable to communicate with others to speak for the first time in decades. It could warn a person with epilepsy of an impending seizure or train someone to reach a meditative state to manage anxiety or stress.

It could also be used to investigate crimes by detecting whether a suspect has seen a crime scene, monitor employee engagement or student attentiveness, gauge consumer reactions to advertisements or products, and enhance soldiers' cognitive abilities — or sabotage those of adversaries.

During the last few decades, as technology has advanced to the detriment of personal privacy, one assumption has remained intact: that the mind — the sanctity of one's innermost thoughts and feelings — would always remain private. These emerging neurotechnologies challenge that assumption, presenting businesses, consumers and regulators with an entirely new paradigm to navigate.

The role and promise of self-regulation

As history has repeatedly shown, industries that fail to self-regulate often find themselves facing legislative intervention. When it comes to privacy, the lesson is clear: businesses can shape their own destiny by proactively adopting ethical design principles and building trust with consumers.

Otherwise, statutory regulation — often reactive and ill-suited to the fast-paced evolution of technology — will almost certainly emerge.

For businesses in the neurotechnology space, this is a moment to act, not just to avoid future compliance headaches but also to build a competitive advantage by leading the charge on privacy innovation.

What regulation might look like: Learning from the past

Whether mental privacy becomes protected by self-regulation or statutory regulation, businesses can learn from history to anticipate what the future might hold. Examining the successes and failures of privacy regulation over the last three decades provides valuable lessons for charting a path forward.

Notice and choice has historically been the foundation of many global privacy laws. This framework requires businesses to notify individuals of their data collection and to offer them the option to opt out of certain business uses and disclosures of their private information.

While this approach may seem straightforward, many feel it has proven inadequate in practice. Real-life consumers rarely read or fully understand privacy notices. Moreover, neural data, due to its intrinsic and involuntary nature, can reveal far more about an individual than they might expect.

As a result, many privacy experts now suggest a shift away from reliance on notice and choice. Instead, they propose establishing clear rules delineating what businesses can and cannot do with sensitive personal data, including neural data. Early signs indicate mental privacy laws may move in this direction, reflecting a more proactive and protective regulatory approach.

Nascent mental privacy laws

Currently, three U.S. states — California, Colorado and Montana — have enacted specific provisions protecting neural data through amendments to their broader consumer privacy laws. These states recognize data derived from the central or peripheral nervous system as sensitive personal information deserving heightened legal protection.

Under the Colorado Privacy Act, businesses must:

  • Post a privacy notice describing the neural data they collect and how they use, retain and disclose it, including each purpose for which each category of personal information is used and the categories of third parties with which it is shared.
  • Obtain clear, freely given, informed, specific, affirmative and unambiguous consent from an individual before processing their neural data; the consent must disclose the names of any third parties to which the information is sold.
  • Refrain from using dark patterns when obtaining consent from individuals.
  • Refresh each individual's consent every 24 months if the business has not interacted with the individual in the meantime, or provide a user-controlled interface through which the individual can manage their opt-out preferences at any time (see the sketch following this list).
  • Delete or de-identify this information when it is no longer necessary for the purpose for which it was collected, and in any event when an individual has withdrawn consent for its use.
  • Inform individuals of the purposes for which it uses this data and only collect such information that is reasonably necessary to fulfill, or that is compatible with, those purposes, absent additional consent.
  • Inform individuals of, and afford them, the rights to access, correct and delete this information in the business's possession or control, and to opt out of its sale or its use for targeted advertising or significant automated decisions.
  • Conduct data protection assessments addressing the collection, use, retention and disclosure of this information.
  • Refrain from using this data for unlawful discrimination.
  • Take reasonable measures to secure this data.
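
For a sense of how the 24-month consent-refresh requirement might translate into product logic, here is a minimal Python sketch. It assumes the business records when consent was obtained and when it last interacted with the individual; ConsentRecord and needs_refresh are hypothetical names, and whether a given interaction actually resets the clock is a legal question this sketch does not answer.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# 24-month refresh window described in the list above (approximated in days).
REFRESH_WINDOW = timedelta(days=730)

@dataclass
class ConsentRecord:
    consented_at: datetime      # when affirmative consent was last obtained
    last_interaction: datetime  # most recent interaction with the individual

def needs_refresh(record: ConsentRecord, now: datetime) -> bool:
    """Return True if consent should be re-obtained before further processing."""
    # Consent remains fresh if the individual has interacted recently;
    # otherwise it goes stale 24 months after it was last obtained.
    anchor = max(record.consented_at, record.last_interaction)
    return now - anchor > REFRESH_WINDOW
```

A product team would wire a check like this into every neural-data processing path, blocking collection until consent is refreshed rather than treating the notice as a one-time formality.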

The California Consumer Privacy Act is similar to Colorado's law, but less stringent in some ways and more stringent in others. Instead of requiring affirmative consent, it grants consumers the limited right to opt out of the collection and use of their neural data. If a consumer takes no action, businesses may collect their data by default. However, California's law extends its protections to employees as well as consumers, broadening its scope compared to Colorado's legislation.

Also, California's law requires businesses to present a privacy "notice at collection" to consumers, in addition to a posted privacy policy, and the notice at collection must inform consumers how long the business will retain neural data, or the criteria it will use to determine the retention period.

Montana's law goes beyond the Colorado and California laws' requirements. The Montana legislature passed Senate Bill 163 in recognition that the use of neurotechnologies outside of medical settings is not covered by health data privacy laws and had therefore been left unregulated. In the bill's declarations, the legislature recognized that each human brain is unique and that neural data is specific to the individual from whom it is collected. Because it contains distinctive information about the structure and functioning of an individual's brain and nervous system, the Montana legislature declared, neural data can be linked to an identified or identifiable individual.

This finding is relevant under other laws, such as the Colorado Privacy Act, that apply only to biological data intended to be used to identify a specific individual, leaving it unclear whether they reach neural data. The Montana legislature apparently thinks they could. Montana's bill became law on 2 May, after 10 days passed without the governor signing or vetoing it.

In addition to California, Colorado and Montana, there are 15 state bills pending that would specifically regulate neural privacy.

Outside the U.S., regulatory bodies in Europe and the U.K. have crafted similar guardrails for the collection and use of neural data. In South America, Chile's constitution contains a privacy right for neural data, which has already been successfully enforced in court. These early guidelines reflect a growing consensus on the need to treat neural data as highly sensitive and deserving of robust protections.

Principles for mental privacy self-regulation

Both self-regulation and statutory regulation for mental privacy are still in their infancy. However, neurotechnology companies can take proactive steps now to anticipate future regulatory trends, align with consumer expectations, and avoid costly compliance pitfalls. The following are key elements businesses should consider incorporating into their product development processes.

Achieve true transparency. Go beyond privacy policies to meaningfully inform consumers of what your products do and how their neural data will be used. Communicate this information in ways that are upfront, clear, timely and easily understood.

Provide a tangible value exchange. Demonstrate to consumers the value they receive in exchange for sharing their neural data. For instance, sharing brain data might enable your business to improve the product for everyone who uses it.

Define long-term use cases. Think ahead and be upfront about all potential future uses of neural data. Avoid surprising consumers with new purposes that were not disclosed at the outset.

Empower consumers with control. Offer individuals the ability to access, correct or delete their brain data and to decide about any secondary uses not necessary for the primary service they requested.

Build security by design. Protect neural data from unauthorized access by storing it locally on consumer devices, encrypting it, and ensuring the business does not control the decryption keys. Configure the app with unique credentials so the data is not unlocked via single sign-on with the device itself or a third-party account. If your technology can stimulate the brain, build in guardrails to prevent this from happening inadvertently, or worse, by nefarious actors.
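
As one illustration of that architecture, the Python sketch below (using the open-source cryptography library) derives the encryption key on the consumer's device from a passphrase only the user knows, so the key material never reaches the business. The function names and the PBKDF2 parameters are illustrative assumptions, not a reference implementation.

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_device_key(passphrase: bytes, salt: bytes) -> bytes:
    """Derive the encryption key on-device from a user-held passphrase.

    The business never sees the passphrase or the derived key, so it
    cannot decrypt the neural data it stores or syncs.
    """
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,
        salt=salt,
        iterations=600_000,  # illustrative work factor; tune to current guidance
    )
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

def encrypt_neural_sample(sample: bytes, passphrase: bytes) -> tuple[bytes, bytes]:
    """Encrypt one recording locally; only the salt and ciphertext leave the device."""
    salt = os.urandom(16)
    token = Fernet(derive_device_key(passphrase, salt)).encrypt(sample)
    return salt, token
```

Because the salt and ciphertext are useless without the passphrase, the business can back up or sync the encrypted recordings without ever being able to read them.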

Future-proof data anonymity. Anticipate advances in identification technology by exceeding current de-identification standards.
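
One way to exceed today's standards, sketched below in Python, is to release only noisy aggregates of neural features rather than per-person records, in the style of differential privacy. The clipping bounds and the epsilon privacy budget here are arbitrary placeholders; a real deployment would tune them to the data and the threat model.

```python
import numpy as np

def private_mean(values: np.ndarray, epsilon: float = 0.5,
                 lower: float = 0.0, upper: float = 1.0) -> float:
    """Differentially private mean of a bounded neural feature.

    Clipping bounds each individual's influence on the statistic, and
    Laplace noise calibrated to that bound makes the released value hard
    to reverse onto any one person, even as re-identification improves.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # one person's max effect on the mean
    noise = np.random.laplace(0.0, sensitivity / epsilon)
    return float(clipped.mean() + noise)
```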

Prepare for law enforcement requests. Neural data could be highly valuable in a criminal investigation. Establish clear policies for handling requests from law enforcement to access neural data. Proactively communicate these policies to your customers to build trust.

Prepare for diligence scrutiny. Emerging companies should design privacy-forward products with the expectations of potential investors or acquirers in mind, and should document these design choices through internal and external written policies and procedures.

Why businesses must act now

By prioritizing privacy and ethical design now, businesses can not only gain consumer trust and differentiate themselves in the market but also shape the regulatory landscape to work in their favor, ensuring long-term success and sustainability.

In doing so, neurotechnology companies can navigate this frontier with foresight, balancing innovation with respect for the most personal frontier of all: the human mind.

Kristen Mathews, CIPP/US, is a partner in the cybersecurity, data protection and privacy practice group at Cooley LLP.