Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Just as the neurorights debate fires up globally, the EU Artificial Intelligence Act adds a new flavor to the regulatory soup of neurotechnologies in the EU.

Briefly, the AI Act applies when an AI system, either on its own or as a part of a product, is placed on the market in the EU, irrespective of where the provider — manufacturer — may be based, or when it is used by a deployer in the EU. It also applies to any provider or deployer, regardless of their place of establishment, if the output produced by the relevant AI system is intended to be used in the EU.

These obligations are in addition to existing legislation operators may already be subject to, such as the EU Medical Device Regulation and the General Data Protection Regulation.

Exceptions nonetheless exist, such as AI systems or models developed and used for the sole purpose of scientific research; pre-market research and testing, excluding testing in real-world conditions; systems developed exclusively for military, defense or national security purposes; and personal, nonprofessional uses.

What is an AI system?

An AI system under the act is "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This definition would include the complex machine learning algorithms increasingly used in the field of neuroscience. This is especially so for cognitive and computational neuroscience, which use AI to extract features from brain signals and translate brain activity. For example, convolutional neural networks can be used to decode motor intentions from electroencephalography, or EEG, data and translate them into outputs such as the movement of a robotic arm. In another example, generative adversarial networks can be used to reconstruct visual stimuli from brain activity.
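For illustration only, the sketch below shows the general shape of such a decoder: a small convolutional network, written here in PyTorch, that maps multi-channel EEG epochs to a handful of motor-intention classes. The channel count, layer sizes and class labels are assumptions chosen for the example, not parameters taken from any particular study or product.

```python
# A minimal, hypothetical sketch of the kind of model described above: a small
# convolutional network mapping multi-channel EEG epochs to motor-intention
# classes (e.g. "move left", "move right", "rest"). All shapes are illustrative.
import torch
import torch.nn as nn

class EEGMotorDecoder(nn.Module):
    def __init__(self, n_channels: int = 8, n_samples: int = 256, n_classes: int = 3):
        super().__init__()
        # Temporal convolution along each channel, then a spatial convolution that
        # mixes channels, loosely following common EEG decoding architectures.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 31), padding=(0, 15)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 4)),
        )
        self.classifier = nn.Linear(32 * (n_samples // 4), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) -> add a singleton "image" dimension
        x = x.unsqueeze(1)
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

if __name__ == "__main__":
    model = EEGMotorDecoder()
    fake_epochs = torch.randn(4, 8, 256)  # 4 synthetic EEG epochs, 8 channels, 256 samples
    logits = model(fake_epochs)
    print(logits.shape)                   # torch.Size([4, 3]) -> one score per intention class
```

In a real system, the predicted class would then be translated into a control output, such as a command sent to a robotic arm.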

An AI system can be used on a standalone basis or as a component of a product. In other words, it does not matter whether the AI system is physically integrated into a product or serves the product's functionality independently. To give an example, an EEG headset does not need to have AI embedded into its hardware. If the AI system supporting the headset sits in a connected app or cloud software, it would still qualify as an AI system.

However, qualifying as an AI system does not automatically mean falling within the scope of the AI Act and being regulated under it. The system would still need to fall under one of the risk categories set out in the act.

Brain-computer interfaces as tools for subliminal techniques

The AI Act prohibits AI systems that use subliminal techniques beyond a person's consciousness to materially distort human behavior and subvert free choice, in a manner that causes or is reasonably likely to cause significant harm to that person, another person or a group of persons.

With regard to neurotechnologies, Recital 29 suggests such subliminal techniques could further be "facilitated, for example, by machine-brain interfaces or virtual reality," with the European Commission guidelines adding that "AI can also extend to emerging machine-brain interfaces and advanced techniques like dream-hacking and brain spyware."

Dream hacking. Studies claim it is possible to induce lucid dreaming through technology such as sleep masks or smartwatches connected to smartphones. In theory, these systems detect when a person is in REM sleep through measurements such as EEG brain waves, eye movements and heart rate, and trigger the lucid dreaming state through sensory cues such as light or sound. In a small study, individuals have reportedly been able to communicate with the outside world from within their dreams by responding to simple mathematical problems or yes/no questions via predefined eye movements or muscle twitches.
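As a rough illustration of the detection-and-cue loop described above, the hypothetical sketch below classifies a window of sleep measurements as REM-like and, if so, triggers a sensory cue. The feature names and thresholds are invented for the example and do not reflect the detection logic of any real product or study.

```python
# An illustrative, rule-based sketch of the pipeline described above: classify a
# window of sleep measurements as REM and, if so, trigger a sensory cue.
# Features and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SleepWindow:
    eeg_theta_ratio: float    # relative theta-band power from EEG (hypothetical feature)
    eye_movement_rate: float  # rapid eye movements per minute (hypothetical feature)
    heart_rate_var: float     # heart-rate variability index (hypothetical feature)

def looks_like_rem(w: SleepWindow) -> bool:
    # Crude illustrative rule: REM sleep tends to show mixed-frequency EEG,
    # frequent rapid eye movements and relatively irregular heart rate.
    return w.eeg_theta_ratio > 0.4 and w.eye_movement_rate > 10 and w.heart_rate_var > 0.5

def maybe_trigger_cue(w: SleepWindow) -> str:
    # In the studies described above the cue is a light or sound pattern intended
    # to prompt lucidity; here it is just a string for illustration.
    return "play_audio_cue" if looks_like_rem(w) else "do_nothing"

if __name__ == "__main__":
    print(maybe_trigger_cue(SleepWindow(0.55, 14.0, 0.7)))  # -> play_audio_cue
    print(maybe_trigger_cue(SleepWindow(0.20, 2.0, 0.3)))   # -> do_nothing
```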

That said, the research is still at a considerably early stage, and there are challenges with deployment outside the laboratory and with interpreting data that may be mixed with other activity during sleep. Therefore, it is not clear whether there is currently a real-life scenario for dream hacking that materially distorts human behavior and causes, or is reasonably likely to cause, significant harm.

Brain spyware. The guidelines give the following example: "a game can leverage AI-enabled neuro technologies and machine-brain interfaces that permit users to control (parts of) a game with headgear that detects brain activity. AI may be used to train the user's brain surreptitiously and without their awareness to reveal or infer from the neural data information that can be very intrusive and sensitive (e.g. personal bank information, intimate information, etc.) in a manner that can cause them significant harm."

On personal bank information, while the guidelines do not clarify which brain-computer interface, or BCI, modality could reveal such information, they are likely referring to a well-known study suggesting that, under very controlled conditions, hackers could guess users' passwords from their brainwaves. However, before interpreting this as "mindreading," the nuances of this technique need to be explained.

At present, virtual reality gaming headsets with BCI functionality generally incorporate EEG. In simple terms, EEG measures the electrical activity of the brain and is mostly used for moving characters or selecting items in games. This means it can, in principle, reveal information relating to motor commands, such as the brain sending a signal to the index finger to press down, or to visual attention, such as where on the screen the person is looking.

The well-known study above does not relate to "training the brain of the individual" as the guidelines suggest, nor is it about recalling a person's memory or knowledge of their personal bank information without their awareness.

It is instead about hackers learning, through passive observation of an individual's activities, which type of brainwave corresponds to which muscle movement for that individual as they enter their password on a keyboard. It is akin to a spy camera capturing a person writing their password on a piece of paper, and it is a cybersecurity issue. It requires the person to intend to enter the information on a keyboard; it does not involve the individual's brain being trained surreptitiously, their behavior being "materially distorted" or their free choice being subverted. Therefore, unless there is other aligned research, the prohibition in Article 5(1)(a) is unlikely to apply to this example in the guidelines, simply because it would not fulfill all the requisite criteria.
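To illustrate why this is a supervised-learning and cybersecurity problem rather than mind reading, the deliberately abstract sketch below trains a classifier on hypothetical pairs of recorded signals and observed keystrokes: without the labels obtained by watching what the person actually typed, there is nothing for the model to learn. All data here is random noise, so the example recovers nothing.

```python
# Abstract illustration of the supervised-learning requirement described above:
# an attacker needs labelled (signal, observed keystroke) pairs before any
# classifier can be trained. The data below is pure random noise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical observation phase: 200 epochs of "brain signal" features, each
# paired with the key the observer saw being pressed (labels 0-9 for digit keys).
observed_signals = rng.normal(size=(200, 16))
observed_keys = rng.integers(0, 10, size=200)

# Without the observed_keys labels, the signals alone teach the model nothing.
classifier = LogisticRegression(max_iter=1000).fit(observed_signals, observed_keys)

# Applying the fitted model to new signals; accuracy stays at chance level because
# the inputs are noise, which is the point of the illustration.
new_signals = rng.normal(size=(5, 16))
print(classifier.predict(new_signals))
```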

Emotion recognition systems using neurotechnologies

The use of emotion recognition systems, or ERSs, in the workplace or in educational institutions is banned, except for medical or safety reasons, whereas their use in other environments is classified as high-risk. Notably, ERSs cover the inference or identification of both emotions and "intentions."

According to the act, examples of emotions and intentions include "happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement." The guidelines also add boredom, aggression, emotional arousal, anxiety, interest, attention and lying to the list.

On the other hand, "physical states such as pain or fatigue" or readily available expressions and gesturesare not considered emotion or intention unless they are used to infer emotion or intention.

While the guidelines expressly refer to using EEG for ERSs, this could extend to all neurotechnologies when used for detecting or inferring emotions and intentions, for example: neuromarketing using fMRI or EEG to infer consumer sentiment towards a brand and to customize advertising; monitoring employee engagement or anxiety with EEG; assessing a student's stress levels during learning tasks; and in more speculative instances, using P300 brain waves in courtrooms as a lie detection tool for evidence of a defendant's familiarity with details of a crime scene that were not known to the public.

In some circumstances, it may be difficult to decide when a neurotechnology should be classified as an ERS and when it should not. For example, fatigue is classified as a physical state and not an emotion. As such, unless the act distinguishes between physical and mental fatigue, inferring fatigue would not make the neurotechnology an ERS. On the other hand, measuring attention would, according to the guidelines, classify the neurotechnology as an ERS. Yet the two inferences are closely related: a person who is tired also tends to lack attention. Applying the act's provisions may therefore be challenging for providers and deployers in practice.

Biometric categorization

The AI Act prohibits biometric categorization systems that categorize individuals based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation, unless the AI system is ancillary to another commercial service and strictly necessary for objective technical reasons.

When neurotechnologies are combined with other modalities, such as eye tracking, they can potentially allow sensitive information like arousal to be inferred. This is especially important for virtual reality headsets, where both the content shown to an individual and their physiological reaction to that content can be observed. Therefore, the use of neurotechnologies to make such inferences and categorize individuals into such groups would be prohibited.

On the other hand, categorizing individuals according to health or genetic data would be classified as high-risk. This could be relevant, for example, if EEG in combination with other biometric data was used to infer a person's likelihood of developing Parkinson's disease or epileptic seizures, or their mental health status, and they were put into groups with other people on the same basis.

Finally, it is important to note that the same AI system can fall under multiple high-risk or prohibited categories under the AI Act. Providers and deployers of neurotechnologies should therefore assess the intended and reasonably foreseeable use cases of their AI systems through a wide lens.

Nora Santalu is a senior associate at Bird & Bird.