Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Autonomous robots with embedded artificial intelligence are proliferating across industry sectors, bringing advanced capabilities and applications to public spaces. Our relationship with technology is changing too, as autonomous engagement becomes more integrated into society.

Autonomous robots, such as Waymo's self-driving car, Knightscope's security robot and Unitree's Go1 robot dog, are examples of the more adaptive and interactive AI-powered machines evolving not only to interact with us, but also to identify us. For cities and municipalities looking to adopt widespread, operational robot use, safe and ethical deployment is critical, as is the need for clear guidelines.

As existing privacy mitigations may be insufficient for human-robot interactions, a robotics privacy framework is necessary to promote privacy-preserving design in the responsible deployment of robots with embodied AI. Failure to address privacy issues in these novel interactions may cause significant harm and erode public trust.

Challenges in human environments

Robots are not new, but their presence in public human environments with these new capabilities is. This raises several novel considerations.

Autonomous engagement. In traditional interactions between humans and technology, the user initiated and controlled the engagement; for example, a user starts an interaction by downloading an app or typing in a website's URL. With autonomous robots, the robot can now initiate engagement with a human.

Embodied AI creates a new user experience, one with which professionals in Silicon Valley, like myself, are already familiar, but one that many people in metropolitan areas will soon encounter for the first time when they interact with a robot's autonomous behavior.

The challenge for privacy professionals in this new user experience is that previously, only a human's behavior needed to be evaluated to determine privacy considerations. With the engagement capabilities of today's autonomous robots, both the human's behavior and the robot's behavior in relation to the human must be considered.

Control in the environment. An individual's perception of their right to privacy is influenced by the environment in which a human-robot interaction takes place, and who has control during the engagement. In an uncontrolled environment, where the robot's purpose, capabilities and operation are perceived as unclear, unintended privacy harms or consequences can occur.

Unlike humans, who communicate through speech and nonverbal cues and can make decisions intuitively, robots function based on input commands, relying on models trained on previously stored data. In a shared public environment, humans may rely on their own mental model of previous technology, for example surveillance drones, to determine how to interact with or react to an unknown moving robotic object.

The uncertainty of who is in control, whether a human, embedded AI, or a remote person operating the robot, can create an unexpected power dynamic, especially when movement or communication needs to be negotiated.

Imagine ordering food from a restaurant and having your name and order displayed on the delivery robot for everyone to see as it arrives at your apartment building. How comfortable are you disclosing your personal information? Knowing who or what is controlling the interaction, as well as who is receiving the information, highlights the concern for privacy in public spaces and raises issues of trust with autonomous robots.

Existing privacy mitigations no longer work. Before the rise of autonomous engagement, humans interacted with machine learning through applications or devices, like mobile phones, wearables and smart devices. The interactions were familiar and consistent, giving users significant control over their experiences. Data exchanges happened within the controlled environments of first-party websites and apps, governed by in-product privacy workflows.

In newer human-robot interactions, individuals may not know the entity behind the engagement (the company that owns the robot), who or what is influencing it (a remote operator or embedded AI), or how much choice or control they have over their privacy. This disruption of traditional privacy mitigations renders existing safeguards ineffective, highlighting the need for transparency in autonomous robots.

A privacy framework for embodied AI

Engineering teams across tech are being encouraged to break new ground in a greenfield of ambiguity without compromising privacy. The following guidelines will help those seeking to minimize privacy risks in human-robot interactions and address three key principles during robotic systems design: control, transparency and identification.

Identify and minimize data collection

Principle: Unless there is a specific need for identification, a person's identifiable attributes should always be protected.

A robot must be able to process data without collecting and storing personally identifiable information. To prevent identification:

  • Identify the necessary data for robot operation. This may include sensitive data types.
  • Minimize PII collection. In cases where identification is necessary, remove or pseudonymize any human identifiers whenever possible. Pseudonymization makes it more difficult to link data back to an individual; a minimal sketch of this step follows the list.
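As a rough illustration of that minimization step, here is a minimal sketch that replaces a direct identifier with a keyed hash before a detection event is stored. Everything here is hypothetical: the pseudonymize helper, the field names and the environment-variable key handling are assumptions for illustration, not part of any particular robotics stack.

```python
import hashlib
import hmac
import os

# Hypothetical key handling: the secret lives outside the stored data,
# and rotating it severs the link between old and new tokens.
PEPPER = os.environ.get("ROBOT_PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash.

    The same input maps to the same token, so the robot can reason about
    repeat encounters without storing who the person actually is.
    """
    return hmac.new(PEPPER, identifier.encode(), hashlib.sha256).hexdigest()

def record_encounter(raw_detection: dict) -> dict:
    """Strip PII from a detection event before it is logged or stored."""
    return {
        # Keep only what the robot needs to operate.
        "timestamp": raw_detection["timestamp"],
        "zone": raw_detection["zone"],
        # Store a pseudonym instead of a face ID or name.
        "subject_token": pseudonymize(raw_detection["face_id"]),
    }
```

A keyed hash is used rather than a plain one so that someone holding a list of known identifiers cannot simply hash them and match the stored tokens.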

Key data attributes considered privacy-impacting:

  • Single data attributes. Attributes collected by a robot that can directly identify an individual, such as biometric signatures — fingerprinting, facial recognition or retina scans — or sensitive personally identifiable information — driver's license numbers.
  • Combinations of data attributes. Attributes collected by a robot that can be combined to identify an individual, such as location data, street addresses, temporal data or environmental data, like temperature. A simple classification check is sketched after this list.
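To make the single-versus-combined distinction concrete, here is a minimal sketch of a check that flags records by how identifying their attributes are. The field names and the two-attribute threshold are illustrative assumptions, not a standard.

```python
# Hypothetical field names a robot's logs might contain; a real system
# would derive these sets from an actual data inventory.
DIRECT_IDENTIFIERS = {"face_id", "license_number", "fingerprint"}
QUASI_IDENTIFIERS = {"location", "street_address", "timestamp", "gait_signature"}

def privacy_impact(record: dict) -> str:
    """Classify a record by how identifying its attributes are.

    One direct identifier is enough on its own; otherwise, several
    quasi-identifiers in combination can still single a person out.
    """
    fields = set(record)
    if fields & DIRECT_IDENTIFIERS:
        return "directly identifying"
    if len(fields & QUASI_IDENTIFIERS) >= 2:
        return "identifying in combination"
    return "lower risk"
```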

If an individual's identity is accidentally captured, limiting retention periods for processing and storage can reduce identification risk. In some cases, you may want to assess whether anonymization, deidentification or other similar techniques are feasible to limit the risk of identification.
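As a minimal sketch of retention limiting, the function below drops stored records once they age past a configured window. The 24-hour value and the record shape are assumptions for illustration; the right retention period depends on the robot's purpose and on applicable law.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; tune to purpose and legal requirements.
RETENTION = timedelta(hours=24)

def purge_expired(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Drop stored records older than the retention window.

    Shorter retention narrows the window in which an accidentally
    captured identity could still be linked back to a person.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["timestamp"] <= RETENTION]
```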

Anonymization makes it infeasible to link data back to an individual, while deidentification removes or replaces identifying information.

Even with the best anonymization techniques, there is a risk that individuals in images or videos can be reidentified.

Assess control in the environment

Principle: A person in a shared space with an autonomous robot should be empowered to make informed decisions about their privacy.

Consider these factors that may influence an individual's concern for privacy in an environment where robots operate:

  • Environmental control. Privacy concerns may arise when individuals cannot control or regulate their environment, such as in public areas. Awareness of the robot's location and who has access to the same environment can help individuals anticipate and prevent risks associated with an unexpected robotic encounter.

Example: An autonomous robot entering or cleaning a café.

  • Considerations for sensitive locations. Sensitive areas, such as those near playgrounds or hospitals, will increase privacy risk. The use of robots in these areas should be carefully considered, and additional safeguards provided.

Recommendation: Provide clear and concise informational notices of robotic activity and data collection capabilities at entrances to areas where robots will operate. Also, provide contact information within the notice for possible privacy inquiries.

  • Robot operation. An individual's perception of privacy can be influenced by whether a robot is operating autonomously or has a remote operator.

Example: An autonomous robot in a public park with cameras in operation.

Recommendation: Use clear indicators to alert humans of a robot's presence, proximity and operation in a shared space. Examples may include visual or audio cues, such as LEDs that flash or change color to indicate the robot's status and whether cameras or microphones are active, or speakers that produce distinct sounds to signal the robot's approach or the activation of recording devices.
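As a minimal sketch of that recommendation, the snippet below maps a robot's sensor and control state to human-facing cues. The state fields and cue names are illustrative conventions assumed for the example, not a standard.

```python
from dataclasses import dataclass

@dataclass
class SensorState:
    camera_active: bool
    microphone_active: bool
    remote_operator: bool  # True when a remote human is teleoperating

def indicator_cues(state: SensorState) -> list[str]:
    """Map the robot's current state to human-facing cues.

    The goal is that every recording-capable or teleoperated state
    produces a visible or audible signal for people nearby.
    """
    cues = ["led:solid_green"]  # baseline: robot present, not recording
    if state.camera_active or state.microphone_active:
        cues.append("led:flashing_red")       # recording in progress
        cues.append("sound:recording_chime")  # audible cue for bystanders
    if state.remote_operator:
        cues.append("led:flashing_blue")      # a remote human is in control
    return cues
```

For instance, a robot with its camera on and no teleoperator would surface the flashing red LED and the recording chime alongside the baseline presence light.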

Enforce transparency

Principle: Individuals should be informed and notified of potential data collection or surveillance. Be transparent about robotic capabilities.

  • Sensory capabilities. Certain sensors on robots, such as cameras and microphones, can raise privacy concerns. Identify and address how these sensors might be perceived and used. Concerns may include surveillance of spaces and objects; for example, sensors like cameras, lidar and radar can be used to survey environments and track moving objects.

Example: Using sensors to track a person's movement or behavior without their knowledge or consent.

  • Capturing images, video or sound. Cameras, microphones and heat sensors can capture sensitive personal information.

Example: Using sensors to collect facial features or voice data for recognition purposes.

Recommendation: Display awareness of sensor capabilities where possible. Where visible or audible sensors on a robot could be considered disruptive to people in public, consider incorporating transparency into the design. For example, robots equipped with screens can use overlays or interstitials to convey that AI is present and in use.

Conclusion

Autonomous robots powered by AI will become more pervasive in our daily lives and fundamentally alter our modes of interaction. Building upon the success of our recently released AI Security Framework, Google's robotics privacy framework serves as a valuable tool for enabling a safer ecosystem across governments, businesses and organizations seeking to deploy privacy-preserving robotics solutions.

The robotics privacy framework highlights three key areas:

  • Novel interactions. Understanding the impact of autonomous robots in public, or uncontrolled, environments and how these interactions differ from traditional human-technology interactions.
  • New challenges of autonomous engagement. The privacy implications of human-robot interactions, including issues of control, transparency and user expectations.
  • Transparency of robotic capabilities. Transparency in data collection and usage practices while addressing privacy concerns related to sensors and data processing.

The framework's structured approach addresses these concerns by recommending:

  • Identifying data. Categorize data types that will be used to operate autonomous robots and classify them by sensitivity and purpose.
  • Assessing control in the environment. Ensure individuals have meaningful control over the collection, use and sharing of their data in shared spaces with robots.
  • Enforcing transparency. Promote transparency in the collection and usage practices of autonomous robots.

Erin Relford, CIPT, is a privacy engineer at Google.