Artificial intelligence is advancing rapidly. While it is a useful tool for organizations seeking to increase productivity, it can also create privacy risks. A global study revealed that more than 70% of consumers harbor some degree of fear toward AI. So, are organizations taking the proper steps to assess and mitigate the risks of using AI?

The 2019 IAPP-EY Governance Report revealed that, by and large, organizations are relying on existing tools to address the new risks posed by AI. Forty-one percent of respondents said their organization assesses and mitigates AI-specific risks using its standard privacy risk analysis. Only 6% of participating organizations have privacy safeguards and guidelines in place specifically for AI, while another 16% said their organizations are currently planning safeguards and guidelines targeted at AI risks and threats. Meanwhile, 36% of respondents stated their organizations do not view AI as a unique risk factor at all.

Respondents were asked about safeguarding against AI to determine whether the technology has prompted organizations to adopt specialized risk analysis over the years; after all, AI is not a new challenge for organizations. A 2017 public debate organized by the French data protection authority, the CNIL, highlighted the unique risks AI could pose to organizations due to the vast amounts of data processed, the potential for bias and discrimination, and the hybridization of humans and machines working together toward a common yet potentially malicious goal. These unique risks may not be properly assessed with a standard privacy risk analysis, and many privacy experts believe responsible use of AI means having specific safeguards that extend beyond it.

AI often requires specific safeguards because of attacks that exploit the way AI algorithms work internally. This class of attacks is known as adversarial machine learning: attackers feed manipulated data into an AI system to make it behave in unintended or malicious ways. The changes attackers introduce can be so minuscule that even human reviewers miss them, rendering the AI mechanism unreliable without the organization's knowledge. One example occurred when Microsoft released its Twitter chatbot Tay, which internet users manipulated into tweeting offensive and derogatory statements. Risks like these may be enough for organizations to limit their use of AI, which may explain why many privacy pros do not perceive AI as a threat to their organization at all.
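
To give a rough sense of how small such manipulations can be, below is a toy sketch (my own illustration, not drawn from the report or any cited article) of an adversarial perturbation against a made-up linear classifier: every input feature is nudged by at most 0.05, yet the model's decision flips.

```python
# Toy illustration of an adversarial perturbation on a hypothetical linear
# classifier (all numbers are made up for demonstration purposes).
import numpy as np

rng = np.random.default_rng(0)

# A "trained" linear model: score = w . x, predict positive if the score > 0.
w = rng.normal(size=100)

# An input the model currently classifies as positive.
x = 0.02 * np.sign(w) + rng.normal(scale=0.01, size=100)
print("original score:", w @ x)                # positive

# Adversarial tweak (the fast-gradient-sign idea applied to a linear model):
# nudge every feature slightly against the weights.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print("largest change to any feature:", np.max(np.abs(x_adv - x)))  # == epsilon
print("adversarial score:", w @ x_adv)         # negative: the prediction flips
```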

The question remains: What types of safeguards and guidelines can organizations adopt to analyze the risks of AI? Part six of the series "Security & Privacy considerations in Artificial Intelligence & Machine Learning" provides useful ways organizations can reduce the risk of threats from using AI. In addition to their standard privacy risk analysis, organizations should focus on building privacy into the AI mechanism itself. Differential privacy is one approach; it provides a mathematical framework that limits how much the AI mechanism can "remember" about any individual record. This helps limit the retention of sensitive data that could potentially reveal the identities of the individuals included in a dataset. Another way to build privacy into AI is to use the Private Aggregation of Teacher Ensembles framework. PATE applies differential privacy by dividing the private data into subsets and training a separate model on each subset. The framework then combines the predictions of the individual models, adding noise to produce a "noisy aggregation" of the models' predictions. For more in-depth information about the PATE framework, check out the article written by its creators, "Privacy and machine learning: two unexpected allies?"
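
To make the idea of noisy aggregation more concrete, here is a minimal sketch of a PATE-style workflow (my own toy illustration, not code from the PATE authors): the dataset, the number of teachers, the nearest-centroid "teacher" models and the noise scale are all assumptions chosen for brevity.

```python
# Minimal sketch of PATE-style "noisy aggregation". Each "teacher" is trained
# on a disjoint partition of hypothetical sensitive data; a query point is
# labeled by a majority vote over the teachers, with Laplace noise added to
# the vote counts so no single training record can noticeably change the answer.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensitive dataset: two Gaussian blobs with binary labels.
y = rng.integers(0, 2, size=3000)
X = rng.normal(scale=0.7, size=(3000, 2)) + 2.0 * y[:, None]

# Split the data into disjoint partitions, one per teacher.
num_teachers = 30
partitions = np.array_split(rng.permutation(len(X)), num_teachers)

# "Train" each teacher on its own partition. For simplicity each teacher is a
# nearest-centroid classifier; in practice it could be any model.
teachers = []
for idx in partitions:
    Xi, yi = X[idx], y[idx]
    teachers.append(np.stack([Xi[yi == c].mean(axis=0) for c in (0, 1)]))

def teacher_predict(centroids, point):
    return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))

def noisy_aggregate(point, epsilon=0.5):
    """Label a query by a Laplace-noised majority vote over all teachers."""
    votes = np.bincount([teacher_predict(t, point) for t in teachers], minlength=2)
    noisy_votes = votes + rng.laplace(scale=1.0 / epsilon, size=2)
    return int(np.argmax(noisy_votes))

print("noisy PATE label for [2.1, 1.9]:", noisy_aggregate(np.array([2.1, 1.9])))
```

Because the answer depends only on the noised vote counts, changing any single training record can shift each count by at most one vote, which is what bounds how much any individual can influence the output.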

Federated learning is another approach an organization may be able to take to preserve privacy in its AI mechanisms. With this approach, models are trained locally on separate subsets of the data, and only the resulting model updates are aggregated. This takes away the need to pool large amounts of raw data in one place, which reduces the risk of identifying subjects within the dataset. Another technique organizations can use to counter possible threats is homomorphic encryption, which allows the AI mechanism to perform meaningful computations on encrypted data without ever having access to the plain-text data or the decryption key.
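
As a rough sketch of how federated learning keeps raw data local, the example below simulates federated averaging with a handful of made-up clients and a simple linear model; the client data, learning rate and number of rounds are all hypothetical, and a real deployment would use a dedicated framework rather than this toy loop.

```python
# Minimal sketch of federated averaging (a toy illustration under simplified
# assumptions, not a production framework). Each simulated "client" keeps its
# raw data locally and runs a few gradient steps on a shared linear model;
# only the updated weights travel back to the server, where they are averaged.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 clients, each holding private data for the same
# linear-regression task y = x @ true_w + noise.
true_w = np.array([1.5, -2.0, 0.5])
clients = []
for _ in range(5):
    Xc = rng.normal(size=(200, 3))
    yc = Xc @ true_w + rng.normal(scale=0.1, size=200)
    clients.append((Xc, yc))

def local_update(w, Xc, yc, lr=0.05, steps=20):
    """A few steps of gradient descent on one client's private data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2.0 * Xc.T @ (Xc @ w - yc) / len(yc)
        w -= lr * grad
    return w

# Federated rounds: the server never sees raw data, only model weights.
global_w = np.zeros(3)
for _ in range(10):
    client_weights = [local_update(global_w, Xc, yc) for Xc, yc in clients]
    global_w = np.mean(client_weights, axis=0)   # federated averaging

print("learned weights:", np.round(global_w, 3))  # should be close to true_w
```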

As AI continues to grow at a fast rate and attackers find new ways to infiltrate AI mechanisms, organizations that use AI should start thinking about implementing specific safeguards and guidelines to protect against the unique threats it poses. It will be interesting to see how organizations choose to safeguard against AI risks in the future, as the risks of this technology are ever-growing. Perhaps next year more organizations will have specific safeguards in place or in the planning stages, or perhaps AI risks will continue to multiply, leaving organizations unprepared to properly mitigate them. Only time will tell.
