Artificial intelligence technology is proliferating rapidly and becoming a critical component of many organizations' operations. However, the rise of AI-based systems also poses significant challenges for organizations, from reputational harm to data privacy concerns and new attack vectors.

AI tools, if tampered with, can cause reputational damage. In 2016, Microsoft's AI chatbot Tay, designed to engage users on social media, was quickly exploited by users who fed it offensive and racist content. Within 24 hours, Tay began generating inappropriate and hateful messages, leading Microsoft to shut it down.

In 2018, it was revealed that Amazon's AI recruiting tool, designed to streamline hiring processes, exhibited bias against female candidates. The tool was trained on historical hiring data that favored men, leading it to penalize resumes containing words like "women's" or names associated with women.

These incidents underscore the need for safeguards against unintended model outputs and the risks of AI systems learning from biased or malicious data.

Developing these safeguards requires a collaborative approach. Given AI's unique nature, it is more important than ever for the teams responsible for privacy, security and governance to work together toward effective AI governance.

Governance teams need to ensure the integrity of these systems, and security teams need to implement safeguards against potential threats. Moreover, with new AI regulations being drafted across multiple jurisdictions, privacy teams must ensure data handling complies with all relevant regulations and respects consumer data privacy rights.

The proliferation of AI and AI-based systems poses holistic risks to organizations, impacting privacy, security and governance teams.

Privacy risks

AI systems rely on massive amounts of data, often including sensitive personal information, to train their models. Improper handling of this data can lead to privacy violations, legal and regulatory issues, and loss of customer trust.

Keeping up with the regulatory landscape is challenging and brings regulatory and compliance risks. Regulators around the world have been active in introducing legislation and guidance to address AI, including the EU AI Act, the U.S. National Institute of Standards and Technology's AI Risk Management Framework, U.S. President Joe Biden's recent executive order on AI and Canada's proposed Artificial Intelligence and Data Act.

It will be crucial to understand AI systems, conduct assessments, map the data they process and the risks they pose against the controls required by various regulations, and use this information to provide transparency to end users.

Transparency is at the core of establishing trust with consumers, and AI brings inherent transparency risks. Given the evolving nature and complexity of AI systems, there is often little clarity about how personal data is acquired and used for training or inference, jeopardizing consumer transparency. Slack, for example, faced public backlash when users discovered their data was being used to train AI models without adequate transparency or an opt-in.

Personal data used to train AI models can also be inadvertently exposed. For example, a model trained on employee data without adequate controls on its outputs might disclose sensitive employee information to unauthorized users interacting with it.
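As a simple illustration of what such output controls could look like, the sketch below screens a model's response for common personal-data patterns before returning it to the user. The regular expressions and the generate_response callable are hypothetical placeholders rather than any particular product's API; production systems would rely on far more robust detection.

    import re

    # Hypothetical patterns for common personal-data formats. Illustrative only;
    # real systems would use more robust detection, such as trained NER models.
    PII_PATTERNS = [
        re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),    # email addresses
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-like numbers
        re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"), # phone-like numbers
    ]

    def redact_pii(text: str) -> str:
        """Replace anything matching a known personal-data pattern."""
        for pattern in PII_PATTERNS:
            text = pattern.sub("[REDACTED]", text)
        return text

    def answer_user(prompt: str, generate_response) -> str:
        """Wrap an arbitrary model call with an output-side privacy control."""
        raw = generate_response(prompt)   # hypothetical model call
        return redact_pii(raw)            # screen before returning to the user

    if __name__ == "__main__":
        fake_model = lambda p: "Jane's SSN is 123-45-6789, email jane@example.com"
        print(answer_user("Who is Jane?", fake_model))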

The potential for AI models to make inferences that violate privacy, such as predicting personal information from seemingly unrelated data points, is an ongoing risk as well. Racial bias in health care algorithms is one example.

Security risks

AI introduces new cyber vulnerabilities that malicious actors can potentially exploit.

Adversarial attacks can involve poisoning training data or manipulating model parameters to skew AI outputs. A notable example is the 2016 incident with Microsoft's Tay chatbot, mentioned above, which was manipulated into making offensive statements.
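To make the mechanism concrete, here is a minimal sketch of label-flipping data poisoning on a toy dataset, assuming scikit-learn is available. Real attacks are more targeted, but the principle is the same: corrupt the data a model learns from and its behavior shifts.

    # Toy illustration of training-data poisoning via label flipping.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Baseline model trained on clean data.
    clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # An attacker flips 30% of the training labels (the "poison").
    rng = np.random.default_rng(0)
    poisoned_y = y_train.copy()
    flip = rng.choice(len(poisoned_y), size=int(0.3 * len(poisoned_y)), replace=False)
    poisoned_y[flip] = 1 - poisoned_y[flip]

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

    print("clean accuracy:   ", clean_model.score(X_test, y_test))
    print("poisoned accuracy:", poisoned_model.score(X_test, y_test))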

Model extraction attacks aim to steal proprietary AI models, leading to intellectual property theft.
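A simplified sketch of how extraction can work, again using scikit-learn as a stand-in: an attacker with only query access labels inputs of their own with the victim model's predictions and trains a surrogate that approximates it.

    # Minimal model-extraction sketch: train a surrogate from query responses.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
    victim = RandomForestClassifier(random_state=1).fit(X, y)  # "proprietary" model

    # The attacker queries the victim with synthetic inputs of their own.
    queries = np.random.default_rng(1).normal(size=(5000, 10))
    stolen_labels = victim.predict(queries)

    # A surrogate trained purely on the victim's responses.
    surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

    agreement = (surrogate.predict(X) == victim.predict(X)).mean()
    print(f"surrogate agrees with victim on {agreement:.0%} of inputs")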

AI systems also pose a new attack vector: external AI models or code-generation tools adopted to improve the developer experience may introduce backdoors or malicious code into core products. A strategy for carefully integrating AI-generated code into the product development life cycle will require elevated focus.

Given the ease with which existing scripts and malicious code can be modified using generative AI to evade defenses, organizations must be prepared for more targeted attacks. AI technology is arming script kiddies with more powerful tools to identify and exploit vulnerabilities.

Reputational risks

Unethical or biased AI decisions can severely damage public trust and an organization's reputation.

AI models can exhibit discriminatory bias against protected groups, as seen in the LinkedIn hiring algorithm controversy, where certain candidates were unfairly disadvantaged.

AI systems can also lack transparency and accountability, which can lead to public distrust in AI decision-making processes.

Errors or misuse of AI systems can cause tangible harm, such as incorrect medical diagnoses or unjust legal decisions, leading to backlash and loss of credibility.

Ethical concerns about AI decision-making in life-or-death situations, such as those involving autonomous weapons or health care decisions, can lead to public alarm and reputational damage if the AI systems are seen as lacking ethical safeguards.

Individual roles

To manage these risks effectively, privacy, security and governance teams need to play specific roles. For governance and trust teams, it is important to consider:

  • Establishing an AI task force comprising privacy, security, engineering and legal teams to draft and enforce an AI governance policy.
  • Continuously evaluating the ethical implications of using AI technologies within the organization.
  • Continuously educating and enabling employees on the risks and organizational policies related to AI use, given the rapidly changing landscape.

For privacy teams, it is important to:

  • Own the process of conducting risk assessments for AI projects to identify and mitigate privacy and compliance risks.
  • Implement privacy-by-design principles in AI development to build privacy-focused AI systems.
  • Provide guidance on data anonymization and pseudonymization during model training, as in the sketch after this list.
  • Build workflows to ensure transparency and manage user consent, preferences and data subject rights effectively.
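As one hedged example of what such anonymization guidance might translate into, the sketch below pseudonymizes direct identifiers before records enter a training pipeline. The field names and salt handling are illustrative assumptions; real pipelines should use vetted anonymization tooling and proper key management.

    # Minimal sketch: replace direct identifiers with salted hashes before
    # records are used for model training, keeping non-identifying features.
    import hashlib

    DIRECT_IDENTIFIERS = {"name", "email", "employee_id"}   # assumed schema
    SALT = b"rotate-and-store-me-securely"                  # placeholder secret

    def pseudonymize(record: dict) -> dict:
        """Return a copy of the record with direct identifiers hashed."""
        cleaned = {}
        for field, value in record.items():
            if field in DIRECT_IDENTIFIERS:
                digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
                cleaned[field] = digest[:16]
            else:
                cleaned[field] = value
        return cleaned

    if __name__ == "__main__":
        row = {"name": "Jane Doe", "email": "jane@example.com",
               "employee_id": 4821, "tenure_years": 3, "role": "analyst"}
        print(pseudonymize(row))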

For security teams, it is important to:

  • Strengthen AI/machine learning infrastructure to handle extensive training data securely.
  • Develop robust monitoring capabilities to detect and prevent theft of proprietary AI models.
  • Implement measures to prevent sensitive data from flowing into unauthorized AI solutions, as in the sketch after this list.
  • Educate employees on AI-related risks, such as deepfakes and voice-based vishing attacks.
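One way such measures could be sketched is an outbound check that blocks prompts containing sensitive patterns, or destined for unapproved services, before they leave the organization. The allow-list, patterns and exception below are illustrative assumptions, not the behavior of any specific DLP product.

    # Minimal sketch of an outbound control for prompts sent to AI services.
    import re

    APPROVED_AI_ENDPOINTS = {"https://internal-llm.example.com"}  # assumed allow-list
    SENSITIVE_PATTERNS = [
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # SSN-like numbers
        re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # card-number-like strings
        re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # embedded credentials
    ]

    class BlockedPromptError(Exception):
        pass

    def check_outbound_prompt(prompt: str, endpoint: str) -> None:
        """Raise if the destination is unapproved or the prompt looks sensitive."""
        if endpoint not in APPROVED_AI_ENDPOINTS:
            raise BlockedPromptError(f"{endpoint} is not an approved AI service")
        for pattern in SENSITIVE_PATTERNS:
            if pattern.search(prompt):
                raise BlockedPromptError("prompt appears to contain sensitive data")

    if __name__ == "__main__":
        try:
            check_outbound_prompt("Summarize card 4111 1111 1111 1111",
                                  "https://internal-llm.example.com")
        except BlockedPromptError as err:
            print("blocked:", err)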

How can they work together?

Collaboration between these teams is crucial for effective AI governance.

A cross-functional AI task force with representation from key stakeholders across privacy, security and governance can be responsible for developing AI guidelines and policies, approving the procurement of new AI technologies so their risks are carefully evaluated and mitigated, and ensuring AI safety is accounted for when new systems are developed.

Regularly scheduled meetings can provide a space for the AI task force to discuss industry trends, emerging threats and new use cases for AI technologies.

Dedicated communication channels can give employees the ability to ask questions and gain timely information about approved AI tools and their use. Use cross-functional forums like company all-hands or department-specific all-hands meetings to communicate organization-wide updates around AI use and considerations.

The rise of AI systems has led to innovative technologies designed to identify and catalog AI systems within an organization and supply chain. Evaluating and implementing these tools is essential to enhance an organization's AI governance framework. Integrating these advanced technologies can help maintain oversight, ensure compliance, and effectively manage the growing complexity of AI applications.

Conclusion

With the rapid advancements in AI technologies, working in isolation is no longer effective for privacy, security and governance teams.

Creating an AI task force that fosters open communication, clearly defines responsibilities and discusses challenges and risks in a timely manner will provide immediate benefits.

By working together, these teams can ensure AI systems are developed and deployed responsibly, ethically and securely, safeguarding the organization's reputation and integrity.

Sanket Kavishwar is senior product manager at Relyance AI.

Kenneth Moras is security GRC Lead at Plaid.