Artificial intelligence is reshaping the advertising industry, enabling unprecedented levels of personalization, efficiency and audience targeting. The IAPP's AI Governance Profession Report 2025 shows that across sectors, 16% of companies use AI for personalizing experiences and 16% use it for customer interactions. Among marketers, 69% have already integrated AI into their marketing operations, with nearly 20% allocating more than 40% of their budget to AI-driven campaigns.  

However, as AI becomes increasingly embedded in advertising strategies, it also raises significant ethical concerns, from data privacy risks to algorithmic bias and the potential for consumer manipulation. 

These challenges cannot be addressed by technology alone — they require a collaborative approach among stakeholders who shape the advertising ecosystem, including regulators, advertisers, technology companies, civil society organizations and consumers themselves. By fostering transparency, fairness and human oversight, a stakeholder-driven approach can help align AI-powered advertising with ethical principles and societal values. 

Many entities across the advertising sector have built responsible AI use policies that identify and counteract potential risks associated with using AI in their organizations. In a survey of AI risks that sampled entities ranging from trade organizations like the Association of National Advertisers to large marketing companies like Salesforce to self-regulatory agencies like the Children's Advertising Review Unit, several risks surfaced repeatedly: algorithmic bias, hallucinations, data privacy risks, confusion over whether content is AI-generated and intellectual property concerns. 

Algorithmic bias, when an algorithm generates unfair or discriminatory outputs, can harm businesses in several ways. For example, marketing campaigns may be mistargeted based on inaccurate assumptions, or businesses may make flawed product decisions based on preferences customers do not actually hold. Hallucinations, where an AI generates outputs that are false or fabricated, similarly produce results that can range from useless to misleading. The most robust policies combat these possibilities both proactively and reactively. 

Marketers can protect against potential algorithmic bias and hallucinations at each step of AI deployment, starting with training the AI on a high-quality, high-quantity dataset. "Biased data and biased models lead to biased results," Jennifer Chase, chief marketing officer and vice president of SAS, wrote in an article for Forbes. AI models trained on broad, well-audited datasets are more likely to accurately reflect the real world. To this end, multiple companies have created tools that make obtaining and vetting datasets easier. Google has released Dataset Search, a search engine for datasets that are freely available on the web. Amazon's SageMaker Ground Truth offers human input at multiple points in the training process, such as giving human feedback on the quality of a model's responses or labeling data to more easily train an AI.  
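As a concrete illustration of what vetting can look like, the sketch below checks whether any audience group is so underrepresented in a training set that a model could skew toward the majority. The dataset, column name and threshold are hypothetical, and a real audit would go much further.

```python
# Illustrative sketch only: flag groups that are underrepresented in a
# hypothetical training dataset before it is used to train a targeting model.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str, min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the data and flag thin coverage."""
    shares = df[group_col].value_counts(normalize=True).rename("share")
    report = shares.to_frame()
    report["underrepresented"] = report["share"] < min_share
    return report

# Toy data: 90% urban, 8% rural, 2% remote customers.
data = pd.DataFrame({"region": ["urban"] * 90 + ["rural"] * 8 + ["remote"] * 2})
print(audit_training_data(data, "region"))
```

A check like this does not fix bias on its own, but it surfaces gaps early enough to rebalance or supplement the data before training begins.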

Policies from companies like Salesforce and PricewaterhouseCoopers stress the importance of building guardrails into the AI before deployment and testing and retesting outputs after deployment. For instance, Salesforce's hallucination reduction policies restrict a model's output to a specified scope, and its mindful friction practice introduces "pauses in the user experience to ensure intentional human engagement at critical junctures." PwC's Responsible AI playbook recommends training employees to know "how to verify GenAI's outputs, create channels to report suspect results, and establish a risk-based system to review these outputs." 
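A minimal sketch of how such guardrails might look in practice appears below. It is not Salesforce's actual implementation; the approved-topic list, model call and reviewer hook are all stand-ins for whatever systems a given organization uses.

```python
# Illustrative sketch, not Salesforce's implementation: restrict output to an
# approved scope and insert a deliberate pause for human sign-off.
ALLOWED_TOPICS = {"spring_sale", "loyalty_program"}  # hypothetical approved scope

def generate_ad_copy(topic: str, model_generate) -> str:
    """Only generate copy for topics inside the approved scope."""
    if topic not in ALLOWED_TOPICS:
        raise ValueError(f"Topic '{topic}' is outside the approved scope.")
    return model_generate(topic)  # whatever text-generation backend is in use

def publish_with_friction(draft: str, reviewer_approves) -> bool:
    """Mindful-friction step: a human must approve before anything ships."""
    if reviewer_approves(draft):
        print("Published:", draft)
        return True
    print("Held for human revision.")
    return False

# Toy run with stand-ins for the model and the reviewer.
draft = generate_ad_copy("spring_sale", lambda t: f"Don't miss our {t.replace('_', ' ')}!")
publish_with_friction(draft, reviewer_approves=lambda text: "guarantee" not in text.lower())
```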

Even when an AI's inputs are high quality, training is a continual process, so auditing a model's outputs can show whether it needs to be updated or corrected. Some marketing policies require regular evaluations using tools like TensorFlow's Fairness Indicators or IBM's AI Fairness 360, which check for skewed or disparate outcomes. Human review can also provide valuable insight: larger errors may be easy to spot, but experts can identify subtle hallucinations or bias and help correct them. 
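The sketch below illustrates the kind of disparity check those toolkits automate, written directly in Python rather than using either library; the column names, groups and data are hypothetical.

```python
# Minimal sketch of a disparity check of the sort Fairness Indicators or
# AI Fairness 360 automate: compare favorable-outcome rates across groups.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favorable-outcome rates (unprivileged / privileged).
    Values well below 1.0 suggest the model's outputs are skewed."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[unprivileged] / rates[privileged]

# Toy audit log: which consumers were shown a promotional offer.
audit = pd.DataFrame({
    "age_band": ["18-34"] * 50 + ["65+"] * 50,
    "shown_offer": [1] * 40 + [0] * 10 + [1] * 20 + [0] * 30,
})
print(disparate_impact(audit, "age_band", "shown_offer",
                       privileged="18-34", unprivileged="65+"))  # 0.5
```

A ratio of 0.5 here would flag that older consumers see the offer half as often, prompting the kind of expert review described above.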

A report and recommendations from the Interactive Advertising Bureau collected advice from such experts, many of whom advocate for vigilant analysis of AI models to avert hallucinations or bias. Noticing flaws in a model does not necessarily indicate malintent — they can happen even "where perfectly well-meaning models deployed by very smart people learned to do things they were not intended to do, (which can) cause brand damage." 

Another way to mitigate the risk of hallucinations or bias is to limit how AI is used. The EU AI Act defines some AI uses as higher risk than others and imposes more stringent requirements on higher-risk applications. Recognizing this concern, some marketing policies advise against using AI to determine an individual's eligibility for employment, credit, health care or housing, or to make other decisions with legally significant effects. 2X Marketing's policy bars employees from using AI for human resource-related matters like recruitment and hiring, while consumer goods giant Unilever's policy requires that "any decision that has a significant life impact on an individual should not be fully automated." Such limitations and human oversight can help organizations catch potential issues before a consumer ever interacts with an AI. 
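As an illustration only, a use-limitation policy like these could be operationalized as a simple review gate. The categories and routing rules below are hypothetical and are not drawn from any company's actual policy.

```python
# Hypothetical sketch of operationalizing a use-limitation policy: block or
# escalate AI use cases that touch legally significant decisions.
HIGH_RISK_USES = {"employment", "credit", "health_care", "housing"}

def review_ai_use_case(use_case: str, human_decision_maker: bool) -> str:
    """Route a proposed AI use case according to its risk level."""
    if use_case in HIGH_RISK_USES and not human_decision_maker:
        return "blocked: significant-impact decisions must not be fully automated"
    if use_case in HIGH_RISK_USES:
        return "escalate: requires legal and privacy review before deployment"
    return "approved: standard monitoring and audits apply"

print(review_ai_use_case("credit", human_decision_maker=False))
print(review_ai_use_case("ad_copy_generation", human_decision_maker=True))
```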

Every phase of an AI's life cycle — from training to deployment — depends on data: the datasets that train it, the inputs it consumes, and the feedback loops that refine its model. Each of those stages can expose an individual's personal information, whether from proprietary customer records in a training corpus, sensitive information a customer submits to a chatbot, or usage logs that go into model updates. Given just how much data an AI needs, these risks can seem daunting, but lawmakers, regulators and professionals alike are developing structured ways to evaluate and mitigate these concerns.  

Governance frameworks

Many AI and data privacy laws around the world, including state laws in the U.S., require covered entities to conduct privacy impact assessments, also called data protection impact assessments. The U.S. Department of Commerce's Office of Privacy and Open Government defines a PIA as an "analysis of how information in identifiable form is collected, maintained, stored, and disseminated, in addition to examining and evaluating the privacy risks and the protections and processes for handling information to mitigate those privacy risks."  

Many companies must conduct these assessments already, so extending them to AI systems provides a clear way for companies to understand the provenance of their data and identify points where their systems could be more robust. 

Frameworks like the conformity assessment procedure for AI created by experts from the University of Oxford and the University of Bologna provide additional assessment methods that businesses can use to "prevent or minimise the risks of AI behaving unethically and damaging individuals, communities, wider society, and the environment." These systems give businesses actionable steps to take so they can adapt to rapidly developing legislation and regulations. They also emphasize the importance of clear communication with consumers about how AI interacts with their data. 

The Association of National Advertisers' Ethics Code of Marketing Best Practices advocates for marketers to also implement notice and transparency measures so that consumers know what data is being collected, when and for what purpose. Surveys, such as one from Pew Research Center, show that consumers often do not know when they are interacting with an AI and, when a product is advertised as using AI, they may not understand what purpose the AI serves. However, consumers say they want transparency surrounding AI use in marketing and media, and they are more likely to trust companies that have policies on how to use AI ethically. 
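One way to make such transparency routine is to attach a consumer-facing disclosure to every AI-assisted asset. The sketch below is a hypothetical illustration; the class, fields and wording are assumptions, not any organization's actual practice.

```python
# Hypothetical sketch of a transparency measure: tag each AI-assisted asset
# with a disclosure of AI involvement and the data used for targeting.
from dataclasses import dataclass, field

@dataclass
class AdAsset:
    copy: str
    ai_assisted: bool
    data_sources: list = field(default_factory=list)  # data that informed targeting

    def disclosure(self) -> str:
        label = "Created with AI assistance. " if self.ai_assisted else ""
        sources = ", ".join(self.data_sources) or "none"
        return f"{label}Data used for targeting: {sources}."

asset = AdAsset("Spring sale starts Friday!", ai_assisted=True,
                data_sources=["loyalty program signups", "site analytics"])
print(asset.disclosure())
```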

Weaving technical controls and governance processes through every stage of the AI life cycle, from data collection to data curation to model training to deployment and beyond, builds a more resilient system. By adopting these principles, the advertising industry can ensure that AI-driven marketing remains ethical, consumer-friendly and aligned with broader societal values.  

Tips for businesses of all sizes

Based on the key principles laid out earlier, there are many good practices that businesses of any size can observe when using AI: 

  • Establish clear guidelines and policies for the use of AI in marketing. 

  • Train employees on ethical AI practices. 

  • Implement appropriate data governance procedures.  

  • Diligently monitor AI systems and conduct audits. 

  • Protect consumer rights and welcome productive feedback. 

  • Verify and fact-check all content, regardless of origin. 

Why it matters

By prioritizing the ethical use of AI in marketing, businesses have a significant opportunity to cultivate trust with their consumers. Ethical AI use is not only instrumental in fostering that trust, but also in making the most of the technology itself, as businesses align their AI systems with social norms and values.  

Furthermore, with uncertainty around the future of third-party cookies and other foundational advertising methods, the adtech industry can benefit by supplanting its existing methods with AI. Potential uses for AI in marketing continue to multiply, ranging from generating ad copy and images to analyzing campaign metrics from anonymized datasets to automating customer service and beyond; indeed, more uses seem to pop up every day. 

However, precisely because AI has so many potential uses, adhering to ethical standards and having governance infrastructure in place can help limit those uses to what is necessary or helpful to a business. When you have a hammer, everything can look like a nail, so it is wise to identify which problems AI can help with and exactly how the business can use it to solve them.  

Keeping these principles in mind also helps future-proof organizational practices against shifting legislation, regulations and rules. Adhering to best practices helps businesses anticipate future compliance requirements, and having governance infrastructure in place can aid in adapting when those requirements change. 

AI continues to redefine the landscape of advertising, so ensuring its ethical deployment will be critical in preserving consumer trust and upholding industry standards. By adhering to principles like fairness, transparency, privacy protection and human oversight, businesses can not only mitigate risks but also harness AI's potential responsibly.  

As AI-driven marketing becomes increasingly sophisticated, a thoughtful, proactive approach will be key to responding to the whirlwind of technological and regulatory changes that define the field of AI right now. Establishing solid, organization-wide principles builds resilience and fosters a future where innovation thrives without compromising ethical standards or consumer rights. 

C. Kibby is a Westin Research Fellow for the IAPP.

Special thanks to Aly Apacible-Bernardo, former legal research associate for the IAPP, for her research contributions during the drafting of this article.