The ethical use of AI in advertising


Contributors:
C. Kibby
CIPP/E, CIPP/US
Former Westin Fellow
IAPP
Artificial intelligence is reshaping the advertising industry, enabling unprecedented levels of personalization, efficiency and audience targeting. The IAPP's AI Governance Profession Report 2025 shows that across sectors, 16% of companies use AI for personalizing experiences and 16% use it for customer interactions. Among marketers, 69% have already integrated AI into their marketing operations, with nearly 20% allocating more than 40% of their budget to AI-driven campaigns.
However, as AI becomes increasingly embedded in advertising strategies, it also raises significant ethical concerns, from data privacy risks to algorithmic bias and the potential for consumer manipulation.
These challenges cannot be addressed by technology alone — they require a collaborative approach among stakeholders who shape the advertising ecosystem, including regulators, advertisers, technology companies, civil society organizations and consumers themselves. By fostering transparency, fairness and human oversight, a stakeholder-driven approach can help align AI-powered advertising with ethical principles and societal values.
Many entities across the advertising sector have adopted responsible AI use policies that identify and counteract potential risks associated with using AI in their organizations. In a review of such policies spanning entities from trade organizations like the Association of National Advertisers, to large marketing companies like Salesforce, to self-regulatory agencies like the Children's Advertising Review Unit, several risks recurred: algorithmic bias, hallucinations, data privacy risks, uncertainty over whether content is AI-generated, and intellectual property concerns.