As artificial intelligence legislation and guidance develop across the U.S., the issue of children's privacy has moved to the forefront. In particular, increased use of AI technology could change how children perceive certain advertising practices and, in turn, raise privacy risks around serving personalized ads to minors.
Companies and advertisers looking to utilize AI must, for now, follow existing federal and state-level guidelines while a concrete regulatory regime takes shape. Existing standards also factor into AI accountability in children's advertising, including the U.S. Federal Trade Commission's Children's Online Privacy Protection Rule and prohibitions on deceptive marketing tactics under Section 5 of the FTC Act.
The FTC has already brought several enforcement actions regarding AI technology's collection of children's data, including a fine against Amazon after its voice-activated assistant, Alexa, allegedly collected children's voice recordings.
During a recent BBB National Programs webinar, FTC Senior Attorney Michelle Rosenthal said the FTC Act will remain the agency's best tool to combat AI advertising issues.
"Section 5 is incredibly malleable," Rosenthal said. "It gives us authority to enforce against unfair and deceptive business practices and that's going to apply no matter what technology we are dealing with."
Context is key
Images or videos generated with AI technology are not always harmful or deceptive when used in the appropriate context. "When we think about the technology and its ability to show products in ways that perhaps we've never seen before ... we want to ensure that the rules of the road apply and what that product appears to do it should actually do in real life," BBB National Programs Children's Advertising Review Unit (CARU) Director Rukiya Bonner said.
BBB National Programs Privacy Initiatives Senior Vice President Dona Fraser added that context "is always going to matter," as it differentiates between effectively endorsing a product and potentially targeting children with misleading or nonconsensual ads. Fraser indicated CARU's Self-Regulatory Guidelines for Children's Advertising outline the requirements advertising tools aimed at children must meet. AI-generated content must adhere to the same safety guidelines advertisers follow when marketing a product using real children, so that it does not unintentionally promote harmful behavior.
"If you have a child by a pool promoting a pool product there still needs to be an adult present whether they are AI-generated or otherwise," Bonner said.
However, Rosenthal indicated companies promoting products for children must assess "potential risks the technology could pose to vulnerable users" and take steps to mitigate those risks, including visual and audio AI disclaimers for transparency.
Advertisements using AI-generated content must also follow general guidelines for children's advertising, including the FTC's Self-Regulatory Principles for Online Behavioral Advertising, which detail standards for behavioral advertising, reasonable consumer data collection and transparency practices.
"I often tell people even though our advertising report was not focused on AI specifically ... If you read it I think you will see that all of the principals apply in this space as well. There's a lot of stuff that's quite relevant," Rosenthal said.
Social media influence
The presence of AI-generated influencers and avatars could increase as technologies and systems mature, according to Rosenthal. When those influencers and avatars are deployed, children with access to social media may not understand the difference between content created for entertainment and content created for advertising purposes.
Social media accounts that use AI-curated content are not necessarily unethical. However, when such an account endorses products, the advertisement must disclose that the entity was created using AI. Companies have used animations to promote products for years, though Rosenthal said AI-generated content may be different.
"I do think that a cartoon character, for example, that's part of (a company's) brand, is a little bit different than having an influencer that appears to be independent and appears to be providing an endorsement, for example, in an independent way," Rosenthal said.
Children who view content created by these accounts may "form these parasocial relationships with some of these entities and tend to trust them," Rosenthal said. With these relationships, children may feel inclined to provide more information to these accounts on social media, raising privacy concerns around how children could unknowingly provide information used to train AI models.
"There may be a world a few years from now where adults may not even be able to realize whether they are looking at an influencer who is real or literally created by AI," Rosenthal said. "As AI advances … I think it’s just going to create more challenges for kids trying to differentiate between what's real and what's not."
Lexie White is a staff writer for the IAPP.