This resource analyzes U.S. Federal Trade Commission enforcement actions between 2018 and 2024, offering guidance on compliance, summarizing major trends and providing deep dives into case studies of interest.
Published: May 2024
Until recently, the U.S. Federal Trade Commission offered little prescriptive guidance on data privacy and cybersecurity compliance. Instead, its enforcement actions have formed a common law of privacy, providing examples of the actions and behaviors it considers unreasonable practices. In 2014, the IAPP Westin Research Center analyzed 47 FTC actions spanning 12 years in detail, working backward from the descriptions of privacy and security practices alleged to violate the FTC Act to develop best practices and guidelines. In 2018, the IAPP revisited the series to trace the evolution of enforcement trends, examining 50 cases from 2014-2018.
These studies illustrate how FTC enforcement has grown from straightforward actions over misrepresentations in privacy policies and data transfer agreements to more technical allegations involving facial recognition technology, software development kit usage and expanded definitions of "unfair practices" and "sale of data." Accordingly, the privacy taxonomy continues to grow in complexity.
The IAPP has now analyzed 67 FTC enforcement actions brought between October 2018 and April 2024 in seven primary areas: children's privacy, health privacy, general privacy, data breaches and data security practices, vendor management and third-party access, employee data and management, and artificial intelligence governance. The analysis offers actions companies can take toward compliance, summarizes major trends in each area and dives deep into case studies of interest.
As noted, the IAPP constructed this case study series on the premise that the FTC has not historically provided much explicit compliance guidance to companies. That may be changing in light of the agency's growing use of its business guidance and technology blogs, along with the rulemaking signaled by its 2022 Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security. Through that public consultation, the FTC is working to promulgate regulations that reduce ambiguity for organizations seeking to establish strong privacy and security protocols.
Children's privacy
As recent FTC leadership has prioritized protecting the data of more vulnerable populations, children's online privacy has come into focus through 14 enforcement actions since October 2018. With robust — and currently evolving — rules already in place under the Children's Online Privacy Protection Act, children's privacy has ripened within the enforcement agenda as regulators endeavor to protect children online.
Alleged COPPA violations range from simply failing to observe notice-and-consent requirements to knottier issues involving machine learning algorithms and dark patterns. The FTC's COPPA enforcement provides inferred guidance that businesses offering online services, apps or products directed to child users should:
On notice and choice
- Notify parents about a site's or service's collection and use of children's personal information, accounting for the available technological means of doing so.
- Obtain verifiable parental consent before collecting personal information from children under 13 years old. "Verifiable parental consent" requires a method reasonably designed to ensure it is the child's parent who consents. In certain educational situations, consent may instead come from school administrators or teachers.
- Ensure children's personal information is private by default and that valid consent is given before open communication is enabled.
On data management and governance
- Adopt and adhere to strict policies for data minimization and retention.
- Delete children's data when requested by parents and when children's accounts are inactive.
- Not train algorithms on children's personal information without valid notice and consent under COPPA.
- Limit communication channels between child users and general users for mixed-audience sites or services.
- Implement an in-app purchasing protocol that requires sufficient checks and parental consent to authorize charges in children's games (see the sketch following this list).
- Prohibit third-party advertising networks from collecting personal information, such as persistent identifiers, to serve behavioral advertising if a site is directed to children under 13 years old.
- Periodically review service providers and apps to identify additional child-directed apps.
- Not misrepresent participation in an FTC-approved COPPA safe harbor program.
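To make the in-app purchase point concrete, the following is a minimal sketch, in Python, of a per-charge parental authorization gate of the kind the Epic Games action implies. The `Account` fields and `authorize_purchase` function are hypothetical illustrations, not terms from any FTC order.

```python
# Hypothetical sketch: gate in-app purchases in a child-directed game
# behind explicit parental authorization. Names and flow are illustrative.
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    is_child: bool                 # under 13, per COPPA
    parent_consent_on_file: bool   # verifiable parental consent obtained

def authorize_purchase(account: Account, amount_usd: float,
                       parent_approved_this_charge: bool) -> bool:
    """Return True only when the charge may proceed."""
    if account.is_child:
        # Stored consent alone is not enough; the Epic action faults
        # one-click configurations children could trigger, so require
        # per-charge approval as well.
        return account.parent_consent_on_file and parent_approved_this_charge
    return True  # adult accounts follow the ordinary billing path

# Example: a child's charge without fresh parental approval is refused.
child = Account("u1", is_child=True, parent_consent_on_file=True)
assert not authorize_purchase(child, 9.99, parent_approved_this_charge=False)
```

The design choice worth noting is that consent on file does not authorize a charge by itself; each transaction requires its own approval.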
Case study: US v. Epic Games
Subject: Children's privacy
In December 2022, the FTC filed two complaints against video game developer Epic Games over practices related to its popular online multiplayer game Fortnite. A federal complaint alleged Epic violated the COPPA Rule and the FTC Act, while a parallel administrative complaint alleged Epic illegally used dark patterns.
The federal complaint alleged Epic failed to notify parents and obtain their consent when collecting personal information from their children, pointing to Epic's marketing and user surveys to show the company knew children comprised much of its user base. Epic also allegedly created a confusing and unreasonable process for parents to request deletion of their children's personal information, at times simply failing to honor requests. In Fortnite, it enabled live text and voice communications by default and allowed children and teens to play against and communicate with strangers, creating an elevated risk of harassment and bullying.
The administrative complaint, meanwhile, spotlighted Epic's use of dark patterns, or deceptive or manipulative design features, to encourage in-game purchases. Epic allegedly maintained a confusing button configuration that led Fortnite players to incur unwanted charges. As a result, children frequently purchased in-game content without parental or cardholder consent. Epic further ignored more than 1 million user complaints and repeated employee concerns about these billing practices.
Businesses processing children's data and hosting accounts used by children should heed the FTC's Epic warnings: Children's privacy settings should not be public by default. Likewise, the FTC's increasing concern over dark patterns indicates businesses must avoid resorting to trickery to boost sales or accumulate personal information.
Health privacy
In the era of wearable devices and personal health mobile apps, the FTC has stepped up enforcement of health data privacy laws and regulations amid questions around the future of the Health Insurance Portability and Accountability Act and the lack of digital health data protections. The application of new enforcement tools, such as the Health Breach Notification Rule, has led businesses to reassess their digital health privacy practices.
The scope of what the FTC considers sensitive health information expanded under actions like GoodRx, where it clarified that elevated privacy protections apply to any data that could lead to inferences about sensitive health information, such as life expectancy, sexual and reproductive health, and disability. Businesses collecting or processing personal health information must accordingly heed guidance inferred from relevant actions, including recommendations to:
On notice and choice
- Disclose all purposes for which a health service or its third-party affiliates collect, maintain, use or disclose data.
- Obtain consumers' affirmative express consent before collecting, using and disclosing their health information to third parties.
- Notify consumers of changes in a privacy policy, including those allowing for broader third-party sharing of sensitive genetic or health data.
- Not misrepresent HIPAA compliance, via seal or otherwise.
On data disclosures
- Assume "sale" can also include sharing personal information with third-party advertisers and platforms.
- Employ measures to protect consumers' health information through controls on and monitoring of third-party access and use, particularly in light of technologies that enable disclosure, like software development kits (a sketch of one such control follows this list).
- Notify consumers and, when necessary, the FTC and media, about unauthorized disclosure of individually identifiable health information.
On data management and governance
- Operate using a broad definition of health-related data that can include direct inputs from users or even inferences from online activities like search queries and page navigations shared with advertising platforms.
- Not store sensitive information like genetic information on an unencrypted public server or alongside consumers' names and other identifying information.
- Properly inventory consumers' information to adequately process deletion requests.
- Destroy genetic samples after producing results, and contractually mandate service providers to abide by similar policies.
- Take prompt remedial action when alerted by third parties about security vulnerabilities.
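The SDK point above can be illustrated in code. Below is a minimal sketch, assuming a hypothetical event dictionary and `send_to_sdk` callable, of gating health-related fields on affirmative express consent before they reach a third-party tracking tool; the field names are invented for illustration.

```python
# Hypothetical sketch: block health-related fields from third-party
# analytics/advertising SDK calls absent affirmative express consent.
# Field names and the send_to_sdk callable are illustrative assumptions.
HEALTH_KEYS = {"cycle_length", "pregnancy_status", "fertility_window",
               "medication", "diagnosis"}

def share_event(event: dict, user_consented_to_health_sharing: bool,
                send_to_sdk) -> None:
    if user_consented_to_health_sharing:
        send_to_sdk(event)
        return
    # Strip anything that could support health inferences before it
    # leaves the first-party boundary.
    redacted = {k: v for k, v in event.items() if k not in HEALTH_KEYS}
    send_to_sdk(redacted)

# Example: without consent, only non-health keys reach the SDK.
sent = []
share_event({"app_open": 1, "pregnancy_status": "yes"},
            user_consented_to_health_sharing=False,
            send_to_sdk=sent.append)
assert sent == [{"app_open": 1}]
```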
Case study: US v. Easy Healthcare (Premom)
Subject: Health privacy
Easy Healthcare developed Premom, a mobile application that collected personal information on menstrual cycles, fertility and pregnancy and allowed users to import data from other apps. Easy Healthcare settled with the FTC over allegations that it deceptively shared users' sensitive personal information with third parties and shared health information for advertising purposes without obtaining consumers' affirmative express consent. The FTC further alleged that Easy Healthcare failed to notify consumers of these unauthorized disclosures, in violation of the HBNR, and failed to implement access controls to address privacy and security risks created by use of third-party automated tracking tools, like software development kits.
Premom illustrates several hallmarks of the modern FTC. It deploys a previously underutilized enforcement tool, the HBNR, to fill gaps in HIPAA privacy protections, establishing new protection for digital health data. The Premom complaint also exhibits a technical sophistication absent from past actions, including an exposition of SDKs and the responsibilities inherent in using them. Regardless of whether a business falls within the scope of existing health privacy laws, if it traffics in health data, it should study Premom to discern the responsibilities the FTC assigns to companies handling sensitive or personal health data.
Individual privacy, financial privacy and consumer reporting
Finance, consumer reporting, advertising technology and other personal data-centric businesses have been frequent enforcement targets as well. Where mature legislation like the Gramm-Leach-Bliley Act and the Fair Credit Reporting Act detail requirements for financial institutions and consumer-reporting agencies, respectively, their lessons can often retain wider applicability.
Analysis of relevant FTC enforcement trends instructs businesses to:
On notice and choice
- Notify customers of, or obtain consent for, human review of private personal information, including photo or video recordings. Do not bury this information in terms of service or a privacy policy.
- Clearly and conspicuously disclose all purposes for which a business may use, sell or share personal information, including, if relevant, disclosing that the business may sell data to governmental organizations.
- Honor all forms of consumer opt outs, including those made at the operating-system or browser level (a sketch of honoring one such browser signal follows this list).
- Obtain affirmative express consent if using sensitive personal information to train algorithms or other datasets.
- Avoid representing that data is "anonymized" when it is pseudonymized or can readily be reidentified, such as through use of unique persistent identifiers.
- If considered a "financial institution" under the GLBA, provide "clear and conspicuous" initial privacy notices to customers under the Privacy Rule, e.g., on the sign-up page. Grey text on a light grey background linking to the privacy notice does not adequately highlight it. The notice must be accurate, and businesses should require customers to acknowledge receipt when signing up.
On data disclosures
- When a business tracks, provides, sells or shares precise geolocation data, employ technical controls that prohibit third parties from identifying consumers or tracking them to sensitive locations, such as by using sensitive-location lists.
On data management and governance
- Ensure compliance with international data transfer agreements through principles such as purpose limitation and notice, and through deletion policies, such as deleting data the business no longer needs and deleting personal data "collected from users who requested deactivation of their accounts" or who request deletion.
- Assess default settings to ensure they align with statements made in the privacy notice and other outward representations.
- Follow reasonable procedures to assure maximum possible accuracy of consumer reports, such as developing in-house protocols to ensure data accuracy rather than relying on third parties, and regularly assessing queries and identifiers.
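On honoring browser-level opt outs, the Global Privacy Control specification transmits a user's preference through the `Sec-GPC` request header. The sketch below shows one plausible way to translate that signal into an opt-out flag; the `user_prefs` store is a hypothetical stand-in for a real preference system.

```python
# Hypothetical sketch: honor a browser-level opt out signaled through the
# Global Privacy Control "Sec-GPC" request header.
def apply_gpc_signal(headers: dict, user_prefs: dict) -> dict:
    """Treat Sec-GPC: 1 as an opt out of sale/sharing for this user."""
    if headers.get("Sec-GPC") == "1":
        user_prefs["sale_or_sharing_opt_out"] = True
    return user_prefs

# Example: a request carrying the signal flips the opt-out flag.
prefs = apply_gpc_signal({"Sec-GPC": "1"}, {"sale_or_sharing_opt_out": False})
assert prefs["sale_or_sharing_opt_out"] is True
```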
Cases analyzed
- Appfolio
- Avast Limited
- CafePress
- Chegg
- DCR Workforce
- EmpiriStat
- Flo Health
- InMarket Media
- Kochava
- LotaData
- Medable
- Miniclip
- mResource
- NTT Global Data Centers Americas
- Ortho-Clinical Diagnostics
- PayPal
- ReadyTech
- RealPage
- Ring
- Rite Aid
- SecurTest
- T&M Protection Resources
- Thru
- Trueface.ai
- TruthFinder
- X-Mode Social
Case study: In the Matter of Everalbum
Subject: Individual privacy, financial privacy and consumer reporting
Everalbum enabled consumers to upload photos and videos to its cloud servers and used facial-recognition technology to group photos by the individuals appearing in each photo. Most users were unaware of and unable to disable the face-recognition feature. Everalbum allegedly trained its own face technology using photos from users who did not disable the face recognition feature and failed to delete photos and videos of users who requested account deactivation, contradicting its privacy policy. The FTC consent order required Everalbum to disclose its biometric information policy, obtain affirmative express consent from users who provided biometric information, and delete information and models trained on data from users who deactivated their accounts.
Everalbum marks the first FTC case focused on policing facial recognition, an increasingly popular technology of deep concern to privacy advocates. It also spotlights what has quickly become the remedy du jour in the face of expanding algorithmic capacity: algorithmic disgorgement. The remedy has thus far been ordered in six cases: Cambridge Analytica in 2019, Everalbum in 2021, WeightWatchers in 2022, Ring in 2023, Rite Aid in 2023 and Avast in 2024.
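Disgorgement is only workable if a business can trace which models were trained on which data. The following is a speculative sketch of such lineage tracking, with all names invented for illustration; nothing here is prescribed by the FTC orders themselves.

```python
# Hypothetical sketch: record which datasets each model was trained on so
# that, if a dataset is found to be unlawfully collected, every affected
# model can be located for deletion. All names are illustrative.
from collections import defaultdict

class ModelRegistry:
    def __init__(self):
        self._trained_on = defaultdict(set)  # dataset_id -> model_ids

    def record_training(self, model_id: str, dataset_ids: list[str]) -> None:
        for ds in dataset_ids:
            self._trained_on[ds].add(model_id)

    def models_to_disgorge(self, tainted_dataset: str) -> set[str]:
        return set(self._trained_on[tainted_dataset])

# Example: flagging one dataset surfaces every model that consumed it.
reg = ModelRegistry()
reg.record_training("face_grouper_v2", ["user_photos_2019", "licensed_set"])
assert reg.models_to_disgorge("user_photos_2019") == {"face_grouper_v2"}
```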
Data security practices and breach incident response
FTC data security guidance has grown significantly more granular in recent years. In the drawn-out litigation of LabMD, the 11th U.S. Circuit Court of Appeals found the FTC's order unenforceable due to insufficient specificity, citing an "indeterminate standard of reasonableness" conveyed by the agency. Since that 2018 decision, consent orders have enumerated considerably more specific prescriptions for businesses, as first demonstrated by the Lightyear Dealer Technologies case.
FTC enforcement in the wake of data breaches follows a fairly consistent playbook featuring explicit and, at this point, routine security mandates. The agency does not tolerate businesses cutting corners or handling security incidents in bad faith. Practices discussed in the two earlier meta-analyses carry through to current guidance, with continued emphasis on strong passwords, strict access controls, encryption and other reasonable safeguards designed to mitigate identified risks.
With respect to data security, businesses should:
- Maintain adequate customer support for timely responses to concerns about compromised accounts.
- Send security notifications to consumers when account information changes or a new device is added to the consumer's account.
- Follow privacy and security restrictions and/or safeguards provided by third parties.
- Implement multifactor authentication for any account that provides access to personal information.
- Proactively maintain a protected infrastructure and system.
- Avoid storing critical database information on unsecured platforms, such as GitHub (see the pre-commit scanning sketch after this list).
- Perform vulnerability scanning and penetration testing of their networks.
- Promptly install patches and critical updates on their networks.
- Implement an intrusion detection and prevention system.
- Limit the locations to which third parties can upload unknown files to a network.
- Segment networks to ensure one client cannot access data from another.
- Implement technical measures to log and monitor personal information stored on or in transit through its networks and limit access to employees and vendors needing access as required by their position.
- Assess the sufficiency of any safeguards in place at least every 12 months and promptly following a security incident to address the risks to the security, confidentiality or integrity of personal information, and modify existing information security programs based on the results.
- Take advantage of free monitoring software where appropriate, such as when using Amazon Web Services' data management and storage services.
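Several of these prescriptions, such as keeping credentials off platforms like GitHub, can be partially automated. Below is a minimal sketch of a pre-commit secret scan; the regular expressions are illustrative and far from exhaustive, and a production setup would rely on a dedicated scanning tool.

```python
# Hypothetical sketch: a minimal pre-commit check that refuses to commit
# files containing likely cloud credentials, the failure mode behind the
# Uber GitHub key exposure. Patterns are illustrative, not exhaustive.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]{16,}"),
]

def scan_file(path: str) -> bool:
    """Return True if the file looks like it embeds a credential."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    return any(p.search(text) for p in SECRET_PATTERNS)

if __name__ == "__main__":
    flagged = [p for p in sys.argv[1:] if scan_file(p)]
    if flagged:
        print("Possible secrets found in:", ", ".join(flagged))
        sys.exit(1)  # nonzero exit blocks the commit in a pre-commit hook
```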
Case study: In the Matter of Uber Technologies
Subject: Data security practices and breach incident response
In May 2014, attackers gained unauthorized access to Uber drivers' personal information. In 2016, Uber suffered a second breach of drivers' and riders' data after attackers used a key posted by an Uber engineer on GitHub, a code-sharing website, to access consumer data stored on third-party cloud providers' servers. The breach exposed unencrypted files containing the names, email addresses, phone numbers and driver's license numbers of tens of millions of Uber riders and drivers.
After the attackers notified Uber that they compromised its databases and demanded ransom, Uber paid them USD100,000 through its "bug bounty" program, which is intended to financially reward individuals for responsible disclosure of security vulnerabilities.
Uber demonstrates the federal commitment to holding accountable those who fail to protect privacy and data security. The FTC complaint leaned on deception, alleging the company failed to live up to claims that it closely monitored employee access to rider and driver data and deployed reasonable measures to secure personal information stored on third-party cloud providers' servers. The agency admonished the company's evasiveness in failing to notify the FTC or consumers for a year following the second breach, a course of action that rarely ends well for FTC defendants.
Editor's note: A previous version of this article contained a sentence that misstated the convictions against Uber's Chief Security Officer, which are undergoing appeal. That sentence has been removed.
Service providers and third-party access
Recent FTC actions also reflect an appreciation for the complexity of the data ecosystem. Companies often share personal data from employees or customers with third parties to receive services and/or generate revenue. Downstream management of personal information shared with service providers or third parties has been a point of emphasis under the FTC. Accordingly, businesses are advised to:
- Adopt and implement a written information security policy to oversee third parties.
- Contractually require service providers to implement appropriate safeguards for personal information.
- Implement and designate employees to coordinate a vendor and service provider oversight program, including regular reassessment of all third parties handling personal information (a sketch of tracking such reassessments follows this list).
- Identify internal and external risks to customer personal information, and regularly monitor, test, evaluate and adjust safeguards meant to control those risks.
- Require service providers to implement and enforce policies and procedures to investigate and remediate incidents involving personal information.
- Take reasonable steps to select and retain service providers capable of obtaining consent for the collection of customer information and of testing, monitoring and maintaining appropriate safeguards for personal information, including the accuracy of AI systems.
- Conduct risk assessments for third-party developers and develop necessary mitigations. Companies must screen developers and their products before granting access to data, especially data users have designated as private.
- Be clear about how much control users have over their data and the extent to which the business grants access to third parties.
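The reassessment point above can be operationalized with even simple tooling. The sketch below tracks vendors and flags overdue reviews; the `Vendor` fields and the annual cadence are assumptions for illustration, not an FTC-prescribed interval.

```python
# Hypothetical sketch: a vendor inventory that flags third parties whose
# periodic reassessment is overdue. Field names and the 365-day cadence
# are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Vendor:
    name: str
    handles_personal_info: bool
    last_assessed: date

def overdue_reassessments(vendors, today=None, cadence_days=365):
    today = today or date.today()
    return [v.name for v in vendors
            if v.handles_personal_info
            and today - v.last_assessed > timedelta(days=cadence_days)]

# Example: a vendor last reviewed two years ago is flagged; a vendor that
# handles no personal information is not.
vendors = [Vendor("AnalyticsCo", True, date(2022, 4, 1)),
           Vendor("PrintShop", False, date(2020, 1, 1))]
assert overdue_reassessments(vendors, today=date(2024, 4, 1)) == ["AnalyticsCo"]
```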
Case study: In the Matter of BLU Products and Samuel Ohev-Zion
Subject: Service providers and third-party access
Mobile phone maker BLU contracted with China-based third-party service provider ADUPS Technology to "issue security and operating system updates" to its devices. The FTC alleged BLU and its president "failed to implement security procedures and to oversee the practices" of ADUPS, resulting in the collection of sensitive personal information without consumer knowledge and the creation of security vulnerabilities on consumer devices.
The settlement with BLU includes a mandated data security program that provides guidance applicable beyond the mobile device manufacturing industry, listing practices against which businesses may compare their existing data security programs.
BLU also continues a trend of naming individual defendants, typically high-ranking executives, as parties to an action and saddling them with extensive, ongoing obligations. Significantly, the portions of the consent order mandating action by BLU's president follow him to any company he owns or controls.
Employee data and management
Employees and contractors can create liabilities for businesses, both in terms of the personal information collected about them and the information they can access. Companies targeted by FTC enforcement in this area have often failed to implement or follow a privacy or data security program.
Based on actions involving employee data and employee management, businesses should:
- Maintain a written information security policy that includes employee training.
- Document and follow a schedule setting out what personal information the company collects, why it collects the information and when it will delete the information.
- Follow up with the employees responsible for patching vulnerabilities to verify completion.
- Designate an employee, preferably a senior executive, responsible for information security programs.
- Limit employee access to consumer data to what is necessary to perform a job function and/or implement measures to monitor and detect employees' access (see the sketch following this list).
- Extend privacy and security standards and expectations to the data of contract workers, such as drivers and delivery people.
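As a rough illustration of the access-limitation point, the sketch below gates record retrieval by job role and logs every attempt. The roles and record types are hypothetical.

```python
# Hypothetical sketch: gate employee access to consumer data by job role and
# write an audit record for every attempt, echoing the access-control and
# monitoring themes in Ring. Role names are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("access_audit")

ROLE_PERMISSIONS = {
    "support_agent": {"account_profile"},
    "security_engineer": {"account_profile", "device_video"},
}

def fetch_consumer_record(employee_id: str, role: str, record_type: str):
    allowed = record_type in ROLE_PERMISSIONS.get(role, set())
    log.info("employee=%s role=%s record=%s allowed=%s",
             employee_id, role, record_type, allowed)
    if not allowed:
        raise PermissionError(f"{role} may not access {record_type}")
    return {"record_type": record_type}  # placeholder for the real lookup

# Example: a support agent cannot pull camera footage.
try:
    fetch_consumer_record("e42", "support_agent", "device_video")
except PermissionError:
    pass
```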
Case study: FTC v. Ring
Subject: Employee data and management
Ring, an Amazon subsidiary selling home security camera systems, allegedly permitted employees to access customers' videos, used customer videos to train algorithms without consent and maintained insufficient security safeguards. Every employee and hundreds of third-party contractors had access to every video from the cameras, regardless of whether their positions required it. One employee viewed thousands of recordings of female users in intimate spaces, including bathrooms and bedrooms. In its complaint, the FTC emphasized businesses should not bury the purposes of data collection in privacy policies, and its order mandated stronger controls on and monitoring of employee access to consumer data.
The lessons of Ring instruct companies to not only protect personal information from outside actors, but also from those professionally responsible for internal data processing. The consent order provides insight into the FTC's requirements for employee access, largely mirroring the previously mentioned data security prescriptions. The FTC does not necessarily distinguish across worker classifications, either. Whether an employee or contractor has access to personal information is often immaterial — proper controls must be in place regardless.
AI governance
Of late, the FTC has brought AI under its spotlight to the extent that use of the technology may harm consumers. Early enforcement and guidance involving the emerging technology have applied existing frameworks for unfair and deceptive practices in other areas, including data privacy and security, to automated decision-making systems and machine-learning technologies. In turn, AI governance professionals must heed many of the lessons discussed herein. Failures to adhere to tenets of privacy program management, such as providing notice, obtaining consent, systematically assessing risk and honoring opt-outs, have been the basis of AI-related actions and provide insight into the FTC's priorities for regulating novel technologies.
With only five AI-related actions as of March 2024, the FTC has merely dabbled in regulating AI use. Nonetheless, its actions thus far offer valuable insight for businesses in the AI industry and for businesses looking to incorporate AI and other novel technologies into their products and services. Based on the FTC's nascent AI common law, businesses are advised to:
- Provide consumers with notice and a means of submitting complaints related to the outputs of automated decision-making systems, such as facial-recognition technology systems.
- Not automatically apply facial-recognition technologies to users' content unless they affirmatively choose to activate such a feature.
- Not train algorithms on sensitive personal information, including children's personal information or customer videos, without valid notice and consent.
- Not make representations about the use of AI or algorithms without appropriate substantiation.
- Implement and enforce controls for system quality. This may include testing and assessing pre- and post-deployment accuracy, tracking false positives, monitoring input quality and considering risks associated with the systems, such as misidentification based on characteristics of users or consumers (a sketch of false-positive monitoring follows this list).
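As one plausible shape for that monitoring, the sketch below computes false-positive rates per subgroup and flags any group exceeding a threshold; the threshold and group labels are illustrative assumptions, not values from the Rite Aid order.

```python
# Hypothetical sketch: track a match system's false-positive rate per
# subgroup and alert when any group exceeds a threshold, the kind of
# ongoing accuracy monitoring the Rite Aid order contemplates.
from collections import Counter

def false_positive_rates(events):
    """events: iterable of (group, was_match_alert, was_actually_correct)."""
    alerts, fps = Counter(), Counter()
    for group, alerted, correct in events:
        if alerted:
            alerts[group] += 1
            if not correct:
                fps[group] += 1
    return {g: fps[g] / alerts[g] for g in alerts}

def needs_review(rates, threshold=0.05):
    return {g: r for g, r in rates.items() if r > threshold}

# Example: a subgroup with a 50% false-positive rate is flagged for review.
events = [("group_a", True, True), ("group_a", True, False),
          ("group_b", True, True), ("group_b", True, True)]
assert needs_review(false_positive_rates(events)) == {"group_a": 0.5}
```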
Case study: Rite Aid
Subject: AI governance
In perhaps its deepest incursion into AI governance enforcement to date, the FTC alleged drug store chain Rite Aid used AI-based facial recognition and automated biometric detection systems to surveil patrons and identify those it deemed likely to engage in shoplifting and other criminal behavior. Rite Aid implemented an app that alerted employees to take action against individuals whose faces matched database images of people the company suspected of prior wrongdoing.
Despite countless false positives and other deficiencies, the business relied on the technology for eight years, resulting in emotional, reputational and other harms to its customers. As an example, a Black woman was asked to leave the store following an alert that indicated a match to "a white lady with blonde hair." Elsewhere, an 11-year-old girl was stopped and searched based on a false positive match, causing her mother to miss work due to the child's distress.
The Rite Aid case cautions businesses to develop and maintain AI governance programs tailored to their industries with consumers in mind. The chain failed to take reasonable measures to anticipate and prevent foreseeable harms that might, and later did, befall its customers. Many of these harms could likely have been prevented, mitigated or remedied by assessing the risks associated with the technology's use, testing or assessing its accuracy before deployment and regularly thereafter, monitoring input and output quality, and training and overseeing the employees tasked with its operation. Had Rite Aid engaged in deliberate and regular data and technology governance, it might have avoided many of the pitfalls that invited regulatory scrutiny.
Wrap-up
Entering its 110th year, the FTC continues to evolve in how it approaches consumer privacy protection and sets its enforcement priorities. Its development of technical expertise has given the agency a deeper understanding of emerging industries, demonstrated by increasingly frequent references to dark patterns, adtech tools and, of course, AI. It has recognized the vulnerable contours of privacy, concentrating on safeguarding more sensitive categories of data, namely information related to children, health and personal finances. Its jurisdiction has permeated business relationships and supply chains, extending its influence into third-party access and service provider management. And it has expanded the enforcement tools at its disposal by creatively using the HBNR and GLBA, naming individual defendants, and demanding algorithmic deletion or disgorgement.
These trends will doubtless continue as the FTC ushers businesses toward its conception of reasonable data privacy and security compliance through enforcement, regulation and guidance.
Additional resources
- Previous editions of this report
- FTC resources
- Privacy enforcement resources
- US federal privacy resources