
Biased AI systems face the music: Analyzing the FTC's Rite Aid enforcement


For years, the U.S. Federal Trade Commission has been the primary data privacy enforcer in the U.S. On 19 Dec., the consumer protection agency made its first major foray into artificial intelligence bias and discrimination, settling a complaint against Rite Aid regarding the company's use of facial recognition technology for retail theft deterrence.

This groundbreaking case includes many lessons for privacy and AI governance professionals, including the agency's first roadmap for conducting reasonable bias mitigation of AI systems. Though FTC orders are binding only on the subject company, understanding prior enforcement actions serves as a lodestar for companies that wish to avoid regulatory scrutiny.

In its complaint, the FTC took pains to describe the alleged unfair and discriminatory outcomes from Rite Aid's practices. When consumers can be questioned, detained or banned from stores by automated systems, the FTC said, it is vital that companies ensure this occurs in a transparent and fair manner, after efforts to limit false positives and mitigate disparate outcomes for certain groups.

As a result of this FTC action, Rite Aid is prohibited from using facial recognition technologies for five years. After this time, if it chooses to deploy the technology again, it must do so subject to the terms of a detailed governance program that the FTC laid out in its order.

A match made in error

Rite Aid is the third-largest U.S. drugstore chain, operating more than 2,000 retail pharmacies. According to the FTC's complaint, from 2012 to 2020 Rite Aid used facial recognition technology in many of these locations to identify customers entering the store who were deemed likely to shoplift or engage in other criminal behavior, based on photos of suspected wrongdoers previously enrolled in the matching system.

The FTC complaint focuses on Rite Aid's alleged insufficient AI governance practices during multiple stages of deploying unnamed third-party vendors' facial recognition systems to its retail stores. The FTC's concerns include misgivings about Rite Aid's:

  • Vendor selection, including an alleged lack of oversight and diligence in requesting information about accuracy and reliability of the deployed systems.
  • Enrollment process. When face images were placed into the facial matching system, the company allegedly failed to account for the reduced accuracy that comes with low-quality images: it enrolled many low-quality images from a wide range of sources, trained store-level employees to "push for as many enrollments as possible" and retained enrolled images indefinitely.
  • Match alert process. The company allegedly did not include confidence values on the match alerts sent to store employees when a customer was identified as a potential match present in the store (a sketch of what a confidence-aware alert could look like follows this list).
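The complaint does not describe Rite Aid's systems in technical detail, but the confidence-value concern is straightforward to illustrate. The sketch below shows one way a match alert could carry its similarity score and suppress low-confidence matches entirely; the threshold, names and data shapes are illustrative assumptions, not details drawn from the complaint or order.

```python
# Hypothetical sketch: a match alert that carries its confidence value and
# is suppressed entirely below an assumed threshold. Nothing here reflects
# Rite Aid's actual implementation.
from dataclasses import dataclass
from typing import Optional

ALERT_THRESHOLD = 0.90  # assumed minimum similarity score worth alerting on


@dataclass
class MatchAlert:
    enrolled_image_id: str
    similarity: float  # matcher's similarity score, assumed to be in [0, 1]

    def message(self) -> str:
        # Surface the confidence value so employees can weigh the alert.
        return (f"Possible match to enrollment {self.enrolled_image_id} "
                f"(confidence: {self.similarity:.0%}). Verify before acting.")


def maybe_alert(enrolled_image_id: str, similarity: float) -> Optional[MatchAlert]:
    """Return an alert only when the match clears the confidence threshold."""
    if similarity < ALERT_THRESHOLD:
        return None  # low-confidence matches are never shown to staff
    return MatchAlert(enrolled_image_id, similarity)
```

Under these assumptions, maybe_alert("enr-0042", 0.72) would return None rather than paging an employee, while a 0.95 match would produce an alert that displays its own confidence.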

The FTC alleged thousands of false matches led to real-world harms to Rite Aid's customers, including subjecting them to increased surveillance, banning them from entering or making purchases, publicly and audibly accusing them of past criminal activity in front of friends, family, acquaintances and strangers, detaining them, subjecting them to searches, and calling the police to report their engagement in criminal activity.

In analyzing the alleged outcomes by race and gender, the FTC concluded, "As a result of Rite Aid's failures, Black, Asian, Latino, and women consumers were especially likely to be harmed by Rite Aid's use of facial recognition technology."

In a separate statement released along with the order, Commissioner Alvaro Bedoya described some of the most egregious outcomes and why "biased face surveillance hurts people."

Reasonable measures to prevent harm

Just as FTC enforcement has reflected and clarified best practices for reasonable security, the Rite Aid case shows how the agency will help refine expectations for AI governance practices around certain types of AI systems.

Transparency is a key value in privacy and responsible AI. Failure to disclose risks of harm to consumers can also lead to violations of Section 5 of the FTC Act. So, companies that use facial recognition in physical stores should, at a minimum, consider disclosing the practice to consumers. In its complaint, the FTC alleged Rite Aid "specifically instructed employees not to reveal Rite Aid's use of facial recognition technology to consumers or the media."

Moving forward, after the five-year ban lifts, if Rite Aid chooses to implement a new "biometric security or surveillance system," as the FTC referred to it, the company will be obligated to inform people when they are enrolled in the system and provide a mechanism to contest their enrollment.

Beyond this straightforward baseline, the FTC's consent order reveals the other best practices it will expect from Rite Aid — and from similar uses of biometric systems in the future. Bedoya called the detailed order a "strong baseline for what an algorithmic fairness program should look like."

Conduct preassessments

To consider and address foreseeable harms to consumers flowing from the use of a biometric technology, Rite Aid will be required to conduct a written system assessment of risks "including, at a minimum, risks that consumers could experience physical, financial, or reputational injury, stigma, or severe emotional distress in connection with inaccurate outputs." This assessment must include:

  • An analysis of potential adverse consequences.
  • Documentation of accuracy testing.
  • Data factors and components of the system that could impact accuracy.
  • A review of standard industry practice.
  • The methods by which algorithms comprising the system were developed "and the extent to which these methods increase the likelihood that inaccurate outputs will occur or will disproportionately affect consumers depending on their race, ethnicity, gender, sex, age, or disability status."
  • The deployment context, including demographics of areas surrounding deployed stores.
  • Policies and procedures governing the operation of the system.
  • The extent of training for operators of the system.
  • The extent to which the system suffers from different rates of accuracy depending on consumers' characteristics, alone or in combination (see the disaggregated testing sketch after this list).
  • The extent to which consumers are able to avoid the system, including by opting out, without losing access to services.
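Several of these items, particularly the accuracy-by-characteristics assessment, translate naturally into disaggregated testing. The following is a minimal sketch of such an analysis, assuming a labeled evaluation set tagged with demographic groups; the data shape, metric and disparity tolerance are illustrative assumptions rather than requirements from the order.

```python
# Hypothetical sketch: false-positive rates disaggregated by demographic
# group, with a simple disparity flag. Group labels, data shape and the
# tolerance ratio are assumptions for illustration.
from collections import defaultdict


def false_positive_rates(results):
    """results: iterable of (group, predicted_match, actually_matched) tuples."""
    fp = defaultdict(int)         # false positives per group
    negatives = defaultdict(int)  # non-matching consumers seen per group
    for group, predicted, actual in results:
        if not actual:
            negatives[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / n for g, n in negatives.items() if n}


def disparity_report(rates, tolerance=1.25):
    """Flag groups whose FPR exceeds the best group's by an assumed ratio."""
    best = min(rates.values())
    if best == 0:
        return {g: r for g, r in rates.items() if r > 0}
    return {g: r for g, r in rates.items() if r / best > tolerance}
```

A written assessment would record not just the flagged groups but the reasoning behind the chosen metric and tolerance.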

Test for accuracy and reliability and then implement proper safeguards

Rite Aid will be required to test and assess the accuracy of covered systems before and after deployment, and to implement, maintain and document the safeguards designed to control for the risks identified. Notably, the vendor whose system was in use in the Rite Aid matter allegedly included a disclaimer that it made no representations or warranties as to the accuracy or reliability of the system. Testing of the system must occur before deployment and at least annually thereafter, with specific requirements around documentation.
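The order's before-deployment and at-least-annual testing cadence pairs naturally with a documented acceptance gate. The sketch below assumes a precision metric, an acceptance bar and a JSON-lines log format purely for illustration; the order mandates testing and documentation but not these particular mechanics.

```python
# Hypothetical sketch of a documented accuracy gate run before deployment
# and at least annually. The metric, acceptance bar and log format are
# assumptions for illustration.
import json
from datetime import date

MIN_PRECISION = 0.98  # assumed acceptance bar for deployment


def precision(predictions: list, labels: list) -> float:
    """Share of alerts that were correct: TP / (TP + FP)."""
    tp = sum(1 for p, a in zip(predictions, labels) if p and a)
    fp = sum(1 for p, a in zip(predictions, labels) if p and not a)
    return tp / (tp + fp) if (tp + fp) else 0.0


def gate_and_document(predictions, labels, log_path="accuracy_log.jsonl"):
    """Append a dated test record, and block deployment below the bar."""
    result = {
        "date": date.today().isoformat(),
        "precision": precision(predictions, labels),
    }
    result["passed"] = result["precision"] >= MIN_PRECISION
    with open(log_path, "a") as f:
        f.write(json.dumps(result) + "\n")  # durable documentation trail
    if not result["passed"]:
        raise RuntimeError(f"Accuracy gate failed: {result['precision']:.3f}")
    return result
```

The log is the point: every test run leaves a dated, reviewable record whether it passes or fails, which is the kind of documentation the order demands.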

Annual employee training and ongoing monitoring

Rite Aid will be required to train operators of its systems, at least annually, to understand AI governance risks and best practices, including methodologies for interpreting the validity of outputs, overviews of types of biases, known limitations of systems and the requirements of the FTC order. Employee performance against these and other metrics will need to be documented and reviewed.

Establish calibrated enrollment policies

Quality data inputs lead to quality data outputs. When a system's accuracy is known to depend on input quality, as with facial recognition technologies, deployment policies and procedures should ensure newly enrolled images meet the quality needed for accurate matching. To mitigate bias, companies should create and enforce written image quality standards necessary for the technology to function properly, as in the sketch below. Consistent with general privacy and security practices, the company will also be required to create retention limits for biometric information.
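What written image quality standards look like in practice will vary by vendor, but the basic shape is a hard gate at enrollment time plus a retention clock. The sketch below assumes resolution and sharpness thresholds and a one-year retention period purely for illustration; the order requires standards and retention limits but does not set these numbers.

```python
# Hypothetical sketch: enforce minimum image quality before enrollment and
# purge enrollments past a retention limit. All thresholds are assumptions.
from datetime import datetime, timedelta

MIN_WIDTH, MIN_HEIGHT = 480, 640  # assumed minimum resolution in pixels
MIN_SHARPNESS = 100.0             # assumed floor for a blur metric
                                  # (e.g., variance of the Laplacian)
RETENTION = timedelta(days=365)   # assumed retention limit


def meets_quality_standard(width: int, height: int, sharpness: float) -> bool:
    """Reject images too small or too blurry to match reliably."""
    return (width >= MIN_WIDTH
            and height >= MIN_HEIGHT
            and sharpness >= MIN_SHARPNESS)


def purge_expired(enrollments, now=None):
    """enrollments: dicts with an 'enrolled_at' datetime; drop stale ones."""
    now = now or datetime.now()
    return [e for e in enrollments if now - e["enrolled_at"] <= RETENTION]
```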

Implement clear notices and complaint procedures for customers

If Rite Aid ever deploys a similar system, it will be required to provide written notice to individuals whose biometric information is enrolled in the system. Separate notice is also mandated any time the system is used to take an action "that could result in physical, financial, or reputational harm to the consumers, including in connection with communications of the output to law enforcement or other third parties, unless unable to provide the notice due to safety concerns or the nature of a security incident relating to the output." Any complaints received must be responded to substantively within 30 days.

Mandatory information security program for covered biometrics

The FTC provided a detailed list of expected safeguards for Rite Aid's data security program. The new settlement revises a 2010 data security order against Rite Aid; the FTC alleged the company failed to comply with the terms of that order with regard to biometric data.

A widening net of takeaways

This will likely be the first of many AI bias enforcements from the FTC, and its lessons will apply in concentric rings to the governance of many types of AI systems, whether related to biometric information or not.

Retail companies that similarly deploy facial recognition technology should examine the Rite Aid order closely and ensure that they keep up with the FTC's clarified expectations — including at a minimum disclosing the use of facial recognition. 

For any company that uses biometrics, this also serves as a stark warning as the first public enforcement from the FTC after its May policy statement on the misuse of consumers' biometric information. It is worth noting again that the FTC's definition of biometric information, at least in its policy guidance, is quite broad and could include systems that may at first seem more mundane than those at issue in this case. 

For a broader array of AI systems, this order could serve as a template for the best practices the FTC will expect from companies deploying systems that carry foreseeable risks of harm to consumers. It provides a real-world case study to build on the FTC's report on AI harms, which focuses on concerns that AI tools can be inaccurate, biased and discriminatory by design. It also dovetails significantly with the emerging best practices for AI governance in the U.S., from the National Institute of Standards and Technology AI Risk Management Framework to the recent Office of Management and Budget guidance on federal agency use of AI systems.

We have long relied on the "common law" of privacy enforcement to help guide industry best practices. Years from now, it is likely the Rite Aid case will stand as the first foray into a common law of AI governance.

