A view from DC: State attorneys general allege addiction through data collection

Meta is facing a major enforcement action from 42 U.S. attorneys general, focused on alleged mental health harms to young people facilitated by the company's platforms. This represents the culmination of an investigation that was first announced in 2021.

The multifront coordinated offensive against Meta includes a 233-page complaint filed by 33 states in the U.S. District Court for the Northern District of California, plus separately filed suits in state courts in Massachusetts, Mississippi, New Hampshire, Oklahoma, Tennessee, Utah, Vermont and the District of Columbia. Rounding out the bunch, Florida filed its own federal lawsuit in the U.S. District Court for the Middle District of Florida.

Colorado, one of the states leading the effort, summarized the lawsuit as alleging Meta "knowingly designed and deployed harmful features on Instagram and its other social media platforms that purposefully addict children and teens. At the same time, Meta falsely assured the public that these features are safe and suitable for young users."

The arguments in the complaint for enforcement under state unfair and deceptive practices acts — so-called mini-FTC acts — are novel. In general, they are unrelated to privacy or data protection harms, focusing instead on youth mental health and the addictive nature of platform-based content feeds.

But personal data nonetheless plays a major role in the allegations. As one heading in the complaint alleges, the company "monetizes young users' attention through data harvesting and targeted advertising." The argument places inferred user interests at the core of the allegation that Meta's platforms are designed to maximize engagement, leading to addictive behavior. As the complaint puts it, "The Recommendation Algorithms use data points, or 'signals,' harvested from individual users to choose and/or arrange each new piece of content to display to a user. Such signals include, but are not limited to, overt actions like Liking a post or following a page as well as such unconscious actions such as lingering on—but not otherwise engaging with—certain content or visiting but not following another user's page."
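To make the complaint's description concrete, here is a minimal, purely illustrative sketch of signal-based feed ranking. Every signal name and weight below is a hypothetical stand-in; nothing here reflects Meta's actual systems.

```python
# Hypothetical sketch of "signals"-driven ranking, in the spirit of the
# complaint's description. Signal names and weights are invented for
# illustration; this is not Meta's recommendation system.

# Overt signals (e.g., Liking a post) and passive signals (e.g., dwell
# time on content the user never engages with) both feed the score.
SIGNAL_WEIGHTS = {
    "liked": 3.0,            # overt: user tapped Like
    "followed_author": 2.0,  # overt: user follows the page
    "dwell_seconds": 0.1,    # passive: lingering without engaging
    "profile_visits": 0.5,   # passive: visiting but not following
}

def engagement_score(signals: dict) -> float:
    """Collapse a user's harvested signals into one ranking score."""
    return sum(weight * signals.get(name, 0.0)
               for name, weight in SIGNAL_WEIGHTS.items())

def rank_feed(candidates: list) -> list:
    """Order candidate posts by inferred interest, highest first."""
    return sorted(candidates,
                  key=lambda post: engagement_score(post["signals"]),
                  reverse=True)
```

The point of the sketch is that passive behavior alone, with no deliberate action by the user, is enough to reorder what they see next.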

According to the federal complaint, recommendation algorithms, as designed, are addictive because:

  • They allegedly "present material to young users in an unpredictable sequence" consistent with the psychological concept of variable reward schedules.
  • They allegedly "weaponize" data "harvested" from young users to "capture and keep their attention" without effectively apprising them of this fact.
  • They allegedly present "psychologically and emotionally gripping content" periodically throughout the feed to "provoke intense reactions" and keep users engaged.
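The "variable reward schedule" concept from the first bullet can be sketched in a few lines. This toy interleaver, with invented parameters, shows the gist: gripping items arrive at randomized rather than fixed intervals, so the user never knows when the next one is coming. It illustrates the psychological concept only, not any real platform's feed logic.

```python
import random

# Toy illustration of a variable reward schedule in a content feed.
# The mean gap and item lists are invented; no real platform's logic
# is represented here.

def interleave_gripping(ordinary, gripping, mean_gap=4, seed=None):
    """Insert 'gripping' items at randomized intervals so their
    arrival is unpredictable, mimicking a variable-ratio schedule."""
    rng = random.Random(seed)
    feed, queue = [], list(gripping)
    gap = rng.randint(1, 2 * mean_gap - 1)  # variable, not fixed
    for item in ordinary:
        feed.append(item)
        gap -= 1
        if gap <= 0 and queue:
            feed.append(queue.pop(0))
            gap = rng.randint(1, 2 * mean_gap - 1)
    return feed

# Example: twelve ordinary posts with three gripping items mixed in.
print(interleave_gripping(list(range(12)), ["G1", "G2", "G3"], seed=7))
```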

Personal data inferred from user behavior fuels algorithmic recommendations, which drive engagement. It is a simple argument, and one that may equally apply to a number of other popular digital content platforms. But fitting the alleged harms of such policies into a legal cause of action is a more complex matter. For one thing, under state UDAP statutes, the attorneys general will need to make a showing of materiality. If the company's public statements about the purposes of data collection or the addictive nature of its platforms are deceptive, they must also be so in a way that is material to the user's decision to use the platform.

Unfortunately, major portions of each complaint are redacted, so it is impossible to fully weigh the strength of the evidence against Meta.

Notably, the allegations in this state-led investigation do not closely track those made by the U.S. Federal Trade Commission, despite its continued oversight of Meta, most recently in its decision to re-open its 2020 consent agreement with the company. The closest analogue comes in the first provision of the FTC's proposed order modification, which would impose a "prohibition on the handling of covered information from youth users." Though the FTC has not released detailed findings of fact related to young users, the proposed order would specifically prohibit Meta from collecting, using or selling personal data for "purposes of developing, training, refining, improving, or otherwise benefiting Algorithms or models; serving targeted advertising; or enriching Respondent’s data on Youth Users."

This is the same provision FTC Commissioner Alvaro Bedoya expressed concern about in a statement at the time the case was reopened, cautioning the FTC cannot modify a consent order unless it identifies "a nexus between the original order, the intervening violations, and the modified order." As Bedoya highlights, the company’s alleged privacy shortcomings in the FTC's findings of fact — also heavily redacted — do not focus on issues that particularly affect users under age 18. Perhaps, in narrowing the lens to focus on youth harms, the attorneys general are pursuing a path that the FTC would need to open its own new investigation in order to fully vet.

There is another major set of allegations in the joint complaint with a more direct privacy nexus. The state attorneys general allege that Meta has violated the Children's Online Privacy Protection Act by failing to obtain verifiable parental consent for child users. Again, it is notable the FTC has not publicly raised these COPPA claims, despite the agency's preference for bringing actions under laws like COPPA that provide for direct fining authority. However, COPPA empowers state attorneys general as equal enforcers of the law.

The lack of verifiable parental consent is only a COPPA violation if one of two things is true: either Meta's platforms are directed to children, or the company has actual knowledge of child users. The company has long maintained that its policy disallowing users under 13, and its evolving procedures to enforce that policy, render COPPA inapplicable to Instagram and Facebook.

In the federal complaint, the attorneys general allege otherwise, outlining two separate arguments: that Meta has actual knowledge its platforms are popular with kids, and that the platforms may even be directed to children through external advertising, empirical audience composition or the inclusion of child-directed accounts like My Little Pony. Although the relevant details are redacted, the attorneys general appear to be casting doubt on the effectiveness of Meta's age-gating mechanism, its reporting and account deletion procedures and its avoidance of more robust age verification methods.
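For readers unfamiliar with why self-declared age gates draw skepticism, here is a minimal hypothetical sketch. The check trusts whatever birth date a user enters, which is exactly the weakness the complaint appears to press on; the names and logic below are assumptions for illustration only, not Meta's code.

```python
from datetime import date

# Hypothetical self-declaration age gate; not any platform's actual
# code. COPPA's parental consent duties attach to users under 13.
MINIMUM_AGE = 13

def years_old(birth_date, today=None):
    """Whole years elapsed since birth_date."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def may_register(self_reported_birth_date):
    """Gate signup on the user's *self-reported* birth date. Nothing
    stops a child from typing a false date, which is why plaintiffs
    question whether such gates negate 'actual knowledge'."""
    return years_old(self_reported_birth_date) >= MINIMUM_AGE

print(may_register(date(2015, 6, 1)))  # False: reported age under 13
```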

The details are still murky, but privacy pros will be closely watching the contours of these enforcement actions. The cases finally put flesh on the bones of enforcement interest surrounding the addictive nature of algorithmic feeds and the role that personal data plays in fueling over-engagement. Regardless of the outcome, the beginning of this saga sends a strong signal that regulators are looking for platforms to exercise enhanced efforts to protect young users from avoidable harms.

Here's what else I’m thinking about:

  • Eyebrows are raised over U.S. digital trade policy tweaks. The U.S. Trade Representative has reportedly withdrawn its named sponsorship of some of the digital trade clauses being debated through the ongoing World Trade Organization joint statement initiative on e-commerce. Although such a move does not signal a complete change in policy, it does leave room for the U.S. to agree to different outcomes. A USTR representative said the decision was meant to avoid harming "domestic policy considerations" while many countries "are examining their approach to data and source code." This comes after some in Congress criticized the USTR's approach to digital trade, including in the ongoing Indo-Pacific Economic Framework negotiations, saying it could tie Congress's hands on domestic competition regulation. The U.S. has long supported trade rules that discourage data localization and support the free flow of data, most recently in terms agreed in the 2020 U.S.-Mexico-Canada Agreement.
  • The White House readies its big AI executive order. Multiple outlets reported the long-anticipated executive order will be released on 30 Oct. The executive order will include a wide range of policy adjustments, within the limits of executive branch power. This means it will focus on policies that apply to government agencies, such as updated AI assessment rules, though changes to immigration policies to retain AI talent and rules for contractors selling AI systems to the federal government are also expected. A related update to AI policy from the Office of Management and Budget is also expected soon. According to Federal News Network, which reviewed parts of an early draft, OMB will lay out "about 10 requirements from naming a new chief AI officer, to developing a publicly released AI strategy, to convening an AI governance board to putting some much-needed guardrails around the use of generative AI." Draft OMB guidance for public comment was expected over the summer, so it would not be surprising if it were released alongside the executive order.
  • The IAPP hired a Managing Director for the AI Governance Center. I am excited to welcome my counterpart, Ashley Casovan, who will be leading our efforts to define, promote and improve the emerging profession of AI governance. Our editorial team profiled Casovan, who comes to us after leading the Responsible AI Institute. Caitlin Fennessy hosted an engaging LinkedIn Live conversation with her.

Upcoming happenings

  • 2 Nov. at 18:00 EDT: CDT hosts its annual Tech Prom (The Anthem).

Please send feedback, updates and unredacted complaints to cobun@iapp.org.

