This spring, Maryland passed a comprehensive consumer privacy law that comes closer to a true ban on certain data processing activities than anything we have yet seen from a legislature.
At first glance, the prohibition is as strict as it is simple: A controller "may not sell sensitive data." Full stop. There are no legal bases and no exceptions — not even individual consent — listed within section 14-4607 of the Maryland Online Data Privacy Act.
The definition of sensitive data clarifies that the ban on sales covers genetic and biometric data; personal data of a consumer the controller knows "or has reason to know" is a child under age 13; precise geolocation data, with a precision threshold of a 1,750-foot radius; and data revealing racial or ethnic origin, religious beliefs, consumer health data (anything used to identify a consumer's "physical or mental health status," plus data related to gender-affirming treatment or reproductive or sexual health care), sex life, sexual orientation, status as transgender or nonbinary, national origin, and citizenship or immigration status.
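For privacy engineers, that 1,750-foot figure is the kind of threshold that ends up hard-coded in a data classification pipeline. As a rough, hypothetical sketch only — the function name and the accuracy-radius framing are my own assumptions, not statutory language — a screening check might look like this:

```python
PRECISE_RADIUS_FEET = 1_750  # precision threshold in Maryland's sensitive data definition

def is_precise_geolocation(accuracy_radius_feet: float) -> bool:
    """Treat location data as precise geolocation, and therefore sensitive,
    when it can identify a consumer's location within a 1,750-foot radius."""
    return accuracy_radius_feet <= PRECISE_RADIUS_FEET

# A GPS fix accurate to ~30 feet would be sensitive; a ZIP-code centroid
# accurate only to ~3 miles (15,840 feet) would not.
print(is_precise_geolocation(30))      # True
print(is_precise_geolocation(15_840))  # False
```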
In addition, the statute includes a total ban on the sale of minors' personal data, using an expanded knowledge threshold like the one for children: "if the controller knew or should have known" that the consumer is under the age of 18.
Has 'ban it' arrived?
For many years, privacy scholars and advocates have questioned the effectiveness of the notice and choice regime, the prevailing structure of privacy law that places individual control at the center of the governance of personal data.
Though individual autonomy and control are core principles of data privacy and these goals have not faded in importance, many argue that an overreliance on these principles leads to privacy failures. Consumers suffer from "consent fatigue" when bombarded by ubiquitous choices. And imbalances of power between firms and consumers make it all too easy to nudge consumers toward self-detrimental choices.
Instead, advocates have fought for the rise of another core privacy principle: data minimization. As Eric Null, co-director of the Center for Democracy and Technology's Privacy and Data Project, argued, strong data minimization requirements like those included in recent comprehensive federal privacy bills would ensure "companies collect only data that is necessary to provide the product or service an individual requested. The strictest definition would disallow collecting data beyond the absolutely critical. More appropriate definitions allow data collection for other specified, allowable purposes, including to authenticate users, to protect against spam, to perform system maintenance, or to comply with other legal obligations."
Notably, as Future of Privacy Forum Policy Counsel Jordan Francis, CIPP/E, CIPP/US, CIPM, has already explored, strong data minimization standards are on the rise in state privacy laws, including Maryland, and are part of the table stakes for federal proposals like the American Privacy Rights Act.
But there's an even stricter alternative embraced by other scholars and advocates in certain situations: outright bans of risky technologies or processing activities. Where data minimization can be critiqued for establishing debatable legal thresholds, bans have the virtue of simplicity.
It's the difference between a "NO PARKING" sign and a signpost riddled with complex and conflicting hourly, daily and weekly parking restrictions. Though, admittedly, I have yet to see a parking sign that requires the completion of an impact assessment.
Professors Woodrow Hartzog and Evan Selinger have been prominent advocates of the "ban it" approach, particularly concerning facial recognition technology. In their work, they argue facial recognition is inherently dangerous and invasive, posing significant risks to privacy and civil liberties, and they emphasize that the technology's potential for abuse outweighs any benefits it might offer.
Hartzog and Selinger's arguments are rooted in the belief that some technologies are simply too harmful to regulate effectively and should be banned outright.
Though we have yet to see such a ban for facial recognition technology, perhaps the idea of data processing bans is catching on among state policymakers.
But wait, is that a loophole?
As always, the devil is in the details. While Maryland's ban looks strict on its face, the law's definition of "sale of personal data" includes exceptions that water down the total ban, including through language about consumer choices. Though sales include any exchange of personal data to a third party for monetary or other valuable consideration, they do not include "the disclosure of personal data where the consumer directs the controller to disclose the personal data or intentionally uses the controller to interact with a third party."
It is hard to say for sure whether the requirement to follow a consumer's direction is stricter or more permissive than the strong consent requirement laid out elsewhere in the Maryland law. As a Maryland consumer myself, do I implicitly direct the disclosure of my sensitive data when I share it in a context where I might expect it to be sold? Or must I give direction through an "unambiguous affirmative action," as required by the law's definition of "consent"?
Further, as with all state privacy laws, Maryland's law excludes data and entities covered by existing federal laws from its ban on sales, including consumer reporting agency activities under the Fair Credit Reporting Act and data covered by the Family Educational Rights and Privacy Act, the Health Insurance Portability and Accountability Act, Title V of the Gramm-Leach-Bliley Act, and so on.
Insurance companies also enjoy a broad exemption under the Maryland law. Insurers and their affiliates, when collecting data for insurance purposes, are exempt, though likely covered by other industry-specific requirements.
Maryland is still a high-water mark
Even if we add one or two asterisks next to the bans in Maryland's consumer privacy law, it still features other notable — though not totally unique — banning provisions.
For one, Maryland bans the processing of personal data for the purpose of targeted advertising if the company "knew or should have known" that the consumer is under the age of 18. A full ban on targeted advertising, coupled with the expanded knowledge standard, will lead to many open questions for operationalization and enforcement.
Maryland also embraces the trend of banning geofencing for certain purposes, first seen in Washington state's My Health My Data Act and since duplicated in a handful of other states. Controllers may not use a geofence to establish a virtual boundary within 1,750 feet of any mental health facility or reproductive or sexual health facility for the purpose of identifying, tracking, collecting data from, or sending any notification to a consumer regarding the consumer's consumer health data.
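To make the distance test concrete, here is a minimal, hypothetical sketch of how a team might screen a proposed geofence center against a list of facility coordinates. The facility list, coordinates and helper functions are my own illustrative assumptions; only the 1,750-foot buffer comes from the statute, and a real screening tool would also need to account for the fence's own radius and the purpose of the processing.

```python
import math

FEET_PER_METER = 3.28084
BUFFER_FEET = 1_750  # the 1,750-foot buffer in the Maryland law

def haversine_feet(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in feet."""
    earth_radius_m = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_m * math.asin(math.sqrt(a)) * FEET_PER_METER

def geofence_too_close(fence_lat, fence_lon, facilities):
    """Flag a proposed geofence center that sits within 1,750 feet of any
    listed mental health or reproductive or sexual health facility."""
    return any(
        haversine_feet(fence_lat, fence_lon, f_lat, f_lon) < BUFFER_FEET
        for f_lat, f_lon in facilities
    )

# Hypothetical facility coordinates (Baltimore-area placeholders).
facilities = [(39.2904, -76.6122), (39.3299, -76.6205)]
print(geofence_too_close(39.2920, -76.6130, facilities))  # True: well inside the buffer
```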
As I wrote before the Maryland law was signed, the innovative drafting in this bill will force privacy pros to reexamine their assumptions. This and other new laws showcase the fact that 2024 brought a true patchwork to the U.S. landscape, as Husch Blackwell Partner David Stauss, CIPP/E, CIPP/US, CIPT, FIP, PLS, and Senior Director of the Future of Privacy Forum's U.S. Legislation team Keir Lamont, CIPP/US, explain further in an IAPP retrospective.
Maryland's law goes into effect 1 Oct. 2025, but processing activities are not subject to enforcement until after 1 April 2026.
Will the next 18 months bring more bans and data minimization requirements? Signs point to yes.
Please send feedback, updates and confusing parking signs to cobun@iapp.org.
Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director in Washington, D.C., for the IAPP.