
A view from DC: Cyberspace was colonized; 2024 is AI's turn


"Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here."

It was nearly three decades ago that John Perry Barlow wrote those rebellious words, part of a manifesto declaring the independence of cyberspace from the governance structures of meatspace. It was always meant as a provocation, a set of assertions about a worldview that was already being challenged by the migration of institutional control into the digital world.

Over the years, piece by piece, Barlow's tablets of digital stone have been chipped away. His aspirational truisms have come to sound ever more plaintive, more discordant with the digitized world we now inhabit.

Though the self-governing promise of the internet has been upheld in some domains, many others have been subjected to a war of attrition as every sociocultural structure has manifested its power over digital worlds. Far from rejecting power structures, the internet of today largely reflects and amplifies them.

And with good reason. As the internet became more popular and accessible, it also became more complex. It was not Barlow's imagined harmonious and free utopia, but a contested and conflicted domain, where different actors and agendas clashed and competed. Inequality, surveillance, manipulation and exploitation all flourished in cyberspace.

The realities of our networked world taught us that digital systems are not neutral or autonomous; they are shaped by human values, interests and power dynamics. The digital realm is an extension of reality. Our lived experience is no less meaningful — or full of potential harms — if it is plugged into a network.

In response, we began to realize that law and social norms can govern immaterial worlds just as well as they govern the world of matter. The two domains evolve together, like symbiotic creatures. Our physical and digital lives are inextricable. And so, our structures of governance influence and apply to both.

In short, cyberspace was colonized.

But perhaps this outcome is not as bad as Barlow feared. Bits and bytes may be ephemeral, but they are not inconsequential.

Digital systems can prey on the worst angels of our nature. They connect us to the entire world, without regard for whether we have the social bandwidth to handle the connection. They introduce effortless scale to any idea, even those that perpetuate real-world violence, hatred or bigotry. In practice, they also introduce new mechanisms for surveillance and control.

Professor Julie Cohen puts it best in her book "Between Truth and Power":

"Networked digital information technologies enable new kinds of communication but also supply new infrastructural points of control; platform-based, massively intermediated media infrastructures both facilitate and co-opt bottom-up cultural and political production; and algorithmic intermediation processes optimized for behavioral tuning and user engagement amplify both benevolence and malevolence. It has become increasingly apparent that functioning legal institutions have an indispensable role to play in protecting and promoting fundamental human rights in the networked information era."

Each new technological revolution echoes prior revolutions. The rise of artificial intelligence is no different.

Today, the Silicon Valley discourse is rife with reflections of the magical thinking of the early internet pioneers. For some developers, it feels as though the world of AI is a Wild West where laws and social norms don't apply.

But the history of the internet shows us this cannot be the case. Functioning legal institutions have just as indispensable a role to play in the age of algorithms as they did in the age of networks. Regulators have already warned that existing anti-discrimination laws apply to AI systems.

After all, as the AI Risk Management Framework from the U.S. National Institute of Standards and Technology tells us repeatedly, AI systems are sociotechnical in nature. They do not exist in an immaterial world. They are reflections and refractions of our messy human existence. As such, they require robust mechanisms of governance, at both the organizational and societal level.

In 2024, we will see the real-world implications of ubiquitous access to AI systems. We will also see the results of the first attempts to govern and control them. Privacy pros and all others concerned with digital governance must remain vigilant in finding ways to incorporate norms of privacy, responsibility and safety into the development and deployment of these systems.

It is already too late to feign ignorance. In one of its last actions of 2023, the U.S. Federal Trade Commission settled a complaint against Rite Aid over its allegedly secretive use of facial recognition systems for retail security. The proposed order in the case provides a detailed roadmap of the reasonable bias mitigation that will be expected of companies throughout the AI life cycle. Importantly, the requirements include hiring and empowering qualified professionals to manage a comprehensive AI governance program.

The law is not static; it adapts to technical and social change. In the U.S., 2024 will bring continued coordinated federal agency action to define AI governance best practices. It will also bring legislative solutions at the state level, and perhaps even in Congress. Regardless of where these ideas originate, we will need to pay close attention so we can properly incorporate them into daily operational practice.

The age of AI is once again challenging legal concepts of "property, expression, identity, movement, and context." Once again, we are adapting. This time, leveraging the lessons of cyberspace policy, maybe we will adapt in time to mitigate the most serious harms.

Here's what else I'm thinking about:

  • Updated rules for children's privacy are coming. With another end-of-year flourish, the FTC announced its much-anticipated update to the Children's Online Privacy Protection Act Rule. The draft includes many proposed tweaks to the COPPA Rule, including limits on nudging, revised personal data collection and processing exceptions, a new opt-in regime for targeted advertising and updates to security and retention requirements. More analysis will be forthcoming.
  • A year of scrutiny for automakers kicked off with a new Kashmir Hill investigation. The New York Times article highlights not only the lingering general privacy concerns about the collection and use of personal data by automobiles, but also the potential for these systems to be used for intimate partner abuse and surveillance.
  • The Future of Privacy Forum presented a risk framework for body-related data in immersive technologies. The helpful framework breaks down extended reality technologies into the various body-based sensors that they incorporate, helping developers and others better conceptualize the layers of privacy risk that can be triggered by these systems across the data life cycle.
  • In case you missed it, the annual IAPP-EY Privacy Governance Report is out. The survey-based report presents some intriguing benchmarks from the past year of privacy governance. One interesting statistic: despite difficult economic conditions, only 14% of organizations saw a decrease in their privacy teams this year. The growth and diversification of data stewardship functions are only likely to continue.

Please send feedback, updates and manifestos to cobun@iapp.org.


