"That is tsaheylu. The bond. Feel her. Feel her heartbeat… her breath. Feel her strong legs. You may tell her what to do—inside." —Neytiri, Avatar (2009)
With each passing year, it seems less like fantasy or science fiction to imagine a steed effortlessly controlled by our intentions. Whether it takes the form of a six-legged direhorse on the planet Pandora or a 480-horsepower electric Mustang on our own Terran pavement, frictionless locomotion is a dream that continues to capture our attention.
Although we may be many years from truly autonomous vehicles, let alone mind-controlled ones, current innovations in the automotive industry serve as a microcosm of emerging risks in privacy and data governance.
Modern vehicles are built around a digital nervous system: an intricate network of chips, sensors and connections that does all the mechanical things a dumb car once did while also keeping us entertained, optimized, connected and monitored for alertness and safety. A single sports car now carries as many as 3,000 semiconductors and 100 million lines of code.
Even setting aside their dozens of mechanical performance sensors, modern vehicles are packed with every conceivable form of monitoring technology to better measure their interiors, the world around them and, increasingly, their drivers' behavior. Cars are chock-a-block with cameras, microphones, telematics units, lidar and GPS, and they are continuously connected wirelessly to digital infrastructure, other vehicles and the internet.
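To make that breadth concrete, here is a minimal, hypothetical sketch of the kind of telematics record a connected vehicle might transmit. The field names and structure are invented for illustration; they are not any manufacturer's actual schema, which is generally proprietary.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class TelematicsRecord:
    """One illustrative snapshot of connected-vehicle data.

    Hypothetical schema for discussion only; real payloads differ.
    """
    vehicle_id: str                 # often a persistent identifier
    timestamp: datetime
    latitude: float                 # precise GPS location
    longitude: float
    speed_kph: float                # mechanical performance data
    hard_braking_events: int        # driving-behavior inference
    cabin_camera_active: bool       # interior monitoring for alertness
    cabin_audio_sampled: bool       # in-cabin microphones
    paired_phone_id: Optional[str]  # connectivity metadata

record = TelematicsRecord(
    vehicle_id="veh-0001",
    timestamp=datetime.now(timezone.utc),
    latitude=38.8951,
    longitude=-77.0364,
    speed_kph=88.5,
    hard_braking_events=2,
    cabin_camera_active=True,
    cabin_audio_sampled=False,
    paired_phone_id="phone-42",
)
print(record)
```

Even this toy record mixes mechanical, locational, behavioral and in-cabin data in a single payload, which is precisely what makes governance of these data flows difficult.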
As processing power and artificial intelligence-powered inferences expand, automobiles are likely to continue to be the place where the privacy rubber meets the road.
Perhaps this is why the California Privacy Protection Agency chose the mobility industry as the first stated target of its newly minted enforcement powers. In a press release last week, the agency's enforcement division announced a "review" of the data privacy practices of "connected vehicle manufacturers and related CV technologies."
A recent analysis of data sharing practices in the mobility industry by the Electronic Frontier Foundation's Hayley Tsukayama provides another angle on why regulatory scrutiny may be warranted. Tsukayama describes the growing use of data derived from vehicles to inform insurance rates and calls for insurance companies and car manufacturers to embrace principles to limit the types of data used for this purpose:
"But we don't know what, of all of the kinds of personal data that cars already collect — including, for example, footage from in-vehicle cameras — companies might find useful for risk assessment. Today, all the top ten insurance companies have opt-in, voluntary programs that allow consumers to contribute their own telematics data used primarily for pricing auto insurance. Insurance companies should only collect what they need to get a clear, fair assessment of driving risk. To do so, they may not need to collect information such as location data."
The CPPA is not the first regulator to take an interest in the mobility ecosystem. In 2017, the U.S. Federal Trade Commission hosted a workshop on privacy and security issues related to connected, automated vehicles. Soon after, the U.S. Department of Transportation, through the National Highway Traffic Safety Administration, released a voluntary set of principles for automated driving systems titled A Vision for Safety. NHTSA's webpage on Vehicle Data Privacy still links to two helpful resources the Future of Privacy Forum published in 2017, including its Data and the Connected Car infographic. Recent advances in telematics, machine learning, computer vision and connectivity mean these resources may be due for an update.
Though mobility is often seen as a niche and insular industry, it remains at the cutting edge of advances in monitoring and processing data. Regulatory scrutiny in this area can and should inform data privacy practices across other industries. Other planets will have to wait.
Here's what else I'm thinking about, AI governance edition:
- A first look at the comments on AI accountability. In an analysis for the IAPP AI Governance Dashboard, the folks at BBB National Programs broke down the key themes in the policy community's responses to the inquiry from the National Telecommunications and Information Administration about how to bring accountability to AI-powered systems. You can also take a look yourself at what stakeholders filed with the agency, even as the NTIA undertakes the task of sorting through more than 1,400 comments.
- What if the AI Bill of Rights was binding government policy? Nine prominent civil society groups signed a letter calling on the White House to issue an executive order to "make the AI Bill of Rights binding U.S. government policy, order the U.S. government to implement the best practices outlined in the AI Bill of Rights, and ensure that federal agencies and contractors cannot deploy or must stop using automated systems that do not abide by these principles and practices." The letter echoes the call from outgoing White House advisor Suresh Venkatasubramanian, who claims the White House "already knows how to make AI safer." Both argue the executive order should be issued alongside expected guidance from the Office of Management and Budget.
- Governance is global, national and local. Check out these three analyses of policymaker approaches to regulating AI around the world, at the federal level and in the states. But what do regulators expect today? My colleague Caitlin Fennessy, CIPP, answers that in a recent analysis.
Upcoming happenings:
- 9 Aug. at 5:30 p.m. ET, the monthly Tech Policy Happy Hour will take place at Art and Soul.
Please send feedback, updates and Na'vi wisdom to cobun@iapp.org.