Move over big data and the internet of things — artificial intelligence is poised to be the next major trend that privacy pros should stay on top of. In the past month alone, we have seen the launch of a major industry effort to explore the policy ramifications of AI, and the U.S. Department of Transportation has released a policy roadmap for autonomous vehicles, suggesting that regulators and policymakers are eager to get into the AI game. Even the White House got involved this spring when it announced a series of workshops to explore the benefits and risks of AI.

The first fruits of that White House effort were unveiled last Wednesday with an initial report on the immediate future of these exciting technologies. It includes 23 recommendations aimed at the U.S. government and various federal agencies, and while privacy and data protection are not major focuses of the report, it introduces a new vocabulary and raises issues that will implicate the privacy space.

Big data begets AI

After years of trying to wrap our heads around "big data," machine learning may now be the heir apparent to big data in the world of privacy buzzwords. Wednesday's report repeatedly notes that big data is the enabler of artificial intelligence and that data provides the "raw material" for new algorithmic developments and approaches to machine learning. Of course, machine learning is only a subset of what constitutes AI, which is also distinct from automation, deep learning, and robotics. Despite the mess of terminology, it is clear that industry is racing to invest in the whole bucket of technologies. If the phenomenon of big data encouraged nearly every company to view itself as a data company, fueling the privacy profession, AI looks to have a similar trajectory for influencing how organizations do business. 

What that looks like will vary, but it is likely that the same far-reaching worries about fairness and accountability that have dogged every discussion about big data (and informed the FTC's January Big Data Report) will present serious concerns for certain applications of AI. While "Preparing for the Future of Artificial Intelligence" is largely an exercise in stage-setting, the report is likely a harbinger of the same type of attention and focus that emerged within the advocacy community in the wake of the White House's 2014 Big Data Report. For the privacy profession, the report hints at a few areas where our attention ought to be directed.

First, AI is still a nascent, immature field of engineering, and promoting its maturation will involve a variety of training and capacity-building efforts. The report explicitly recommends that ethical training, as well as training in security, privacy, and safety, become an integral part of university curricula on AI, machine learning, and computer and data science. Moving forward, one could imagine that ethical and other non-technical training will also become an important component of our STEM policies at large. Beyond formal education, however, building awareness among actual AI practitioners and developers will be essential to mitigating disconcerting or unintended behaviors and to bolstering public confidence in the application of artificial intelligence. Policymakers, federal agencies, and civil society will also need more in-house technical expertise to become conversant with the current capabilities of artificial intelligence.

Second, while transparency is generally trotted out as the best of disinfectants, balancing transparency in the realm of AI will be a tremendous challenge, both for competitive reasons and because of the "black box" nature of what we're dealing with. While the majority of basic AI research is currently conducted by academics and commercial labs that collaborate to announce and publish their findings, the report ominously notes that competitive instincts could drive commercial labs toward increased secrecy, inhibiting the ability to monitor the progress of AI development and raising public concerns. But even if we can continue to promote transparency in the development of AI, it may be difficult for anyone, whether they be auditors, consumers, or regulators, to understand, predict, or explain the behaviors of more sophisticated AI systems.

The alternative appears to be bolstering accountability frameworks, but what exactly that looks like in this context is anyone’s guess. The report largely places its hopes on finding technical solutions to address accountability with respect to AI, and an IEEE effort on autonomous systems that I’ve been involved with has faced a similar roadblock. But if we have to rely on technical tools to put good intentions into practice, we will need more discussion about what those tools will be and how industry and individuals alike will be able to use them. 

The Sky(net) isn't falling, but…

Technophobes may breathe easier after reading the report, which emphasizes several times that artificial superintelligences able to meet or exceed human capacity across the full range of cognitive tasks remain decades away. While the report raises the specter of autonomous weapons systems and cyber warfare, Skynet is no imminent threat. Instead, the report argues that the best way to head off a dystopian AI is to begin pursuing policies that address the risks and challenges presented by the rapid development of "narrow AI."

From medicine and mobility to, as always, marketing, significant progress has been made in using "a toolkit of AI methods" in specific application areas. The report trumpets the public and private benefits already being realized in fields such as healthcare, transportation, the environment, criminal justice, and economic inclusion. Recent developments in smart cars and drones are touted as major case studies. In terms of near-term regulatory and policymaking activity, the report recommends focusing on any immediate concerns around managing safety, security, and yes, privacy. 

Unfortunately, the report largely punts on what it terms long-term societal and ethical questions, even though some of those concerns may be more near-term than imagined. A number of public comments received by the White House highlight the legitimate threat AI poses to employment and the job market, but the report promises only a future study on automation and the economy. Millions of transportation jobs will likely be eliminated by automation, and AI will also affect white-collar employment, conceivably including routine compliance roles. Try as we might to think otherwise, neither lawyers nor privacy pros are special in this respect.

Preparing for the Future

Aside from a follow-up report on AI's impact on employment, it is unclear what immediate next steps will come from an Administration with fewer than a hundred days left in office. But for anyone interested in the future of privacy, getting a handle on the contours of AI, and on how it intersects with big data, IoT, and fundamental privacy practices, could be increasingly important.
