Privacy Perspectives | Why artificial intelligence may be the next big privacy trend

Move over big data and the internet of things — artificial intelligence is poised to be the next major trend that privacy pros should stay on top of. In the past month alone, we have seen the launch of a major industry effort to explore the policy ramifications of AI, and the U.S. Department of Transportation has released a policy roadmap for autonomous vehicles, suggesting that regulators and policymakers are eager to get into the AI game. Even the White House got involved this spring when it announced a series of workshops to explore the benefits and risks of AI.

The first fruits of that White House effort were unveiled last Wednesday with an initial report on the immediate future of these exciting technologies. It includes 23 recommendations aimed at the U.S. government and various federal agencies, and while privacy and data protection are not major focuses of the report, it does introduce a new vocabulary and raises issues that will implicate the privacy space.

Big data begets AI

After years of trying to wrap our heads around "big data," machine learning may now be the heir apparent to big data in the world of privacy buzzwords. Wednesday's report repeatedly notes that big data is the enabler of artificial intelligence and that data provides the "raw material" for new algorithmic developments and approaches to machine learning. Of course, machine learning is only a subset of what constitutes AI, which is also distinct from automation, deep learning, and robotics. Despite the mess of terminology, it is clear that industry is racing to invest in the whole bucket of technologies. If the phenomenon of big data encouraged nearly every company to view itself as a data company, fueling the privacy profession, AI looks to have a similar trajectory for influencing how organizations do business. 

What that looks like will vary, but the same far-reaching worries about fairness and accountability that have dogged every discussion about big data, and that informed the FTC’s January Big Data Report, will likely present serious concerns for certain applications of AI. While “Preparing for the Future of Artificial Intelligence” is largely an exercise in stage-setting, the report is likely a harbinger of the same type of attention that emerged within the advocacy community in the wake of the White House’s 2014 Big Data Report. For the privacy profession, it hints at a few areas where our attention ought to be directed.

First, AI is still a nascent field of engineering, and promoting its maturation will involve a variety of training and capacity-building efforts. The report explicitly recommends that ethical training, along with training in security, privacy, and safety, become an integral part of university curricula in AI, machine learning, and computer and data science. Moving forward, one could imagine ethical and other non-technical training becoming an important component of STEM policy at large. Beyond formal education, however, building awareness among actual AI practitioners and developers will be essential to mitigate disconcerting or unintended behaviors and to bolster public confidence in the application of artificial intelligence. Policymakers, federal agencies, and civil society will also need more in-house technical expertise to become conversant with the current capabilities of artificial intelligence.

Second, while transparency is generally trotted out as the best of disinfectants, achieving it in the realm of AI will be a tremendous challenge, both for competitive reasons and because of the "black box" nature of what we’re dealing with. While the majority of basic AI research is currently conducted by academics and commercial labs that collaborate to announce and publish their findings, the report ominously notes that competitive instincts could drive commercial labs toward increased secrecy, inhibiting the ability to monitor the progress of AI development and raising public concerns. But even if we can continue to promote transparency in the development of AI, it may be difficult for anyone, whether auditors, consumers, or regulators, to understand, predict, or explain the behaviors of more sophisticated AI systems.

The alternative appears to be bolstering accountability frameworks, but what exactly that looks like in this context is anyone’s guess. The report largely places its hopes on finding technical solutions to address accountability with respect to AI, and an IEEE effort on autonomous systems that I’ve been involved with has faced a similar roadblock. But if we have to rely on technical tools to put good intentions into practice, we will need more discussion about what those tools will be and how industry and individuals alike will be able to use them. 

The Sky(net) isn't falling, but…

Technophobes may breathe easier after reading the report, which emphasizes several times that artificial superintelligences capable of meeting or exceeding human capacity across a full range of cognitive tasks remain decades away. While the report raises the specter of autonomous weapons systems and cyber warfare, Skynet is no imminent threat. Instead, the report argues that the best way to head off a dystopian AI is to begin pursuing policies that address the risks and challenges presented by the rapid development of "narrow AI."

From medicine and mobility to, as always, marketing, significant progress has been made in using "a toolkit of AI methods" in specific application areas. The report trumpets the public and private benefits already being realized in fields such as healthcare, transportation, the environment, criminal justice, and economic inclusion. Recent developments in smart cars and drones are touted as major case studies. In terms of near-term regulatory and policymaking activity, the report recommends focusing on any immediate concerns around managing safety, security, and yes, privacy. 

Unfortunately, the report largely punts on what it terms long-term societal and ethical questions, even though some of those concerns may be more near-term than imagined. A number of public comments received by the White House highlight the legitimate threat AI poses to employment and the job market, but the report promises only a future study on automation and the economy. Millions of jobs in transportation will likely be eliminated by automation, but AI will also affect white-collar employment, conceivably including routine compliance roles. Try as we might to think otherwise, neither lawyers nor privacy pros are special in this respect.

Preparing for the Future

Aside from a follow-up report about AI’s impact on employment, it is unclear what immediate next steps are coming from an Administration with less than a hundred days left in office. But for anyone interested in the future of privacy, getting a handle on the contours of AI, and how it intersects with big data, the IoT, and fundamental privacy practices, could be increasingly important. 

photo credit: Piyushgiri Revagar Clever Cogs! via photopin (license)

3 Comments

  • Sheila Dean • Oct 17, 2016
    Jerome made some great, mindful observations here. They are well worth considering for companies venturing into the IoT and using algorithms.
    
    Machine learning algorithms are a form of artificial intelligence. Big data is the process of commodifying (monetizing, categorizing) what the algorithms have scraped up, of course. So AI has been with us in e-commerce for as long as the internet has used SRE or search criteria.
    
    What people, or even powerful people, do with these information-reaping ecosystems poses immediate and sustained privacy threats due to the lack of ethical or compliance infrastructure.
     
    We face a really cynical set of problems when dealing with governments that use selective enforcement as a way to evade equal protections for the governed.
    
    Companies sell widgets. So for e-commerce this is all getting very complicated, very fast. All businesses doing business with artificial intelligence need to consider adopting a social innovation program, spending time with a working group, or hiring a CSR consultant to incorporate ethics and legal privacy compliance.
    
    It's only going to get hairier and more complicated. So there is no harm in pausing to reflect on your company's direction in order to course-correct or improve upon your human rights performance and self-protective measures.
  • John Berard • Oct 17, 2016
    We used to think keeping dolphins out of the tuna nets was a tough problem, but fishing remains a lot closer to a human endeavor than do the data collection and use behaviors of AI applications, which seem to grow further and further away from human view.
    
    Research into the bias of algorithms ought to be a cautionary tale for AI. We would do well to have life imitate art and apply Asimov's three laws of robotics:
    
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    
    A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    
    A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.  
    
    Just a thought on a rainy and dark day.  Just don't want them all to be
  • Boiko Sergei • Mar 12, 2018
    Well, I think the answer is obvious: it comes down to each company's intellectual property. Everyone will create their own algorithms, which will take a great deal of money, time, and effort to invent, and no one will give them away for free.
    Check why you need to use big data in your e-commerce projects: https://www.cleveroad.com/blog/big-data-in-ecommerce-industry-application-reasons-you-can-t-ignore