Artificial Intelligence

Artificial intelligence is a broad term used to describe an engineered system in which machines learn from experience, adjust to new inputs and potentially perform tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. The field of artificial intelligence is evolving rapidly across sectors and industries.

This topic page regularly updates with the IAPP’s latest resources on AI and privacy.

Featured Resources

TOOL

Global AI Law and Policy Tracker

This tracker identifies AI legislative and policy developments in a subset of jurisdictions.
Read More

CHART

EU AI Act: 101

This chart provides an overview of the EU AI Act, which lays down a comprehensive legal framework for the development, marketing and use of AI in the EU in conformity with EU values.
Read More

ARTICLE SERIES

Global AI Governance Law and Policy: Jurisdiction Overviews

This article series analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in five jurisdictions: Singapore, Canada, the U.K., the U.S. and the EU.
Read More

REPORT

Professionalizing Organizational AI Governance

This report on organizational AI governance focuses on the internal guidelines and practices organizations follow to ensure responsible development, deployment or use of AI.
Read More

RESOURCE ARTICLE

Consumer Perspectives of Privacy and AI

This resource analyzes how consumer perspectives of AI are shaped by the way emerging technologies affect their privacy.
Read More

GLOSSARY

Key Terms for AI Governance

This glossary provides definitions and explanations for some of the most common terms related to AI governance.
Read More


Intro to AI

Artificial Intelligence

Artificial intelligence is a broad term used to describe an engineered system where machines learn from experience, adjusting to new inputs, and potentially performing tasks previously done by humans. More specifically, it is a field of computer science dedicated to simulating intelligent behavior in computers. It may include automated decision-making. Acronym: AI... Read More

Ethics

Training AI on personal data scraped from the web

A cautionary tale is unfolding at the intersections of global privacy, data protection law, web scraping and artificial intelligence. Companies that deploy generative AI tools are facing a "barrage of lawsuits" for allegedly using "enormous volumes of data across the internet" to train their programs. For example, the class action lawsuit PM v. OpenAI LP, filed in San Francisco federal court in late June, claimed OpenAI uses "stolen private information, including personally identifiable informa... Read More

Companies using AI tools for workplace monitoring, prompting privacy concerns

The Washington Post reports companies are using artificial intelligence tools and apps in the workplace to improve employee skills, well-being and connectivity. But some employees are concerned about data collection and privacy. Brookings Institution Center for Technology Innovation Senior Fellow Darrell West said disclosure is "the most important thing." He said, "People need to know how they're being monitored." Full story... Read More

Generative AI: Privacy and Ethical Considerations for Your Business

Original broadcast date: 31 May 2023. This webinar discusses the intersection of privacy and AI and the implications for your privacy program, asking whether privacy is an enabler or an inhibitor of innovation in AI and data utilization. Also discussed is how well-crafted, thoughtful data privacy programs can be the gateway to an ethical internet. Host: Marjorie Boyer, Programming and Speaker Coordinator, IAPP. Panelist: Al... Read More

A view from DC: Should ChatGPT slow down?

It may not be a viral dance move yet, but the latest hot trend in tech circles is to call for a slowdown on artificial intelligence development. This week, an open letter from the "longtermist" Future of Life Institute called for a six-month pause on the development of AI systems such as large language models due to concerns around the evolution of “human-competitive intelligence” that could bring about a plethora of societal harms. Scholars agree that caution in the development of advanced alg... Read More

Chinese company develops AI program that predicts when employees will leave job

Chinese company Sangfor Technologies has drawn scrutiny for its AI program that can predict if an employee is about to leave their job, South China Morning Post reports. The program can spy on an employee’s browsing activity, such as viewing job posts and sending application emails. The program came to light after a user on Maimai.cn, a professional networking application, claimed he was fired when his company discovered, using a monitoring system, he was applying to other jobs.Full Story... Read More

Humans in the Loop: Building a Culture of Responsible AI

Original broadcast date: 24 June 2021. This interactive privacy education web conference describes a case study of how the governance structures of an enterprise privacy program can be extended to bring to life “responsible AI,” a growing area of research merging concepts from privacy, data ethics and newer areas, such as explainable AI. The speakers share industry best practices and demonstrate methods for assessing risk in AI projects and for developing a framework for responsible AI. Read More

Study looks at advances, long-term impact of AI

A new report from Pew Research Center and Elon University’s Imagining the Internet Center found 68% of responding developers, business and policy leaders, researchers and activists do not believe ethical principles focused on public good will be employed in most AI designs by 2030. The report includes written explanations from professionals, including Google’s Chief Internet Evangelist Vint Cerf, who said, “There will be a good-faith effort, but I am skeptical that the good intentions will neces... Read More

Ethical-AI-by-Design: How Responsible AI Yields a Brighter Tomorrow

Original broadcast date: 24 February 2021. This session elevates, demystifies and discusses the current state of meaningful thought leadership around ethical AI. Our cross-disciplinary panel reviews how to incorporate ethical AI principles into the build of products and services to mitigate risk. Absent a holistic legal framework to address ethical AI, how can current legal frameworks inform practical risk and legal guardrails? The panel breaks down technical aspects of AI builds and outlines the importance of applied ethics in the AI space, in the same way bioethics anchors considerations around the edgiest of medical advancements. Read More

Ensuring that responsible humans make good AI

We are seeing accelerating expansion in the range and capabilities of machine aids for human decision making and of products and services embodying artificial intelligence and machine learning. AI/ML is already delivering significant societal benefits, including improvements in convenience and quality of life, productivity, efficiency, environmental monitoring and management, and capacity to develop new and innovative products and services. The common feature of "automation applications" — the ... Read More

The Privacy Advisor Podcast: Carissa Véliz on privacy, AI ethics and democracy

Artificial intelligence, big data and personalization are driving a new era of products and services, but this paradigm shift brings with it a slate of thorny privacy and data protection issues. Ubiquitous data collection, social networks, personalized ads and biometric systems engender massive societal effects that alter individual self-determination, fracture shared reality and even sway democratic elections. As an associate professor at the University of Oxford's Faculty of Philosophy and the... Read More

AI Forum issues guidelines for 'trustworthy' use

The AI Forum’s Law, Society and Ethics Working Group has published a set of guidelines for designing, developing or using artificial intelligence in New Zealand. The principles include respecting fairness and justice; ensuring reliability, security and privacy; providing appropriate human oversight and accountability; and using the technology to promote the well-being of New Zealanders as much as possible. Using the principles, stakeholders can better manage the “identified risks and (unintended... Read More

London police launch facial-recognition program

The Associated Press reports London police have begun using facial-recognition technology in public spaces to scan for criminals. London Police Commander Mark McEwan said the surveillance is "the most accurate technology available to us" and officers will use the tech as "a prompt" to engage with a suspect. "We don’t accept this. This isn’t what you do in a democracy. You don’t scan people’s faces with cameras. This is something you do in China, not in the U.K.," Big Brother Watch Director Silki... Read More

Police use of automated license plate readers under fire

Following an audit of four California police agencies' use of automated license plate readers, California Sen. Scott Wiener, D-San Francisco, said the “state of affairs is totally unacceptable,” TechCrunch reports. The audit found Los Angeles stores 320 million license plate images, which are shared with “hundreds” of agencies that lack privacy policies. Wiener said many agencies “are violating state law, are retaining personal data for lengthy periods of time, and are disseminating this ... Read More

Facial-recognition technology: fundamental rights considerations in the context of law enforcement

This paper from the European Union Agency for Fundamental Rights explores fundamental rights implications that should be taken into account when developing, deploying, using and regulating facial-recognition technologies. It draws on recent analyses, as well as data and evidence from interviews conducted with experts and representatives of national authorities that are testing facial-recognition technologies. The last sections provide a brief legal analysis summarizing applicable European Union and Coun... Read More

European Commission considering 3- to 5-year ban of facial-recognition tech

The European Commission may ban facial recognition for three to five years as it figures out how to curb abusive uses of the technology, BBC News reports. In an 18-page document, the commission plans to introduce new rules to enhance existing privacy laws and propose obligations on artificial intelligence users and developers. The document calls for EU countries to create an authority to keep track of those rules. Should the technology be temporarily banned, "a sound methodology for assessing th... Read More

EU aims to regulate AI ethics

Following the success of the General Data Protection Regulation in setting the global standard for data protection, the European Union is doubling down on its position as the ethical regulator for technology. On April 9, a high-level expert group set up by the European Commission and comprising 52 independent experts representing academia, industry and civil society presented its first set of ethical guidelines for artificial intelligence. Specifically, the aim is to “build trust” in AI by est... Read More

EU stresses lawfulness in new ethical AI guidelines

The winds of regulatory oversight for artificial intelligence are blowing in the U.S. and Europe. The European Commission signed off on its Ethical Guidelines for Trustworthy AI earlier this month, the culmination of several months of deliberations by a select group of “high-level experts” plucked from industry, academia, research and government circles. In the advisory realm, the EU guidance joins forthcoming draft guidance on AI from a global body, the Organization for Economic Cooperation an... Read More

Lawmakers introduce House resolution on ethical AI

There are few signs pointing to U.S. regulation for artificial intelligence happening anytime soon. But two Congressional Democrats representing Silicon Valley and Detroit want ethical guidelines for AI, and they proposed a House resolution to get the ball rolling. Their ideas, which aim for AI system accountability and oversight, personal data privacy and AI safety, were endorsed by some of the giants of tech developing AI, including Facebook and IBM. In late February, Reps. Brenda Lawrence, D... Read More

AI ethics and moving beyond compliance

The digital economy is at the center of a seismic change with the convergence of big data and artificial intelligence. The oceans of digital information and low-cost computing power are providing endless marketing opportunities. The rapid rate of innovation is proving to be one of the most transformative forces of our time. At the same time, we are faced with ethical dilemmas challenging users’ digital dignity and redefining privacy norms. The past year will likely go down as the year of dubiou... Read More

Generative AI

Can Generative AI Survive the GDPR? (AI Governance Global, an IAPP event 2023)

The spectacular development of generative artificial intelligence has triggered global thinking about how best to regulate the technology's risks. Several proposals for new rules have been advanced, including during the legislative process for an EU AI Act. When it comes to issues related to data protection, privacy and security, however, generative AI is already regulated by the EU General Data Protection Regulation. Italy's data protection authority, the Garante, only lifted a ban on ChatGPT after OpenAI committed to take a series of actions to address GDPR issues. Still, major questions remain open, and several other DPAs have launched investigations, while the European Data Protection Board has created a task force on the matter. What could be the legal basis for training large language models with personal data? How should the other GDPR issues be addressed? Is it possible to reconcile the EU's data protection regulation with the need for innovation? And how is the potential governance of generative AI being shaped in the context of U.S. and global regulation? Read More

Data protection issues for employers to consider when using generative AI

The recent explosion of generative artificial intelligence tools coincides with a parallel explosion in privacy legislation, both in the U.S. and around the world. In the U.S., 13 states passed comprehensive data protection laws in less than three years. Globally, most developed countries passed new or stricter privacy laws within the last decade. Many of these laws explicitly regulate the application of AI. Consequently, feeding personal data into generative AI tools and handling personal data... Read More

Me, myself and generative AI

As embarrassing as this is to admit to my fellow privacy peers, my Instagram account was recently hacked. In a moment when I wasn’t thinking logically, I clicked on a link a "friend" had sent me (unbeknownst to me, the friend's account had also been hacked). Ten minutes later, I was kicked out of my account and had my two-factor authentication changed to a different number.  I scrambled to send text messages to as many people as I could that my account was hacked and to not engage with it until... Read More

Generative AI: Privacy and tech perspectives

Launched in November 2022, OpenAI’s chatbot, ChatGPT, took the world by storm almost overnight. It brought a new technology term into the mainstream: generative artificial intelligence. Generative AI describes algorithms that can create new content such as essays, images and videos from text prompts, autocomplete computer code, or analyze sentiment. Many may not be familiar with the concept of generative AI; however, it is not a new technology. Generative adversarial networks — one type of gene... Read More

Generative AI: A ‘new frontier’

When asked to explain the privacy concerns of generative artificial intelligence apps, ChatGPT listed five areas — data collection, data storage and sharing, lack of transparency, bias and discrimination, and security risk — each with a brief description. "Overall, it’s important for companies to be transparent about the data they collect and how they use it, and for users to be aware of the potential privacy risks associated with AI chatbots," ChatGPT said. And it was not wrong. These are am... Read More

Legislation, Regulation and Enforcement

Global AI Law and Policy Tracker

Governance of AI often begins with a jurisdiction rolling out a national strategy or ethics policy instead of legislating from the get-go. This tracker identifies legislative and policy developments in a subset of jurisdictions. The tracker also offers brief commentary on the broader AI context and related developments and identifies laws or policies in parallel professions like privacy. Read More

The EU AI Act: 'We have a deal!' Now what?

Over the weekend, EU co-legislators reached a political agreement on the Artificial Intelligence Act more than two and a half years after it was initially proposed. During this #LinkedInLive, the IAPP's Caitlin Fennessy, CIPP/US, will moderate a conversation between Ashley Casovan (Managing Director, AI Governance Center, IAPP) and Isabelle Roccia, CIPP/E, (Managing Director, Europe, IAPP). They will discuss the significance of the announcement, how we got there, what was agreed upon and what the next steps are. Read More

Luca Bertuzzi on the EU AI Act's political deal and what's next

After a grueling trilogue process that featured two marathon negotiating sessions, the European Union finally came to a political agreement 8 Dec. on what will be the world's first comprehensive regulation of artificial intelligence. The EU AI Act will be a risk-based, horizontal regulation with far-reaching provisions for companies and organizations using, designing or deploying AI systems. Though the so-called trilogue process is a fairly opaque one, where the European Parliament, European Co... Read More

EU reaches deal on world's first comprehensive AI regulation

After three days of intense negotiations, the European Union reached a political agreement 8 Dec. on the Artificial Intelligence Act, which would be the world's first comprehensive regulation of AI.  The trilogue process between the European Commission, Council of the European Union and European Parliament stretched on for more than 32 hours over the course of a three-day period last week, with negotiators announcing the deal late Friday night.  European Commission President Ursula von der Ley... Read More

Voice actors and generative AI: Legal challenges and emerging protections

The disruption from generative AI within the entertainment industry is palpably clear. The Writers Guild of America recently reached a landmark settlement with the Alliance of Motion Picture and Television Producers on — among other things — the future acceptable uses and restrictions of artificial intelligence in the screenwriting process. Along with more recently settled agreements with the Screen Actors Guild, these discussions have been the first major attempts to systematically address the ... Read More

UK First-tier Tribunal overturns ICO enforcement action against Clearview AI

In October, the U.K.'s First-tier Tribunal overturned the Information Commissioner's Office's May 2022 fine and enforcement notice issued against Clearview AI. Clearview AI has no presence in the U.K., but its database includes images of individuals in the country scraped from public sites. The ICO issued the fine on the basis that Clearview AI was processing personal data related to the monitoring of the behavior of individuals in the U.K., which triggered the extraterritorial application of U.... Read More

Regulating AI (AI Governance Global, an IAPP event 2023)

A call to action has been sounded around the world for regulators to interpret and apply their mandates to meet the moment of artificial intelligence governance. The spectrum of activity has been as varied as it has been voluminous, with regulators clarifying existing regulatory approaches, censuring certain AI practices, and coordinating approaches with other domestic and international regulators. Privacy has been an early and dominant domain through which the development and use of AI has been scrutinized and enforced. This panel brought together the perspectives of privacy commissioners and deputy commissioners leading the regulatory enforcement of AI. Read More

What's next for US state-level AI legislation

While the picture is becoming clearer on the U.S. response to artificial intelligence policy, there's much to be learned about whether U.S. states will follow the federal path or go their own way. Government use, algorithmic discrimination and so-called "deepfake" election advertisements are among the top AI priorities for state lawmakers heading into the 2024 legislative season, state Sen. James Maroney, D-Conn., told attendees of the inaugural IAPP AI Governance Global. AIGG is a first-of-its-kind... Read More

US policymakers, regulators discuss future of AI regulation at AIGG

It's been an earth-shattering week for those following policy developments in the artificial intelligence governance space. Monday kicked off with a sweeping executive order from U.S. President Joe Biden on AI safety, security and trust. The G7 issued a code of conduct just days after the United Nations named its new AI advisory board, and this week the U.K. is hosting its AI Safety Summit, issuing the Bletchley Park Declaration. Running in tandem with these developments, the IAPP is hosti... Read More

US Senate subcommittee examines AI's impact on the workplace

A 31 Oct. hearing by the U.S. Senate Committee on Health, Education, Labor and Pensions' Subcommittee on Employment and Workplace Safety explored what regulations may be necessary for artificial intelligence in workplace contexts. IAPP Staff Writer Alex LaCasse reported on the key takeaways and witness proposals from the hearing. Editor's note: Explore the IAPP AI Governance Center and subscribe to the AI Governance Dashboard. Full story ... Read More

UK digital regulators discuss interagency enforcement, AI governance coordination

As the U.K. sets out to develop artificial intelligence regulations, as well as pending legislation for online safety, data security and privacy, a key question is what form its regulatory scheme will take to account for legislative changes and technological advances driven by AI. U.K. Information Commissioner's Office Executive Director of Regulatory Risk Stephen Almond said both the proposed Online Safety Bill and the Data Protection and Digital Information Bill, if passed, will present dig... Read More

London calling: Digital regulation and AI governance

The U.K. Digital Regulation Cooperation Forum brings together four U.K. regulators — the Competition and Markets Authority, the Information Commissioner’s Office, the Office of Communications and the Financial Conduct Authority. It was established to deliver a coherent approach to digital regulation and to ensure a greater level of cooperation given the unique challenges posed by the regulation of online platforms. It has been particularly active on artificial intelligence governance in the run-up to the U.K.'s AI Safety Summit later this year, and has also issued guidance across the spectrum of digital regulation, such as on age-assurance technologies. Join the IAPP's Joe Jones in conversation with the first CEO of the DRCF, Kate Jones, and the ICO's Executive Director of Regulatory Risk, Stephen Almond, to learn about the work and priorities of the DRCF on AI governance and beyond. Read More

Poll: Americans want federal regulation of AI

A poll of 1,001 registered U.S. voters conducted by the Artificial Intelligence Policy Institute found the majority want federal regulation of AI, ZDNet reports. Fifty-six percent of those polled support federal regulation and 82% said they do not trust technology leaders to tackle regulation independently. Sixty-two percent reported concerns about AI, with 86% saying they believe it could "accidentally cause a catastrophic event." Editor's note: Explore the IAPP AI Governance Center and subscri... Read More

Beyond GDPR: Unauthorized reidentification and the Mosaic Effect in the EU AI Act

A key concern in today's digital era is the amplified risk of unauthorized reidentification brought on by artificial intelligence, specifically by the large and diverse data sets used to train generative AI models, such as large language models. However, these risks can be effectively mitigated. By adopting technology solutions that uphold legal mandates, organizations can harness the power of AI to realize commercial and societal objectives without compromising data security and privacy. This ... Read More

Catching up with the co-author of the White House Blueprint for an AI Bill of Rights

As automated systems rapidly develop and embed themselves into modern life, policymakers around the world are taking note and, in some cases, stepping in. Earlier this year, the Biden administration took an early step by releasing a Blueprint for an AI Bill of Rights. Comprising five main principles and expectations for automated systems, along with a slate of real-world examples of the potential harms and benefits of artificial intelligence, the Blueprint is a must-read f... Read More

Why AI may hit a roadblock under India’s proposed Digital Data Protection Bill

From policymakers and citizens to businesses and privacy forums, everyone is talking about artificial intelligence these days. Though the term has long been in use, many confuse AI with automation, a process so old it was in use as early as 1500 B.C.E., when timekeeping was automated in Babylon and Egypt. Although automation is the start of the roadmap to AI, what arguably differentiates AI from mere automation is data, as it is dependent on the availability and quality of data from which it le... Read More

Schumer outlines comprehensive US blueprint for AI regulation

U.S. Congress is prioritizing the establishment of rules for the deployment and use of artificial intelligence. While various proposals have begun to surface in recent weeks, Sen. Chuck Schumer, D-N.Y., has stepped up with arguably the most comprehensive plan yet. Speaking at the Center for Strategic and International Studies, Schumer unveiled a two-part strategy to "move us forward on AI" with "one part framework, one part process." The former component of the strategy is the "Securities, Acc... Read More

The EU Artificial Intelligence Act: A look into the EU negotiations

Original broadcast date: 31 May 2023. The IAPP presents an update on the EU Artificial Intelligence Act. Proposed by the European Commission in April 2021, the AI Act has been fiercely debated ever since. The European Parliament will formalize its version in June, opening the way for trilogue negotiations with member states and the European Commission to finalize the law. During this LinkedIn Live broadcast, the IAPP's Isabelle Roccia will moderate a discussion between Laura Caroli, Rocco Panetta... Read More

AI: The transatlantic race to regulate

While artificial intelligence has been around for decades, it has only recently captured the public's imagination and entered the mainstream. The exponential growth of AI tools, including chatbots such as ChatGPT, Bard and Dall-E, Google's Magic Eraser for removing distractions and other people from personal photos, the University of Alberta detecting early signs of Alzheimer's dementia through your smartphone, or AmazonGo streamlining in-person grocery shopping, illustrates the continued spread... Read More

How existing data privacy laws may already regulate data-related aspects of AI

The explosive growth of ChatGPT and other generative artificial intelligence platforms has highlighted the promise of AI to every business leader and investor. ChatGPT is a generative AI language model — a type of machine learning — that allows users to ask questions and receive answers in a manner that mimics human conversation. The irony is that while ChatGPT and similar generative AI applications have captured the headlines, other types of AI have been in development and use for quite some t... Read More

AI governance, regulation top of mind at IAPP CPS 2023

Like other aspects of the modern economy, privacy stands to be fundamentally revolutionized by the rapid development of generative artificial intelligence and algorithmic decision-making systems. Across numerous sessions at the IAPP Canada Privacy Symposium 2023 last week, attendees took an interest in how organizations ensure they build comprehensive AI governance frameworks, while keeping an eye on how the technology could be regulated in Canada, potentially through the proposed Artificial In... Read More

Khan: FTC stands ready to 'vigorously enforce' AI

In an op-ed in The New York Times, U.S. Federal Trade Commission Chair Lina Khan said generative artificial intelligence will be "highly disruptive," and the agency "will vigorously enforce" its laws "even in this new market." She said, "We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if w... Read More

As generative AI grows in popularity, privacy regulators chime in

There's no doubt the rapid growth of generative artificial intelligence and large language systems like ChatGPT is getting the attention of the privacy profession and taking the business world by storm. During her keynote address at the IAPP Global Privacy Summit 2023 in Washington, D.C., author and generative AI expert Nina Schick demonstrated the eye-opening growth of ChatGPT, pointing out it only took five days for it to reach 1 million users and two months to reach 100 million users. This g... Read More

UK releases white paper on AI regulatory framework

The U.K. Department for Science, Innovation and Technology published a white paper with its approach to regulating artificial intelligence technologies. The regulatory framework seeks to "build public trust in cutting-edge technologies and make it easier for businesses to innovate, grow and create jobs." The approach consists of five AI principles: safety, transparency, fairness, accountability and governance, and redress. U.K. regulators will roll out guidance within the next 12 months to help ... Read More

Europol report warns against criminal uses of generative AI

Europol published a report warning about the exploitation of OpenAI's ChatGPT and other generative artificial intelligence systems by cybercriminals, Euractiv reports. "While all of the information ChatGPT provides is freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime," the report said. Full Story... Read More

A look at European Parliament’s AI Act negotiations

The proposed Artificial Intelligence Act would be the first horizontal regulation of AI in the world, but, as always, the devil is in the details. Though the Council of the European Union has nearly completed its version, the European Parliament is still negotiating its own. IAPP Editorial Director Jedidiah Bracy, CIPP, looks at how other EU lawmakers and stakeholders are crafting this massive, precedent-setting legislation. Full Story

The Privacy Advisor Podcast: MEP Tudorache unpacks think... Read More

White House OSTP releases 'Blueprint for an AI Bill of Rights'

The White House Office of Science and Technology Policy published "Blueprint for an AI Bill of Rights," which provides design, development and deployment guidelines for artificial intelligence technologies. Data privacy, algorithmic discrimination protections and user choice principles are among the OSTP's "five common sense protections to which everyone in America should be entitled." The OSTP said the blueprint is "a vision for a society" and its AI use focuses on protections from the outset, i... Read More

CEPS publishes EU AI Act report

The Centre for European Policy Studies published a research paper on the proposed EU Artificial Intelligence Act. The authors said an agreement could be struck by mid-2023, but it may hinge on the ability of co-legislators to "converge on key issues such as the definition of AI, the risk classification and associated regulatory remedies, governance arrangements and enforcement rules.” The paper presents eight major recommendations to avoid overlapping regulations if the AI Act passes, including ... Read More

Sweden's DPA creates pilot program for decentralized AI

Sweden’s data protection authority, the Integritetsskyddsmyndigheten, launched a pilot project with Sahlgrenska University Hospital, Region Halland and AI Sweden. The IMY will provide the three partners with legal guidance for decentralized artificial intelligence. To get ahead of potential privacy and personal data protection pitfalls using AI, the project will allow the entities to test potential AI uses under IMY supervision. Read More

Inside the EU's rocky path to regulate artificial intelligence

In April last year, the European Commission published its ambitious proposal to regulate artificial intelligence. The regulation was meant to be the first of its kind, but progress has been slow due to the file's technical, political and juridical complexity. Meanwhile, the EU lost its first-mover advantage as other jurisdictions like China and Brazil managed to pass their legislation first. As the proposal enters a crucial year, it is high time to take stock of the state o... Read More

FTC takes steps toward privacy, AI rulemaking

As the debate rages on regarding whether the U.S. Federal Trade Commission should or could begin rulemaking on privacy, the commission has signaled it is not willing to wait for a consensus. On Dec. 10, the FTC filed an Advance Notice of Proposed Rulemaking with the Office of Management and Budget, initiating consideration of a rulemaking process on privacy and artificial intelligence. The filing describes the FTC's intent as seeking to "curb lax security practices, limit privacy abuses, an... Read More

The EU AI Regulation — What’s New and What’s Not?

Original Broadcast Date: May 2021 The proposal for an EU artificial intelligence regulation is major news, with many news outlets citing the regulation’s significant impacts and global consequences. But are these obligations really new or already considered best practice? In this session, ADP Chief Privacy Officer Cécile Georges and Morrison & Foerster Associate Marijn Storm discussed the practical impact of the proposed regulation and some existing best practices that could help cover most req... Read More

Web Conference: The Face-Off — How Regulators Will Take on Facial Recognition Technology

Original broadcast date: Nov. 24, 2020 The EU General Data Protection Regulation sets out an ambitious unified privacy approach for the European Union, but regulatory practice shows facial recognition technologies are treated differently within countries. While a U.K. court found it permissible for the South Wales Police to use facial data to identify individuals at a large football match, Sweden's data protection authority, Datainspektionen, issued a fine of roughly 16,500 GBP to a school board that used cameras in a classroom with the aim of automating the registration process. Examples such as these raise concerns, ranging from privacy to equal treatment. The main issues include lack of transparency and the questionable reliability of algorithms, which could lead to a lack of concise information, biased results and discrimination. Although different jurisdictions may take different tacks, the nature of this technology is global; therefore, national lawmakers and regulators must establish a borderless approach. Hear this roundtable discuss what kind of legal framework may ensure the technology is used in a way that adequately balances concerns with the social and cultural differences among the continents. Read More

EU regulators ponder potential avenues to address AI

The European Union wants to figure out how to properly regulate artificial intelligence, and everyone has an opinion on how to do it. A coalition of 14 EU member states is urging the European Commission to adopt a "soft law approach" to AI to incentivize the development of the technology. The Presidency of the Council of the European Union called for a fundamental rights-based AI approach to "harness the potential of this key technology in promoting economic recovery in all sectors in a spirit ... Read More

Council for Transparency calls for regulations, transparency in AI

Chile’s Council for Transparency is advocating for a regulatory framework and transparency regarding automated decisions that use artificial intelligence. In a release, the council said the framework should determine “the rights and responsibilities of the different actors” and establish “the ideal mechanisms for the control and protection of personal data that are used to build the algorithms.” A representative said approval of the draft Law on Protection of Personal Data and establishing an au... Read More

Privacy and racial justice: Regulating facial recognition technology

Realization of the disparate, negative impacts of facial recognition technologies on different ethnic and racial groups, as well as lingering privacy concerns related to their use, has made companies increasingly hesitant to tie their bottom line to them. IBM, for example, recently announced that it would exit the facial recognition business due to concerns over racial bias inherent in the technology. The decision was reported to be in response to the killing of George Floyd under the custo... Read More

What Facebook's $550M settlement teaches us about the future of facial recognition

In January, plaintiffs and Facebook reached the largest privacy settlement in U.S. history. Facebook agreed to settle for $550 million for violations of the Illinois Biometric Information Privacy Act in its "tag suggestions" feature, which identifies faces in uploaded photos and suggests users who match the faces. Apple and Google are facing similar lawsuits. This settlement will have far-reaching implications for all businesses using biometric identification techniques, such as facial recogniti... Read More

Lawmakers (continue to) grapple with how to regulate facial recognition

At a U.S. House Committee on Oversight and Reform hearing Jan. 15, lawmakers continued to investigate the potential risks posed by both government and commercial use of facial-recognition technology. The hearing was the third in a series on the topic as the committee seeks guidance on how to legislate use of the controversial surveillance tool, a tool law enforcement cites as essential to thwarting dangerous crimes against the American public and private industry seeks to use for everything from... Read More

US advises EU to avoid heavy regulation of AI

The U.S. White House has advised the European Union to avoid heavily regulating artificial intelligence until risk assessments and cost-benefit analyses have been carried out, Euractiv reports. The U.S. takes this position in a set of regulatory principles that will be formally announced during the Consumer Electronics Show in Las Vegas. “Europe and our allies should avoid heavy-handed innovation-killing models, and instead consider a similar regulatory approach,” a statement from the White Hous... Read More

White House releases guidance on AI and report on automated vehicles

Acting U.S. Office of Management and Budget Director Russell Vought has issued a draft memo regarding guidance on how government agencies should approach the "Executive Order on Maintaining American Leadership in Artificial Intelligence." The guidance seeks to support agencies' efforts to create "a robust innovation ecosystem" around AI while preserving values and principles, including privacy. The guidance also discusses the need for transparency as it relates to the collection, processing and ... Read More

US Senate grapples with how to regulate AI

At a hearing at the U.S. Senate Committee on Commerce, Science, and Transportation June 25, lawmakers aimed to determine what kind of government intervention, if any, is necessary for artificial intelligence given that companies are competing for "optimal engagement" from internet users, something that is often achieved through user manipulation and without their knowledge.  "While there must be a healthy dose of personal responsibility when users participate in seemingly free online services, ... Read More

Want Europe to have the best AI? Reform the GDPR

Artificial intelligence is rapidly transforming the global economy and society. From accelerating the development of pharmaceuticals to automating factories and farms, many countries are already seeing the benefits of AI. Unfortunately, it is becoming increasingly clear that the European Union’s data-processing regulations will limit the potential of AI in Europe. Data provides the building blocks for AI, and with serious restrictions on how they use it, European businesses will not be able to... Read More

Metaverse, VR and AR

Metaverse and privacy

From Facebook's recent decision to rename itself "Meta" to Epic Games' billion-dollar investment in metaverse technologies, the metaverse has dominated the news and will likely continue to do so over the next several years. To date, there is no universally accepted definition for the term "metaverse" and, for many, it suggests a new but undeveloped future of the internet. According to J.P. Morgan, the metaverse is a seamless convergence of our physical and digital lives, creating a unified, virt... Read More

Report highlights key features of metaverse for US Congress to debate

A report compiled by the U.S. Congressional Research Service titled “The Metaverse: Concepts and Issues for Congress” presented several policy issues for lawmakers to consider. The report detailed how technologies such as augmented reality, mixed reality and virtual reality could innovate in a number of capacities, including entertainment, health care, military and engineering. The report included privacy concerns regarding the potential to "collect and monetize personal data" and "track and min... Read More

Epic Games, Lego form partnership to develop metaverse with children in mind

The Verge reports Epic Games and Lego announced a new partnership to develop a metaverse space "with the wellbeing of kids in mind." While the companies have not yet decided what the space will look like, they announced development will "protect children’s right to play by making safety and wellbeing a priority; safeguard children’s privacy by putting their best interests first," and "empower children and adults with tools that give them control over their digital experience." Epic Games has str... Read More

Machine Learning & Facial Recognition

Machine Learning

A subfield of AI involving algorithms that enable computer systems to iteratively learn from and then make decisions, inferences or predictions based on data. These algorithms build a model from training data to perform a specific task on new data without being explicitly programmed to do so. Machine learning implements various algorithms that learn and improve by experience in a problem-solving process that includes data cleansing, feature selection, training, testing and validation. Companies... Read More
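The definition above can be illustrated with a small sketch (the data and function names here are hypothetical, chosen only for illustration): a toy one-nearest-neighbor classifier "learns" from labeled training points and then predicts labels for unseen test points, without any task-specific rules being explicitly programmed.

```python
import math

def nearest_neighbor_predict(train_X, train_y, x):
    """Predict the label of x as the label of its closest training point."""
    distances = [math.dist(p, x) for p in train_X]
    return train_y[distances.index(min(distances))]

# Toy training data: two clusters, labeled 0 and 1.
train_X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
train_y = [0, 0, 1, 1]

# Held-out test points check that the model generalizes to new data,
# mirroring the training/testing/validation steps in the definition.
test_X = [(0.05, 0.1), (1.05, 0.95)]
predictions = [nearest_neighbor_predict(train_X, train_y, x) for x in test_X]
print(predictions)  # -> [0, 1]
```

Real-world machine learning pipelines add the other stages the definition lists, such as data cleansing and feature selection, before any model is trained.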

Performant risk mitigation for AI and LLMs

Artificial intelligence has existed for several decades, but it is only in recent times that large language models, which are powered by generative pre-trained transformers, have been developed into user-friendly formats. These advancements have seized the attention of the broader population and increasingly fuel generative AI applications that have a growing impact on people's daily lives.

LLMs require re-identification risk mitigation due to dependencies on data sharing

Multiparty AI project... Read More

How facial age-estimation tech can help protect children's privacy for COPPA and beyond

Kids' online privacy and safety have never been timelier. Policymakers around the globe are grappling with how to keep kids — and their data — safe and secure. In the U.S., there have been unprecedented efforts to pass new state laws — some privacy-specific and others targeting online safety, social media access and "age appropriate" content. There are also numerous new and revived federal efforts on the child online privacy and safety fronts. The Entertainment Software Ra... Read More

How machine learning can help small businesses deal with data privacy compliance

Data privacy is one of the leading concerns for businesses seeking to ensure confidentiality and preserve trust. Over the last few decades, the digital footprints of our society have grown exceptionally, but this digital revolution has heightened privacy concerns for individuals. According to Pew Research, 81% of Americans say the potential risks of data collected by companies outweigh the benefits they receive from those businesses.

Challenges in executing privacy compliance for ... Read More

Clearview AI extends database access to US public defenders

Clearview AI has begun allowing public defenders to access its facial recognition database of more than 20 billion facial images, The New York Times reports. The access helped a Florida man prove his innocence against accusations of vehicular homicide. Clearview AI's technology was previously limited to law enforcement, but CEO Hoan Ton-That said public defender access would help "balance the scales of justice." Legal Aid Society's Jerome Greco said the new access is merely an attempt "to push b... Read More

Report: Ukraine's use of facial recognition to identify dead Russian soldiers could backfire

The Ukrainian military’s use of Clearview AI facial recognition to identify dead Russian soldiers may carry negative consequences, warned military technology analysts, The Washington Post reports. Ukraine reported using Clearview AI to scan the faces of more than 8,600 Russian soldiers killed in the invasion. The families of 582 soldiers were directly contacted to inform them of family member deaths, with some outreach including an image of the dead soldier. Some military analysts were concerned... Read More

Report: One in two US adults in a law enforcement facial recognition database

Law enforcement in virtually every U.S. state has access to facial recognition software, Wired reports. It cited a Georgetown Law Center on Privacy and Technology report that said one in two American adults is already in a facial recognition database used to identify criminal suspects. Critics claimed police overuse the technology, as research has demonstrated it misidentifies women and people of color more often than white men. In an op-ed for NJ.com, a group of 16 privacy and civil rights organiz... Read More

Facebook to close its facial recognition system, but will it start a paradigm shift?

Facebook said it plans to shut down its facial recognition system this month and delete the face prints of more than 1 billion users, a change the company said will “represent one of the largest shifts in facial recognition usage in the technology’s history.”   In announcing the change Tuesday, Jerome Pesenti, vice president of artificial intelligence at Facebook’s newly named parent company Meta, said it is “part of a company-wide move away from this kind of broad identification, and toward na... Read More

Will AI and algorithms truly dictate the future of content?

Nostalgia is a hell of a drug. Like many millennials, I have fond memories of watching "Space Jam" as a child. In fact, I saw it on the big screen on a snowy December afternoon in Londonderry, NH, with my best friend at the time. To say that I loved it would be a colossal understatement. Once we got it on VHS, you better believe I wore out that tape. As an adult and above-average film aficionado, I can take off those rose-tinted glasses. Michael Jordan isn't a great actor. The dated pop ... Read More

How to Deal with Facial Recognition and Make it Compliant?

Original broadcast date: 29 June 2021 Facial recognition is at the forefront of media coverage, both because governments are increasingly using it for surveillance and enforcement purposes and because of Clearview’s leaked customer list, which revealed that thousands of businesses are using its facial recognition database for commercial purposes. This makes it key to consider how the technology can be developed and used in a compliant manner. Other relevant rules include those on monitoring behaviors and automated decision-making. Our panel addresses key questions that must be resolved, such as: how to ensure transparency, and is consent a viable option? This session will provide you with the views of experts from the public sector, the industry and the legal world. Read More

More than Face Value: Facial Recognition Technology & Privacy

Original broadcast date: 15 June 2021 With the increasing adoption and deployment of biometric technology by the private sector and government, the privacy implications of facial recognition technology have come squarely into the public eye. A number of sobering media reports and high-profile cases have resulted in pledges by tech companies, announcements of new legislation in various jurisdictions and increased scrutiny by regulators. A panel of Canadian privacy regulators discusses recent investigations into private sector uses of FRT (including Clearview AI and Cadillac Fairview), engagements involving law enforcement, guidance for private and public sector uses of FRT, and legislative considerations impacting FRT. This session is of practical value to organizations developing, using or contemplating the use of FRT in the commercial or law enforcement spheres. Read More

Assessing proper regulatory models for facial recognition

The rise of facial recognition is no longer a matter of if or when. No country is immune to the deployment of technology that collects biometric information. What remains in question is the best way to address the use of such tech from a privacy and data protection standpoint. Whether through fines, notices, warnings or guidance, data protection authorities in different parts of the world have approached facial recognition in different ways. During an IAPP Global Privacy Summi... Read More

Facial Recognition Tech and Privacy: Who Are You? Everyone Wants to Know

Original Broadcast Date: April 2021 This LinkedIn Live is part of the IAPP Global Privacy Summit Online 2021 web series. With the increasing adoption and deployment of biometric technology by private sector and government, the privacy implications of facial recognition technology have come squarely into the public eye and consciousness. A number of sobering media reports and high-profile cases have resulted in pledges by tech companies, announcements of new legislation in various jurisdictions... Read More

Chatbots and privacy rights after death: Once again, life imitates art

In January 2021, Microsoft secured a technology patent that went mostly unnoticed. In a story pulled straight from Netflix's "Black Mirror," Microsoft's patent detailed a method for "creating a conversational chatbot modeled after a specific person," by culling the internet for the "social data" of dead people — images, posts, messages, voice data — that could then be used to train their chatbots. Setting aside the creepy factor, there are some legitimate privacy questions to be answered here. ... Read More

Banks begin trialing tech to monitor employees

Reuters reports on the use of technology by banks in the U.S. to monitor employees. City National Bank of Florida, JPMorgan Chase and Wells Fargo have rolled out technology to keep tabs on their staff, including camera software equipped with facial recognition features. City National Chief Information Security Officer Bobby Dominguez said the bank would not "compromise our clients' privacy" through its use of the tech. Read More

Facial recognition on rise to authorize mobile payments

An analysis by Juniper Research found billions of smartphone users will be using facial recognition and other biometric authentication technologies in the coming years to authenticate payments made through smart devices, ZDNet reports. The research found 95% of smartphones globally will have biometric capabilities by 2025, and users’ characteristics will authenticate more than $3 trillion in payment transactions. Lead Analyst Nick Maynard said hardware-based systems will be more secure than soft... Read More

Subway facial recognition system raises data security, privacy concerns

Consumer rights groups have initiated a civil lawsuit claiming a subway facial recognition system in São Paulo, Brazil, will not sufficiently protect user privacy, ZDNet reports. The technology will scan the faces of 4 million passengers daily, and the groups said its impact is unknown. A study on database security has not been conducted, and data protection policies, specifically regarding children and teenagers, have not been developed, they said, calling the system “inefficient and dangerous.” Read More

ACLU files class-action vs. Clearview AI under biometric privacy law

Yesterday, the American Civil Liberties Union of Illinois filed a lawsuit against Delaware-based Clearview AI for violating, in an "extraordinary and unprecedented" way, Illinois residents' privacy rights.  The complaint calls for Clearview to stop capturing and storing individuals' biometric identifiers: namely, faceprints. "Using face recognition technology, Clearview has captured more than three billion faceprints from images available online, all without the knowledge — much less the consen... Read More

NZ police test Clearview AI tech without consultation

The New Zealand Police piloted Clearview AI's facial-recognition technology before discussing its potential deployment with the force's leadership or New Zealand Privacy Commissioner John Edwards, NZ Herald reports. Detective Superintendent Tom Fitzgerald admitted the police went through with "a short trial" earlier this year. Edwards said he was "a little surprised" the trial proceeded without a formal review from him or police hierarchy. Read More

Facial recognition to monitor pedestrians at Texas border crossing

U.S. Customs and Border Protection will begin using biometric facial-comparison technology to monitor pedestrians traveling through the Brownsville, Texas, border crossing, Government Technology reports. The technology will photograph each pedestrian traveler entering the U.S. and compare that image to passport and ID photos stored in government records. Privacy advocates argue the program violates travelers’ privacy rights, adding CBP is not following an opt-out policy for U.S. citizens. Read More

Web Conference: Machine Language, Artificial Intelligence and Usage Data

Original broadcast date: March 24, 2020 Many technology relationships no longer fit the binary controller-to-processor or business-to-service-provider categorization; the potential roles that arise from secondary data use challenge it. From these new relationships, technological, ethical, practical and legal challenges all emerge. There is often much value to be gained from secondary data use, so the demand for it is high. When embarking on these secondary uses, though, ostensibly innocuous practices to learn about product use, improve the services generally, or teach algorithms to make decisions become highly contentious issues causing significant contractual friction. Vendors want to perform their agreed-upon processing but also derive value from learning and making their own insights with the data sets. Development teams view this as "essential," the board room sees it as gaining strategic advantage, and legal teams want to ensure compliance. Join us as we explore this conundrum. Read More

Facial-recognition cameras installed at Irish pitch in 2018

The Dublin InQuirer reports closed-circuit TV cameras equipped with "deep learning" technology were installed at the Bluebell Road football pitch in 2018. The cameras use different artificial intelligence techniques to identify individuals, including facial recognition. Documents revealed the Dublin City Council did not conduct a data protection impact assessment before the cameras were put in place. A spokesperson for the council said it will perform a DPIA. Read More

Breaches at our front door: What we can learn from Clearview AI

A new competitor has entered the ring to dethrone Cambridge Analytica as the biggest privacy scandal of recent times: Clearview AI. In case you missed it, Clearview AI is a facial-recognition app that scraped millions of photos from the web to help law enforcement identify unknown people. Not long after The New York Times exposed it as the company that might end privacy as we know it, the plot thickened. The company was breached. In response to the incident, Clearview AI observed that “data bre... Read More

Report: Investors, clients used Clearview AI's facial-recognition tech

The New York Times identified multiple Clearview AI investors, clients and friends who freely accessed its facial-recognition technology for more than a year. These individuals used the technology “at parties, on dates and at business gatherings, giving demonstrations of its power for fun or using it to identify people whose names they didn’t know or couldn’t recall.” Clearview Founder Hoan Ton-That said trial accounts were provided to investors and partners to test the technology. Read More

NYT podcast: Kashmir Hill discusses Clearview AI reporting

On the latest episode of The New York Times' "The Daily" podcast, Technology Reporter Kashmir Hill joins host Annie Brown to talk about Hill's recent string of reports on Clearview AI and the privacy issues surrounding its facial-recognition database. Editor's note: IAPP Editorial Director Jedidiah Bracy, CIPP, looks at whether the Clearview story will change the debate around facial-recognition technology in this piece for Privacy Perspectives. Read More

Facial-recognition technology: fundamental rights considerations in the context of law enforcement

This paper from the European Union Agency for Fundamental Rights explores fundamental rights implications that should be taken into account when developing, deploying, using and regulating facial-recognition technologies. It draws on recent analyses and on data and evidence from interviews conducted with experts and representatives of national authorities that are testing facial-recognition technologies. The last sections provide a brief legal analysis summarizing applicable European Union and Coun... Read More

China uses DNA to create human facial images

Officials in Tumxuk, China, have gathered blood samples from hundreds of Muslim detainees as scientists attempt to use a DNA sample to create an image of a person’s face, The New York Times reports. “In the long term, experts say, it may even be possible for the Communist government to feed images produced from a DNA sample into the mass surveillance and facial recognition systems that it is building, tightening its grip on society by improving its ability to track dissidents and protesters as w... Read More

Platform uses AI, bots to automate compliance practices

Securiti.ai President and CEO Rehan Jalil looked at how organizations conducted their privacy compliance practices and saw areas that could use improvement. He saw solutions that only handled part of an entity’s compliance efforts and a lot of work privacy professionals had to conduct manually. It is why Jalil and his company launched Privaci.ai, a platform designed to automate compliance tasks through the use of artificial intelligence and bots. Privaci was created to tackle data subject acce... Read More

Encryption isn’t enough: Why conversational AI requires more

Expectations of privacy have never been higher among consumers, which means privacy has never been a greater imperative for businesses and their executive teams. At the same time, companies are increasingly under pressure to improve overall customer experiences and customer service via advanced technologies and automation, including conversational artificial intelligence interfaces. These solutions can deliver responsive, on-demand experiences to customers, as well as tremendous cost efficienci... Read More

Additional News and Resources

The EU AI Act: 'We have a deal!' Now what?

Over the weekend, EU co-legislators reached a political agreement on the Artificial Intelligence Act more than two and a half years after it was initially proposed. During this #LinkedInLive, the IAPP's Caitlin Fennessy, CIPP/US, will moderate a conversation between Ashley Casovan (Managing Director, AI Governance Center, IAPP) and Isabelle Roccia, CIPP/E, (Managing Director, Europe, IAPP). They will discuss the significance of the announcement, how we got there, what was agreed upon and what the next steps are. Read More

Luca Bertuzzi on the EU AI Act's political deal and what's next

After a grueling trilogue process that featured two marathon negotiating sessions, the European Union finally came to a political agreement 8 Dec. on what will be the world's first comprehensive regulation of artificial intelligence. The EU AI Act will be a risk-based, horizontal regulation with far-reaching provisions for companies and organizations using, designing or deploying AI systems. Though the so-called trilogue process is a fairly opaque one, where the European Parliament, European Co... Read More

EU reaches deal on world's first comprehensive AI regulation

After three days of intense negotiations, the European Union reached a political agreement 8 Dec. on the Artificial Intelligence Act, which would be the world's first comprehensive regulation of AI.  The trilogue process between the European Commission, Council of the European Union and European Parliament stretched on for more than 32 hours over the course of a three-day period last week, with negotiators announcing the deal late Friday night.  European Commission President Ursula von der Ley... Read More

Urgency to deploy AI cannot be at the expense of effective governance

Artificial intelligence has undoubtedly jolted the market and pushed companies to address and adapt. In my conversations with customers, partners, policymakers and industry peers about this remarkable moment in time, there is a clear recognition of the need for AI policies and governance, but most are still working to put them in place. This sentiment is validated and quantified by the new Cisco AI Readiness Index. In a survey of more than 8,000 private sector, business and IT leaders across 30... Read More

Voice actors and generative AI: Legal challenges and emerging protections

The disruption from generative AI within the entertainment industry is palpably clear. The Writers Guild of America recently reached a landmark settlement with the Alliance of Motion Picture and Television Producers on — among other things — the future acceptable uses and restrictions of artificial intelligence in the screenwriting process. Along with more recently settled agreements with the Screen Actors Guild, these discussions have been the first major attempts to systematically address the ... Read More

In an AI-powered world, marketers need a new data strategy

Consumer data is the lifeblood of modern marketing — but in a world powered by artificial intelligence, leveraging data effectively while avoiding costly slip-ups has never been more challenging. Today's marketers deal with consumers who know the value of their data and expect to be treated with respect by the brands they permit to use it. They also have to navigate a fast-changing regulatory landscape patrolled by muscular privacy enforcers. Simultaneously, marketers have to adapt to an indu... Read More

AI governance as an enterprise business initiative

Artificial intelligence governance is gaining momentum. On 30 Oct., U.S. President Joe Biden announced a sweeping executive order to provide more far-reaching federal scrutiny on the use of AI across industries. Earlier this month, at the IAPP's AI Governance Global 2023 in Boston, large companies spoke about the criticality of building formal governance processes around the development and deployment of AI systems to mitigate risk. Conversations around the potential risks of launching AI have ... Read More

Five compliance best practices for a successful AI governance program

The artificial intelligence regulatory landscape is quickly shifting. Most recently, U.S. President Joe Biden's administration issued an executive order on AI, G7 leaders agreed on a set of guiding principles for AI and a voluntary code of conduct for AI developers, and the EU AI Act could become the world's first comprehensive AI regulation. As companies around the world develop and deploy this technology, they are closely watching regulatory developments and recognize the urgency of building ... Read More

Training AI on personal data scraped from the web

A cautionary tale is unfolding at the intersections of global privacy, data protection law, web scraping and artificial intelligence. Companies that deploy generative AI tools are facing a "barrage of lawsuits" for allegedly using "enormous volumes of data across the internet" to train their programs. For example, the class action lawsuit PM v. OpenAI LP, filed in San Francisco federal court in late June, claimed OpenAI uses "stolen private information, including personally identifiable informa... Read More

Establishing governance for AI systems

While various countries are debating the regulation of artificial intelligence, few have implemented any plans. As the race to develop AI tools intensifies, organizations are grappling with the need for governance structures that can manage risks without stifling innovation. Amid this landscape, organizations have realized the urgency of creating a framework for direction, control and monitoring of AI tools. This is where challenges arise. How can organizations build AI governance structures th... Read More

The Alignment Problem in AI (AI Governance Global, an IAPP event 2023)

The alignment problem in artificial intelligence raises important questions about ensuring that AI foundational models and systems align with human values. This panel examined how evolving safety research can be effectively integrated into AI governance and looked at the current state of play of the EU Artificial Intelligence Act with regard to foundational models. The panel discussed how codes of conduct, standards and oversight can support alignment. What can we learn from other relevant frameworks, such as the EU General Data Protection Regulation or the novel systemic risk mitigation and co-regulatory approach in the EU Digital Services Act? Read More

Responsible AI (AI Governance Global, an IAPP event 2023)

Today, organizations developing and integrating artificial intelligence systems are generally familiar with the responsible AI principles that have informed governance frameworks globally. But how are organizations translating those principles into practices? What is working well? What challenges have AI governance professionals faced turning principles into practical reality? Learn from leaders in the field about ongoing efforts to implement responsible AI on the ground. Read More

No AI without IP (AI Governance Global, an IAPP event 2023)

Our traditional notions of intellectual property are not prepared for artificial intelligence. As every company navigates policies around the use of generative AI, even bigger debates are raging. Who owns the rights to AI-generated content? How can we harmonize intellectual property laws across different countries? In this panel, experts from across the intellectual property community reviewed these and other challenges to bring order to the evolving chaos. See the lively discussion on the operational, legal and ethical aspects of intellectual property in the context of AI, including generative AI and other emerging technologies. Read More

AI Leadership in Action (AI Governance Global, an IAPP event 2023)

This panel is a window into the efforts of industry leaders to leverage existing governance procedures in privacy and other domains as they tackle artificial intelligence governance challenges. Panelists shared examples and lessons learned from engaging with product teams and the C-suite to ensure AI systems meet privacy requirements, while also addressing the additional equities and risks at issue from the training and deployment of AI. Ohio State University Moritz College of Law professor Dennis Hirsch shared his knowledge of governance and accountability by highlighting the diverging paths companies are taking to prepare for a regulated AI future. What are the biggest risks we face? In comparing and contrasting these expert perspectives, other professionals can better understand how to adapt their own internal structures to the demands of new risks. When responsibility is shared, how do we make sure we get it right? Read More

Keynote Panel: Moderator Jennifer Strong, Julie Brill, Keith Enright, Christina Montgomery, Rob Sherman (AI Governance Global, an IAPP event 2023)

Listen to technology thought leaders and senior executives from major technology companies to learn about their innovative approaches to artificial intelligence governance. Each panelist shared their perspective on the emerging role of this multidisciplinary field and offered their ideas on how to build safe, secure, trustworthy AI and innovation systems that protect privacy. Read More

What's next for US state-level AI legislation

While the picture is becoming clearer on the U.S. response to artificial intelligence policy, there's much to be learned about whether U.S. states will follow the federal path or go their own way. Government use, algorithmic discrimination and so-called "deepfake" election advertisements are among the top AI priorities for state lawmakers heading into the 2024 legislative season, state Sen. James Maroney, D-Conn., told attendees of the inaugural IAPP AI Governance Global. AIGG is a first-of-its-kind... Read More

US policymakers, regulators discuss future of AI regulation at AIGG

It's been an earth-shattering week for those following policy developments in the artificial intelligence governance space. Monday kicked off with a sweeping executive order from U.S. President Joe Biden on AI safety, security and trust. The G7 issued a code of conduct just days after the United Nations named its new AI advisory board, and this week the U.K. is hosting its AI Safety Summit, issuing the Bletchley Park Declaration. Running in tandem with these developments, the IAPP is hosti... Read More

US Senate subcommittee focuses on AI in the workplace

Artificial intelligence's capacity to upend employees' relationships with the workplace is on its way to becoming a reality. The real questions are how soon and at what scale it will occur. A 31 Oct. hearing by the U.S. Senate Committee on Health, Education, Labor and Pensions' Subcommittee on Employment and Workplace Safety explored multiple angles of the potential conundrums AI could raise related to employees' workloads and general employment or hiring processes. U.S. Sen. J... Read More

Leaders of CDT's new AI Governance Lab want to shape policy dialogue

With the U.S. seeing a rush to regulate artificial intelligence at the state and federal levels, policymakers are faced with the difficult task of defining baseline standard operations.  For Miranda Bogen and Kevin Bankston, it is an opportunity to make sure those baselines are ethical. That's one of a few goals for the Center for Democracy and Technology's new AI Governance Lab, which the two Meta veterans launched recently. Bogen is the founding director and Bankston acts as the senior adviso... Read More

UN establishes global AI advisory board

The United Nations launched its High-level Advisory Body on Artificial Intelligence, composed of government, private sector and civil society experts. The group looks to "foster a globally inclusive approach … to undertake analysis and advance recommendations for the international governance of AI." Full story ... Read More

White House rolls out comprehensive executive order on AI

After months of speculation, U.S. President Joe Biden released the federal government's first comprehensive action around artificial intelligence. Among the top priorities in the executive order released 30 Oct. are standards around privacy, security and safety, according to a fact sheet released prior to the text of the order. The sweeping order also seeks to prevent discrimination in systems while trying to protect workers' rights when the technology is used in the workplace. "Biden is rolli... Read More

What AI Governance Leaders are Thinking About

In July 2023, the IAPP AI Governance Center convened a group of leaders from industry, government, academia and civil society in Portsmouth, New Hampshire, to discuss the future of AI and responsible governance. This infographic presents some of the most prominent themes and questions identified through the discussions. Read More

Data quality, privacy joined at the hip

The AI revolution has fundamentally changed how we understand data. A decade ago, software was viewed as the core differentiator for companies. That view has shifted. Now, data, when collected and deployed smartly and responsibly, can provide that substantial competitive edge, and we can think of artificial intelligence as the catalyst that will take this data-driven economy to the next level. How data gets applied to software and how businesses manage risk and capture value from AI initiatives ... Read More

Examining India's efforts to balance AI, data privacy

Enterprises and individuals worldwide find themselves amid a historical tipping point as artificial intelligence continues to transform how organizations and customers conduct business. AI-based technologies are poised to reshape industries, enhance operational efficiencies and improve overall quality of life. However, as AI integration becomes more pervasive, it also brings forth significant privacy concerns that demand careful consideration. From a legal and regulatory perspective, the recent... Read More

AI's emergent abilities a 'double-edged sword'

In recent months, the focus on artificial intelligence shifted to generative pre-trained transformers that rely on large language models and tools, such as OpenAI's ChatGPT or Google's Bard, as they became widely available to the public. Generative pre-trained transformers are AI models specifically trained to understand and generate human-like text and process vast amounts of textual data. With recent developments of LLMs came the phenomenon of "emergent abilities." Emergent abilities are un... Read More

Performant risk mitigation for AI and LLMs

Artificial intelligence has existed for several decades, but it is only in recent times that large language models, which are powered by generative pre-trained transformers, have been developed into user-friendly formats. These advancements have seized the attention of the broader population and increasingly fuel generative AI applications that have a growing impact on people's daily lives. LLMs require re-identification risk mitigation due to dependencies on data sharing. Multiparty AI project... Read More

Evaluating the use of AI in privacy program operations

The privacy implications and questions surrounding artificial intelligence dominate discussions among many privacy professionals. How do we untrain an AI model previously trained on personal information in response to a data subject request? How do we explain how a particular AI model processes personal information in our privacy notice? What role does the privacy team play in AI governance? How do we secure our legitimate interests or consumer consent to process data in an AI model, and what do... Read More

Building effective AI through collaboration

The need for cross-departmental collaboration when deploying artificial intelligence models is not just advisable. It's essential. As head of data privacy and product compliance at Collibra, it is my job to make sense of the emerging AI legal and regulatory landscape and to interpret its implications for our business.  But this is not work I can do alone. I need input from a range of stakeholders to get a full picture of the proposed AI use case — its intended purpose, leveraged data, outputs ... Read More

AI incident response plans: Not just for security anymore

It's a given that artificial intelligence systems will fail. What shouldn't be inevitable is a catastrophic outcome to that failure. For organizations that design, develop, sell or operate AI systems, preparing for the day a failure happens is a must — and can make all the difference between a timely, controlled response and chaos. Types of failures and risks unique to AI systems: Based on a breakdown of the most common failure modes among reported incidents on the AI Incident Database, AI fail... Read More

Argentina's AAIP creates AI transparency and protection of personal data program

On 4 Sept., Argentina's data protection authority, the Agency of Access to Public Information, published Resolution No. 161/23, which created the Transparency and Protection of Personal Data Program in the use of Artificial Intelligence. The Director of the AAIP, Beatriz Anchorena, said, "We continue to strengthen institutional capacities to incorporate transparency and the protection of personal data … in the development and use of artificial intelligence." The program's general objective is ... Read More

Facebook, Google may have unreleased biometrics capabilities

The New York Times reports engineers at Google and Facebook created tools several years ago that they claim could recognize and name any face, but have yet to release them. During an internal meeting in 2021, Meta Chief Technology Officer Andrew Bosworth said face recognition technology with the ability to identify an individual at a dinner party, for example, was "hugely controversial" and widespread access to it was "a debate we need to have with the public." Full story... Read More

UK digital regulators discuss interagency enforcement, AI governance coordination

As the U.K. sets out to develop artificial intelligence regulations, as well as pending legislation for online safety, data security and privacy, a key question is what form its regulatory scheme will take to account for legislative changes and technological advances driven by AI. U.K. Information Commissioner's Office Executive Director of Regulatory Risk Stephen Almond said both the proposed Online Safety Bill and the Data Protection and Digital Information Bill, if passed, will present dig... Read More

London calling: Digital regulation and AI governance

The UK Digital Regulation Cooperation Forum brings together four UK regulators — the Competition and Markets Authority, the Information Commissioner’s Office, the Office of Communications and the Financial Conduct Authority. It was established to deliver a coherent approach to digital regulation and to ensure a greater level of cooperation given the unique challenges posed by the regulation of online platforms. It has been particularly active on artificial intelligence governance in the run-up to the UK's AI Safety Summit later this year, and has also issued guidance across the spectrum of digital regulation, such as on age-assurance technologies. Join IAPP's Joe Jones in conversation with the new and first CEO of the DRCF, Kate Jones, and the ICO's Executive Director of Regulatory Risk, Stephen Almond, to learn about the work and priorities of the DRCF on AI governance and beyond. Read More

Mila's Gauthier: AI is 'happening now,' moving 'fast'

As technological advances and uses of artificial intelligence have skyrocketed, there has been speculation the burgeoning technology will be bigger than the advent of the iPhone, the internet or even electricity. Justine Gauthier, director of corporate and legal affairs and privacy officer at Mila - Quebec Artificial Intelligence Institute, a Montreal-based nonprofit research institute specializing in AI, said she doesn't know if she would go so far as to say the technology's impact will surpas... Read More

Contentious areas in the EU AI Act trilogues

The European Union's Artificial Intelligence Act is on track to become the world's first comprehensive regulation of this emerging technology. As a first mover, and by virtue of the "Brussels Effect," the AI Act may be talked up as one of the global standards for the regulation of AI — much as the EU General Data Protection Regulation has been for the regulation of data protection. Following a series of amendments adopted by the European Parliament in June, the final legislative deliberations of... Read More

OpenAI launches ChatGPT Enterprise

OpenAI announced it is launching ChatGPT Enterprise, a version offering "enterprise-grade security and privacy, ... advanced data analysis capabilities," and more. OpenAI said the release "marks another step towards an (artificial intelligence) assistant for work that helps with any task, is customized for your organization, and that protects your company data." The company said businesses "own and control" data in ChatGPT Enterprise, it is not trained on business data or conversations, and conv... Read More

Poll: Americans want federal regulation of AI

A poll of 1,001 registered U.S. voters conducted by the Artificial Intelligence Policy Institute found the majority want federal regulation of AI, ZDNet reports. Fifty-six percent of those polled support federal regulation and 82% said they do not trust technology leaders to tackle regulation independently. Sixty-two percent reported concerns about AI, with 86% saying they believe it could "accidentally cause a catastrophic event." Editor's note: Explore the IAPP AI Governance Center and subscri... Read More

5 things to know about AI model cards

Every conversation about artificial intelligence nowadays seemingly ends with a warning to balance innovation with responsible governance. One might even say with great computing power comes great responsibility. It is clear why responsible AI is important. Fear of discrimination, the inability to distinguish misinformation and even the future of TV rides on the safe and transparent use of the technology. Organizations developing and deploying AI, specifically generative AI, have turned to model... Read More

AI vs. privacy: How to reconcile the need for sensitive data with the principle of minimization

It can feel like something of a catch-22. In the interest of good privacy practices, companies limit or avoid the collection of sensitive data, such as race or ethnicity, but then realize that without it, they are less able to engage in adequate bias testing. It is not unusual for us to begin artificial intelligence audits with corporate clients where their data scientists and lawyers are at a standstill over how to test their AI models adequately.  For that reason, there appears to be an inher... Read More

Beyond GDPR: Unauthorized reidentification and the Mosaic Effect in the EU AI Act

A key concern in today's digital era is the amplified risk of unauthorized reidentification brought on by artificial intelligence, specifically by the large and diverse data sets used to train generative AI models, such as large language models. However, these risks can be effectively mitigated. By adopting technology solutions that uphold legal mandates, organizations can harness the power of AI to realize commercial and societal objectives without compromising data security and privacy. This ... Read More

Data protection issues for employers to consider when using generative AI

The worldwide generative artificial intelligence boom runs parallel with a growing network of privacy legislation around the world. This collision is of great interest to employers who manage sensitive employee data and, subsequently, risk legal noncompliance as they deploy generative AI tools. Littler Mendelson Shareholder Zoe Argento, CIPP/US, outlines three key areas to bear in mind within the balancing act. Full story... Read More

AI regulatory enforcement around the world 

Who you gonna call (and who's gonna call you)? With artificial intelligence-powered products flooding the market and new draft AI regulations emerging worldwide, regulators are scrambling to clarify their fields of competence and enforcement, and to call for or initiate reforms to better marshal their resources to meet the AI governance challenge. Whether the task will fall to existing agencies or newly created, potentially supranational enforcement bodies is an issue playing out in real time. It is... Read More

Spilling the tea on AI accountability: An analysis of NTIA stakeholder comments

Friday afternoons are generally not a time for significant news to break, but when leading executives at seven major U.S. artificial intelligence companies recently met with President Joe Biden at the White House to adopt a commitment toward a shared voluntary framework governing new standards for privacy, security and accountability in the development and deployment of powerful AI, it was a big deal. Gathering for more than just a photo op, the companies agreed to meaningful, proactive protect... Read More

Canada rolls out generative AI code of practice

The Government of Canada published a code of practice for generative artificial intelligence development and use. In anticipation of lawmakers passing the proposed Artificial Intelligence and Data Act, the government's voluntary code will help potential covered entities "avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada's forthcoming regulatory regime." The code includes principles for safety, fairness, transparency and human oversight. Editor... Read More

Catching up with the co-author of the White House Blueprint for an AI Bill of Rights

As automated systems rapidly develop and embed themselves into modern life, policymakers around the world are taking note and, in some cases, stepping in. Earlier this year, the Biden administration took an early step by releasing a Blueprint for an AI Bill of Rights. Comprising five main principles and expectations for automated systems, and offering a slate of real-world examples of the potential harms and benefits of artificial intelligence, the Blueprint is a must-read f... Read More

Third-party liability and product liability for AI systems

Artificial intelligence-specific legislative and regulatory trends are uncertain and evolving, and it can be difficult to make informed predictions about future oversight requirements. Despite the inconsistent and uncertain changes taking place, it is clear vendors of AI-based systems will need to implement greater controls to manage the risk of their own liability burdens, expand their oversight, and plan around these issues and legal trends relating to third-party and product liability for AI ... Read More

What does AI need? A comprehensive federal data privacy and security law

Artificial intelligence is ushering in a transformative era at an accelerated pace that could fundamentally alter how our society operates. And undoubtedly, it has caught the attention of many members of U.S. Congress, mostly due to fears about how the technology might be misused or the risks associated with it rather than its potential benefits. One key AI fear surrounds data privacy and security.  Recent action on this front came 21 June, when U.S. Sen. Chuck Schumer, D-N.Y., revealed his SAF... Read More

Can AI regulators keep us safe without stifling innovation?

Suddenly, everyone's afraid of artificial intelligence. Geoffrey Hinton, the godfather of AI, used to shrug off concerns about the breakneck pace of AI innovation by quoting Oppenheimer: "When you see something that is technically sweet, you go ahead and do it." Now, though, he's left Google to speak out against the technologies he helped develop while tech bigwigs, including Elon Musk and Steve Wozniak, called for a "pause" in generative AI innovation to give regulators a chance to catch up. S... Read More

Argentina issues recommendations for reliable AI

During the last few months, much has been said about the use of artificial intelligence in all industries. Particularly, many have discussed the use of generative AI and, more precisely, ChatGPT (in its different versions), together with a letter signed by many technology industry leaders calling for precaution in developing and deploying AI tools. In that regard, Argentina does not have specific legislation regulating AI use, development and/or deployment. Although "artificial intelligence" ap... Read More

Connecticut takes a first stab at regulating government use of AI

Artificial intelligence can and has already positively impacted the work of government agencies, making it more efficient to analyze data and make critical decisions related to the issuance of government benefits, hiring and more. However, state agencies that use AI-enabled tools may not necessarily understand them, and there are ample examples of AI systems that render biased, discriminatory or inaccurate outcomes. As state-level legislation continues to focus on the development and use of AI b... Read More

AI: The transatlantic race to regulate

While artificial intelligence has been around for decades, it has only recently captured the public's imagination and entered the mainstream. The exponential growth of AI tools, including chatbots such as ChatGPT, Bard and Dall-E, Google's Magic Eraser for removing distractions and other people from personal photos, the University of Alberta detecting early signs of Alzheimer's dementia through your smartphone, or AmazonGo streamlining in-person grocery shopping, illustrates the continued spread... Read More

Why AI may hit a roadblock under India’s proposed Digital Data Protection Bill

From policymakers and citizens to businesses and privacy forums, everyone is talking about artificial intelligence these days. Though the term has long been in use, many confuse AI with automation, a process so old it was in use as early as 1500 B.C.E., when timekeeping was automated in Babylon and Egypt. Although automation is the start of the roadmap to AI, what arguably differentiates AI from mere automation is data, as it is dependent on the availability and quality of data from which it le... Read More

The latest dimension of the global race for an AI governance framework

Discussions on the need to establish a governance framework for artificial intelligence took off following the public release of ChatGPT last November, which showed the world the impressive pace at which large language models, and generative AI in particular, are progressing. The breakneck speed of AI development prompted some business leaders and technologists to call for a six-month hold on the release of powerful models. However, many have written off the initiative as an attempt by Elon Musk to play... Read More

The Atlantic Declaration: Data bridges, privacy and AI

On 8 June, U.K. Prime Minister Rishi Sunak and U.S. President Joe Biden announced the Atlantic Declaration: A Framework for a Twenty-First Century U.S.-UK Economic Partnership. It is the latest, highest-level (it doesn’t get higher) and most conclusive step in the development of a comprehensive U.S.-U.K. partnership on data and artificial intelligence. Data: Sharing data across borders is a fact of life for all organizations doing business or operating internationally. Yet, doing so ... Read More

The case for appointing an 'AI custodian' for AI governance

With the growing importance of artificial intelligence governance and legislation in the EU and elsewhere, there will be many practical questions regarding the implementation and day-to-day management of AI systems in terms of compliance and trustworthiness. Current frameworks, such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework, and guidelines, such as the European Commission's High-Level Expert Group on AI, clearly describe certain characteristics of tr... Read More

Schumer outlines comprehensive US blueprint for AI regulation

U.S. Congress is prioritizing the establishment of rules for the deployment and use of artificial intelligence. While various proposals have begun to surface in recent weeks, Sen. Chuck Schumer, D-N.Y., has stepped up with arguably the most comprehensive plan yet. Speaking at the Center for Strategic and International Studies, Schumer unveiled a two-part strategy to "move us forward on AI" with "one part framework, one part process." The former component of the strategy is the "Securities, Acc... Read More

In scope or not? An EU AI Act decision tree and obligations

As with a colorblindness hue test, if you stare at the new version of the EU Artificial Intelligence Act for long enough, some patterns form (or maybe you are just losing your vision or your mind). Below is a decision tree to help assess whether you fall in scope. Is it an AI system? Per the pending legislation, an AI system is "a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predicti... Read More
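The scope assessment described in the excerpt above — first asking whether a tool meets the Act's "AI system" definition, then whether it falls under the law — can be sketched as a toy decision helper. The function names, inputs and two-step simplification below are illustrative assumptions for this teaser only, not the article's actual decision tree and not legal guidance; the real analysis involves many more branches.

```python
# Hypothetical, highly simplified sketch of an EU AI Act scope check.
# All names and the two-question reduction are assumptions, not legal advice.

def is_ai_system(machine_based: bool, operates_with_autonomy: bool,
                 generates_outputs: bool) -> bool:
    """First branch: does the tool meet the quoted 'AI system' definition —
    machine-based, operating with varying levels of autonomy, and generating
    outputs such as predictions or recommendations?"""
    return machine_based and operates_with_autonomy and generates_outputs

def in_scope(machine_based: bool, operates_with_autonomy: bool,
             generates_outputs: bool, placed_on_eu_market: bool) -> bool:
    """Second branch (roughly): an AI system falls in scope when it is
    placed on, or its output is used in, the EU market."""
    return is_ai_system(machine_based, operates_with_autonomy,
                        generates_outputs) and placed_on_eu_market

# A plain calculator is machine-based but exercises no autonomy:
print(in_scope(True, False, True, True))   # False
# A chatbot deployed to EU users ticks every box:
print(in_scope(True, True, True, True))    # True
```

The point of the two-function split is that the definitional question and the market-scope question are independent gates — failing either one ends the analysis.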

Launching an AI governance program? Start with your ‘why’

Nearly 30 years ago, the great astrophysicist Carl Sagan wrote, "I have a foreboding of an America in my children's or grandchildren's time, when awesome technological powers are in the hands of a very few and no one representing the public interest can even grasp the issues." His words seem prescient now that a tool as powerful as artificial intelligence is pervasive in our everyday lives. Companies are increasingly leveraging AI in various capacities to drive both revenue growth and operation... Read More

The EU Artificial Intelligence Act: A look into the EU negotiations

Original broadcast date: 31 May 2023 The IAPP presents an update on the EU Artificial Intelligence Act. Proposed by the European Commission in April 2021, the AI Act has been fiercely debated ever since. The European Parliament will formalize its version in June, opening the way for trilogue negotiations with member states and the European Commission to finalize the law. During this LinkedIn Live broadcast, the IAPP's Isabelle Roccia will moderate a discussion between Laura Caroli, Rocco Panetta... Read More

Generative AI: Privacy and Ethical Considerations for Your Business

Original broadcast date: 31 May 2023 This webinar discusses the intersection of privacy and AI and the implications for your privacy program, asking whether privacy is an enabler or inhibitor of innovation in AI and data utilization. Also discussed is how well-crafted and thoughtful data privacy programs can be the gateway for an ethical internet. Host: Marjorie Boyer, Programming and Speaker Coordinator, IAPP Panelist: Al... Read More

Why we do not need to reinvent the wheel for an ethical approach to AI

Artificial intelligence is expected to increase global gross domestic product 14% by 2030. Total AI investment surged to a record high of USD77.5 billion in 2021, up from USD36 billion the year before. However, harms associated with AI result from a combination of human and machine decisions, human training of machine learning and observational training of machine learning from the external world. This is not to say all AI has associated harms, but enough AI applications have the potential for harm... Read More

Europe's rulebook for artificial intelligence takes shape

The European Union has been working on the world's first comprehensive law to regulate artificial intelligence. The file is approaching the finish line two years after the legislative proposal was presented. The EU AI Act has the potential to become the international benchmark for regulating the fast-paced AI field, much like the General Data Protection Regulation inspired data protection regimes in countries worldwide, from Brazil to Japan to India. "We are on the verge of building a real lan... Read More

AI governance, regulation top of mind at IAPP CPS 2023

Like other aspects of the modern economy, privacy stands to be fundamentally revolutionized by the rapid development of generative artificial intelligence and algorithmic decision-making systems. Across numerous sessions at the IAPP Canada Privacy Symposium 2023 last week, attendees took an interest in how organizations ensure they build comprehensive AI governance frameworks, while keeping an eye on how the technology could be regulated in Canada, potentially through the proposed Artificial In... Read More

OPC announces investigation of OpenAI at IAPP CPS 2023

During the opening keynote address at the IAPP Canada Privacy Symposium 2023 on 25 May, Privacy Commissioner of Canada Philippe Dufresne announced his office is launching a joint investigation into OpenAI in concert with several provincial data protection authorities. The Office of the Privacy Commissioner originally opened its own investigation into OpenAI generative artificial intelligence chatbot ChatGPT in April. The OPC will now be joined in the investigation by the Office of the Informati... Read More

BigID's LLM-based BigAI seeks to automate data security, risk management

As the uses of artificial intelligence expand widely across industries, the fields of data security and risk management are emerging sources of innovation. Data discovery and classification provider BigID recently launched BigAI, which applies a new large language model that scans and categorizes organizations' data, leveraging AI to improve data classification while optimizing data security and risk management as an additional service for BigID customers. BigID Chief Marketing Officer S... Read More

US Senate subcommittee explores AI risks, legislative solutions

The U.S. Senate Committee on the Judiciary's Subcommittee on Privacy, Technology and the Law held another hearing probing artificial intelligence 25 July with leading AI academics and the founder of a public benefit AI research company, each of whom offered suggestions for how to potentially regulate AI in a variety of applications. Working as a follow-up to the subcommittee's May hearing on the same matter, subcommittee chair Richard Blumenthal, D-Conn., said the objective of the latest meetin... Read More

Khan: FTC stands ready to 'vigorously enforce' AI

In an op-ed in The New York Times, U.S. Federal Trade Commission Chair Lina Khan said generative artificial intelligence will be "highly disruptive," and the agency "will vigorously enforce" its laws "even in this new market." She said, "We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if w... Read More

Report on responsible AI and privacy governance – discussion of findings

Original broadcast date: 3 May 2023 In this web conference, panelists address why 40% of organizations build algorithmic impact assessments on top of their existing privacy impact assessments, the top three risks your peers consider when deploying AI systems, how AI governance should adapt, practical approaches to AI governance and how existing privacy governance leads the way, and how close alignment with privacy fosters higher AI maturity. Find out what motivates organizations to utilize their privacy experience for responsible AI governance and how to prepare guidelines for AI governance within your organization. Read More

GPS 2023: AI opportunities for privacy professionals

The opportunity for privacy professionals to apply their skills to help organizations identify and manage the risks posed by artificial intelligence tools was a major theme at the recent IAPP Global Privacy Summit 2023 which we attended in Washington, D.C. The particular challenges posed by generative AIs, such as ChatGPT4 and DALL-E, weighed heavily on the minds of many panelists and keynote speakers, including author and generative AI expert Nina Schick and U.S. Federal Trade Commissioner Alv... Read More

What's next for potential global AI regulation, best practices

The rise of artificial intelligence is poised to be the great technological revolution of the early 21st century. At the IAPP Global Privacy Summit 2023, stakeholders, including practitioners, privacy advocates and government regulators, exchanged ideas to better understand AI's technological potentials, best practices for governance, privacy risks and possible iterations of future government regulation. Given the rapid development of AI technologies in a space where existing laws around the w... Read More

As generative AI grows in popularity, privacy regulators chime in

There's no doubt the rapid growth of generative artificial intelligence and large language models like ChatGPT is getting the attention of the privacy profession and taking the business world by storm. During her keynote address at the IAPP Global Privacy Summit 2023 in Washington, D.C., author and generative AI expert Nina Schick demonstrated the eye-opening growth of ChatGPT, pointing out it took only five days to reach 1 million users and two months to reach 100 million users. This g... Read More

Chatbots, AI and the future of privacy

Chatbots are now all the rage. They have been the subject of numerous investigative news pieces and countless Twitter posts, and multiple companies are investing billions of dollars to further develop the technology. We have only reached the tip of the iceberg, but chatbots and other generative artificial intelligence tools are here to stay, and they will inevitably revolutionize how we interact with technology and with each other. Though AI and machine learning are nothing new, generative AI i... Read More

Generative AI: Privacy and tech perspectives

Launched in November 2022, OpenAI’s chatbot, ChatGPT, took the world by storm almost overnight. It brought a new technology term into the mainstream: generative artificial intelligence. Generative AI describes algorithms that can create new content such as essays, images and videos from text prompts, autocomplete computer code, or analyze sentiment. Many may not be familiar with the concept of generative AI; however, it is not a new technology. Generative adversarial networks — one type of gene... Read More

A view from DC: Should ChatGPT slow down?

It may not be a viral dance move yet, but the latest hot trend in tech circles is to call for a slowdown on artificial intelligence development. This week, an open letter from the "longtermist" Future of Life Institute called for a six-month pause on the development of AI systems such as large language models due to concerns around the evolution of “human-competitive intelligence” that could bring about a plethora of societal harms. Scholars agree that caution in the development of advanced alg... Read More

Me, myself and generative AI

As embarrassing as this is to admit to my fellow privacy peers, my Instagram account was recently hacked. In a moment when I wasn’t thinking logically, I clicked on a link a "friend" had sent me (unbeknownst to me, the friend's account had also been hacked). Ten minutes later, I was kicked out of my account and my two-factor authentication had been changed to a different number. I scrambled to text as many people as I could that my account was hacked and not to engage with it until... Read More

UK releases white paper on AI regulatory framework

The U.K. Department for Science, Innovation and Technology published a white paper with its approach to regulating artificial intelligence technologies. The regulatory framework seeks to "build public trust in cutting-edge technologies and make it easier for businesses to innovate, grow and create jobs." The approach consists of five AI principles: safety, transparency, fairness, accountability and governance, and redress. U.K. regulators will roll out guidance within the next 12 months to help ... Read More

Europol report warns against criminal uses of generative AI

Europol published a report warning about the exploitation of OpenAI's ChatGPT and other generative artificial intelligence systems by cybercriminals, Euractiv reports. "While all of the information ChatGPT provides is freely available on the internet, the possibility to use the model to provide specific steps by asking contextual questions means it is significantly easier for malicious actors to better understand and subsequently carry out various types of crime," the report said. Full Story... Read More

Gates calls AI advancements revolutionary

In a GatesNotes blog post, Microsoft founder Bill Gates called developments in artificial intelligence revolutionary. "The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone," he said. "It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it." Full Story... Read More

The case of the EU AI Act: Why we need to return to a risk-based approach

The benefits of using artificial intelligence to address a wide range of societal challenges and improve our way of living are bountiful. AI can empower public- and private-sector organizations to deliver services across a multitude of industries, including health care and medical research, automotive, agriculture, financial services, law enforcement, education or marketing. In science, open-source AI AlphaFold just solved the complex issue of protein folding and can now predict the structure of... Read More

ENISA creates guide for AI cybersecurity standardization

The EU Agency for Cybersecurity released a guide outlining the standardization process for artificial intelligence cybersecurity. The document contains updates on existing standards, as well as those in the drafting process and under consideration. The guide adopts "a broad view of cybersecurity, encompassing both the 'traditional' confidentiality — integrity — availability paradigm and the broader concept of AI trustworthiness." Full Story... Read More

Federated learning: Supporting data minimization in AI

Artificial intelligence applications, such as language translation, voice recognition and text prediction apps, typically require large-scale data sets to train high-performance machine learning models such as deep neural networks. There can be challenges when the data needed to train the model is personal or proprietary. How can ML algorithms be trained on multiple data sets when, potentially, those data sets cannot be shared? With its capability to train algorithms on various data sets without... Read More
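The data-minimization idea described here can be illustrated with a minimal federated averaging (FedAvg) loop: each client fits a model on data that never leaves its silo, and only model weights are shared and averaged. The toy one-parameter linear model, client data sets and round count below are illustrative assumptions, not a production implementation.

```python
# Minimal federated averaging (FedAvg) sketch in pure Python. Each client
# trains locally on data that never leaves its silo; only model weights
# are sent to the server and averaged. Data and model are toy assumptions.

def local_update(weights, data, lr=0.1):
    """One local pass of gradient descent for a 1-D linear model y = w*x,
    using only this client's private (x, y) pairs."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server step: average client models, weighted by local data set size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients whose private data sets both follow y = 2x; the server only
# ever sees model weights, never the raw examples.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w_global = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(w_global, data) for data in clients]
    w_global = federated_average(updates, [len(d) for d in clients])
```

After the communication rounds, the shared model converges toward the true slope (2.0) even though no client ever revealed its training examples, which is the data-minimization property the article describes.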

Generative AI: A ‘new frontier’

When asked to explain the privacy concerns of generative artificial intelligence apps, ChatGPT listed five areas — data collection, data storage and sharing, lack of transparency, bias and discrimination, and security risk — each with a brief description. "Overall, it’s important for companies to be transparent about the data they collect and how they use it, and for users to be aware of the potential privacy risks associated with AI chatbots," ChatGPT said. And it was not wrong. These are am... Read More

NIST's Reva Schwartz on the new AI Risk Management Framework

The prospect of day-to-day life with artificial intelligence is no longer a future endeavor. AI systems comprise countless applications across public and private organizations, and through open-source systems such as ChatGPT, AI is now consumer-facing and usable. The U.S. National Institute of Standards and Technology was directed by the National Artificial Intelligence Initiative Act of 2020 to create a voluntary resource for organizations designing, developing, deploying or using AI systems t... Read More

Microsoft adds AI to search engine, web browser

Microsoft unveiled a new Bing search engine and Edge web browser that offer artificial intelligence chatbots, The New York Times reports. The new Bing version was released to a limited number of users and will expand to millions by the end of the month. "This technology will reshape pretty much every software category that we know," Microsoft CEO Satya Nadella said. ChatGPT creator OpenAI's Chief Technology Officer Mira Murati welcomed regulation of AI chatbot technologies, Time reports. "It'... Read More

Using sensitive data to prevent AI discrimination: Does the EU GDPR need a new exception?

Organizations can use artificial intelligence to make decisions about people for a variety of reasons, such as selecting the best candidates from many job applications. However, AI systems can have discriminatory effects when used for decision making. For example, an AI system could reject applications of people with a certain ethnicity, even though the organization did not plan such discrimination. In Europe, an organization can run into a problem when assessing whether its AI system accidenta... Read More

Cheering emerging PETs: Global privacy tech support on the rise

The data economy is facing a paradox. The exponential increase in the processing of personal data has created a wide array of unprecedented possibilities to gain useful insights via artificial intelligence and machine learning. At the same time, these developments expose individuals to new privacy threats. Against this background, most conferences on privacy trends broach the issue of how emerging privacy-enhancing technologies can support privacy protection in the context of AI and ML. AI head... Read More

Are EU AI Act sandboxes viable without GDPR waivers for experimentation?

The proposed EU Artificial Intelligence Act is anticipated to pave the way for a regulated approach to the future development of artificial intelligence. One means of testing new AI technologies is through regulatory sandboxes created by various data protection authorities around Europe. To explore how AI regulatory sandboxes are helping companies develop their machine-learning models, IAPP Managing Director, Europe, Isabelle Roccia hosted a LinkedIn Live session Dec. 12 with Secure Practice co... Read More

Artificial intelligence: How to play in the sandbox

In this LinkedIn Live event, IAPP Managing Director, Europe, Isabelle Roccia, Secure Practice co-founder and CEO Erlend Andreas Gjære, European Commission Legal and Policy Officer Yordanka Ivanova of DG Connect's Artificial Intelligence Policy Development and Coordination Unit, and Kari Laumann, head of the section for research, analysis and policy and project manager for the regulatory sandbox at Norway's data protection authority, Datatilsynet, discuss the usefulness of AI sandboxes and how proposed legislation may create a new norm for trial spaces. Read More

A look at European Parliament’s AI Act negotiations

The proposed Artificial Intelligence Act would be the first horizontal regulation of AI in the world, but, as always, the devil is in the details. Though the Council of the European Union has nearly completed its version, the European Parliament is still negotiating its own. IAPP Editorial Director Jedidiah Bracy, CIPP, looks at how other EU lawmakers and stakeholders are crafting this massive, precedent-setting legislation. Full Story The Privacy Advisor Podcast: MEP Tudorache unpacks think... Read More

How machine learning can help small businesses deal with data privacy compliance

Data privacy is one of the leading concerns for businesses seeking to ensure confidentiality and preserve trust. Over the last few decades, the digital footprint of our society has grown exceptionally. But this digital revolution raises hard questions about individual privacy. According to Pew Research, 81% of Americans say the potential risks of companies' data collection outweigh the benefits they receive from those businesses. Challenges in executing privacy compliance for ... Read More

The future of AI regulations and how companies can prepare

The development of machine learning and artificial intelligence has gone into overdrive in recent years. Now governments around the world at both federal and local levels are attempting to get out ahead of what may come from the exponential growth of AI’s potential to fundamentally change the digital world. During the IAPP Privacy. Security. Risk 2022 conference, Uber Privacy and Security Public Policy Lead Shani Rosenstock facilitated a panel discussion about the number of the existing and po... Read More

Keynote: Mo Gawdat, former Google [X] chief business officer, author of ‘Scary Smart’ and ‘Solve for Happy’ (IAPP Privacy. Security. Risk. 2022)

From Austin City Limits Live at the Moody Theater, former Google [X] executive and author Mo Gawdat draws his 30 years of technology experience and explores humanity’s relationship with technology. Gawdat probes our coexistence with artificial intelligence as it eclipses human intellect and how we may ensure a symbiotic future. Read More

Greek DPA imposes 20M euro fine on Clearview AI for unlawful processing of personal data

On July 13, Greece’s data protection authority, the Hellenic Data Protection Authority, imposed a fine of 20 million euros on U.S.-based company Clearview AI for violating multiple provisions of the EU General Data Protection Regulation. The fine is more than double the HDPA's previous largest, 9.25 million euros against the largest telecommunications conglomerate in Greece. The decision of the HDPA was issued following a complaint filed by civil nonprofit organization Ho... Read More

CNIL fines Clearview AI 20M euros

France's data protection authority, the Commission nationale de l'informatique et des libertés, issued a 20 million euro fine to Clearview AI for alleged breaches of the EU General Data Protection Regulation. The CNIL began an investigation into a complaint regarding Clearview's facial recognition database and data processing practices in May 2021. The regulator handed down a formal notice to remedy alleged violations in November 2021 that Clearview did not reply to. With the fine, the CNIL also... Read More

AI text-to-image generator tech outpacing ability to shape 'norms' for its use

The use of text-to-image artificial intelligence generators is expanding “faster than AI companies can shape norms around its use,” The Washington Post reports. Researchers are concerned widespread public use of text-to-image generators can create “dangerous outcomes,” such as “reinforcing racial and gender stereotypes” or forging artists’ work. OpenAI, developer of the image generator DALL-E (named after Salvador Dali and Pixar’s WALL-E), does not allow users to replicate images of celebrities and po... Read More

CEPS publishes EU AI Act report

The Centre for European Policy Studies published a research paper on the proposed EU Artificial Intelligence Act. The authors said an agreement could be struck by mid-2023, but it may hinge on the ability of co-legislators to "converge on key issues such as the definition of AI, the risk classification and associated regulatory remedies, governance arrangements and enforcement rules.” The paper presents eight major recommendations to avoid overlapping regulations if the AI Act passes, including ... Read More

Metaverse and privacy

From Facebook's recent decision to rename itself "Meta" to Epic Games' billion-dollar investment in metaverse technologies, the metaverse has dominated the news and will likely continue to do so over the next several years. To date, there is no universally accepted definition for the term "metaverse" and, for many, it suggests a new but undeveloped future of the internet. According to J.P. Morgan, the metaverse is a seamless convergence of our physical and digital lives, creating a unified, virt... Read More

How advanced deep fakes could threaten democracy

Abuse of deep fakes through videos, images and artificial intelligence could pose a threat to democracy, Cincinnati’s 91.7 WVXU FM reports. On the Aug. 8 “Cincinnati Edition” program, the future of deep fakes and the challenge they pose to democratic discourse were examined by Intrust IT Director of Business Growth Dave Hatter, University of Cincinnati School of Public and International Affairs professor and Director Richard Harknett, and Arizona State University School of Computing and Augmented... Read More

Proposed EU AI Act blurs lines between AI developers and data processors under GDPR

The proposed EU Artificial Intelligence Act and its intersections with the EU General Data Protection Regulation could present compliance issues for data compliance officers across the continent, according to IAPP Senior Westin Research Fellow Jetty Tielemans. The AI Act has some similarities with the Digital Services Act and the Digital Markets Act regarding how they clarify the GDPR, Tielemans said during a recent IAPP LinkedIn Live. However, she explained the AI Act differs in that "sensitiv... Read More

UK unveils data reform bill, proposes AI regulation

The U.K. government Monday introduced a pair of post-Brexit data reform initiatives aimed at guiding responsible use of data while promoting innovation in the economy, according to two government releases. In the House of Commons, the government released the Data Protection and Digital Information Bill. In a separate statement, Minister for Media, Data and Digital Infrastructure Matt Warman said the data protection reform bill will help "transform the UK's independent data laws." In parallel... Read More

EU Artificial Intelligence Act Proposal: What could it change?

Original broadcast date: July 12, 2022 In this LinkedIn Live, IAPP Europe Managing Director Isabelle Roccia, IAPP Senior Westin Research Fellow Jetty Tielemans, Criteo Vice President of Government Affairs and Public Policy Nathalie Laneret, CIPP/E, CIPM, and Kai Zenner, Head of Office and Digital Policy Adviser to Member of European Parliament Axel Voss, discuss proposed changes to the EU Artificial Intelligence Act, similarities with existing regulatory structures, what it could mean for the U... Read More

AI and Biometrics Privacy: Trends and Developments

Original broadcast date: 2 June 2022 In this web conference, panelists discuss key developments and trends in the developing area of law, compliance with forthcoming laws, an overview of litigation concerning AI and biometrics, and pending and anticipated legislative and regulatory developments around the world.   Read More

Microsoft unveils framework for responsible AI

In a blog post, Microsoft Chief Responsible AI Officer Natasha Crampton outlined the company’s “Responsible AI Standard,” which eliminates use of automated tools that can infer an individual’s emotional state and attributes like gender, age and other facial features. Crampton said the standard provides goals that teams developing AI systems must meet to uphold values, including privacy, security, transparency and accountability. She called the standard “actionable and concrete,” with “approaches... Read More

Singapore launches first of its kind AI governance testing framework, toolkit

Singapore created the world’s first artificial intelligence governance testing framework and toolkit, called A.I. Verify, according to its Infocomm Media Development Authority. The framework and toolkit aim to promote transparency between companies and their stakeholders by combining tests and process checks. Developers and owners of AI systems can verify their performance against a set of principles through standardized tests. A.I. Verify offers transparency in the areas of safety... Read More

Report: Ransomware gangs may have resources to hire AI experts

WithSecure Chief Research Officer Mikko Hyppönen told Protocol it may only be a matter of time before ransomware gangs are able to deploy artificial intelligence–powered ransomware. Previously, entities that protected against ransomware attacks were the sole parties that could utilize AI technology; however, Hyppönen claimed that is no longer the case. He said the wealth of ransomware gangs may afford them the ability to bring on AI experts to exploit “zero day” vulnerabilities and hire penetrat... Read More

Clearview AI now features 20B facial images

Clearview AI announced its facial recognition software, Clearview 2.0, now features 20 billion publicly available facial images, Biometric Update reports. The images reportedly include photos of suspects, persons of interest and potential victims. Meanwhile, 3,100 law enforcement clients across the U.S., including the Federal Bureau of Investigation and Department of Homeland Security, have purchased Clearview AI software. Clearview AI has come under scrutiny because its software has misidentified peopl... Read More

Inside the EU's rocky path to regulate artificial intelligence

In April last year, the European Commission published its ambitious proposal to regulate artificial intelligence. The regulation was meant to be the first of its kind, but progress has been slow so far due to the file's technical, political and juridical complexity. Meanwhile, the EU lost its first-mover advantage as other jurisdictions like China and Brazil have managed to pass their legislation first. As the proposal enters a crucial year, it is high time to take stock of the state o... Read More

Chinese company develops AI program that predicts when employees will leave job

Chinese company Sangfor Technologies has drawn scrutiny for its AI program that can predict if an employee is about to leave their job, South China Morning Post reports. The program can spy on an employee’s browsing activity, such as viewing job posts and sending application emails. The program came to light after a user on Maimai.cn, a professional networking application, claimed he was fired when his company discovered, using a monitoring system, he was applying to other jobs. Full Story... Read More

UK supermarkets rolling out AI-based age verification

BBC reports U.K. supermarkets have begun trialing artificial intelligence-powered software to automatically verify ages for alcohol sales. Asda, Co-op and Morrisons will use the verification system, with customer consent, to scan faces and guess ages using algorithms trained on a database of 125,000 anonymous faces ages 6-60. Robin Tombs, chief executive officer of system provider Yoti, said the verification technology helps "keep pace with consumer demands for fast and convenient services, whil... Read More

UK government rolls out global AI standards initiative

The U.K. government announced plans to pilot an Artificial Intelligence Standards Hub through the Alan Turing Institute that will "develop educational materials to help organisations develop and benefit from global standards." More specifically, the new hub will seek to "improve the governance of AI, complement pro-innovation regulation and unlock the huge economic potential" following the U.K.'s departure from the EU. According to a blog post from Hogan Lovells, Spain's government is budgeti... Read More

Privacy and responsible AI

Artificial intelligence and machine learning are advancing at an unprecedented speed. This raises the question: How can AI/ML systems be used in a responsible and ethical way that deserves the trust of users and society? Regulators, organizations, researchers and practitioners of various disciplines are all working toward answers. Privacy professionals, too, are increasingly getting involved in AI governance. They are challenged with the need to understand the complex interplay between privacy ... Read More

FBI agrees to licensing contract with Clearview AI

The U.S. Federal Bureau of Investigation signed an $18,000 licensing agreement with Clearview AI to subscribe to the company’s facial recognition technology, CyberScoop reports. Clearview AI has drawn scrutiny for its practices of harvesting millions of photos from social media users across multiple platforms without their knowledge. Beyond Clearview AI’s licensing agreement with the FBI, CyberScoop identified more than 20 federal law enforcement contracts in excess of $7 million for either spec... Read More

NIST releases report on AI risk management framework

The U.S. National Institute of Standards and Technology published a concept paper that reviews stakeholder views on its draft Artificial Intelligence Risk Management Framework. The paper includes public feedback from NIST's prior Request for Information and a workshop held in October. NIST now seeks comment on the scope and approach presented in the paper with an eye toward filing the first draft of the framework early in 2022 and finalizing the framework's first iteration by the start of 2023.F... Read More

FTC takes steps toward privacy, AI rulemaking

As the debate rages on regarding whether the U.S. Federal Trade Commission should or could begin rulemaking on privacy, the commission has signaled it is not willing to wait for a consensus. On Dec. 10, the FTC filed an Advance Notice of Proposed Rulemaking with the Office of Management and Budget that initiates consideration of a rulemaking process on privacy and artificial intelligence. The filing describes the FTC's intent as seeking to "curb lax security practices, limit privacy abuses, an... Read More

US Department of Defense – Responsible AI Guidelines

The U.S. Department of Defense’s Defense Innovation Unit released “responsible artificial intelligence” guidelines, required to be used by third-party developers building AI systems for the military. The guidelines cover planning, development and deployment, and include procedures for identifying users of the technology and those who could be harmed by it, as well as potential harms and how to avoid them. Read More

Web Conference: AI, Algorithmic Risk and What to Do About It

Original broadcast date: 8 November 2021 From setting insurance premiums to deciding who gets a home loan, from predicting the risk of a person re-offending to more accurately diagnosing disease, algorithmic systems, especially those turbo-charged by AI, have the ability to re-shape our lives. As the use of algorithmic systems increases, so too does the need for appropriate auditing, assessment and review. But where should a privacy professional start when assessing the privacy considerations raised by algorithmic systems? What should an Algorithmic Impact Assessment include, look for, or test against? This workshop offers privacy professionals a framework for assessing projects involving AI or algorithms, one that cuts through the technical jargon and management-speak buzzwords. By examining the four stages of an algorithmic system (design, data, development and deployment) we can not only identify the risks to avoid, but also start to explore the positive features to build in to make an algorithmic system trustworthy. Read More
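A stage-by-stage review like the one described is often operationalized as a checklist keyed to each stage. In the sketch below, the four stage names come from the session description, but the questions under each stage are hypothetical examples for illustration, not the workshop's actual framework.

```python
# Illustrative algorithmic impact assessment checklist, organized by the
# four stages named in the session: design, data, development, deployment.
# The questions per stage are hypothetical examples, not a real framework.

AIA_STAGES = {
    "design":      ["Is the system's purpose documented?",
                    "Were affected groups consulted?"],
    "data":        ["Is the training data representative?",
                    "Is personal data minimized?"],
    "development": ["Was the model tested for bias?",
                    "Are decisions explainable?"],
    "deployment":  ["Is there human review of outputs?",
                    "Is there a channel to contest decisions?"],
}

def open_questions(answers):
    """Return (stage, question) pairs not yet answered 'yes'."""
    return [(stage, q)
            for stage, qs in AIA_STAGES.items()
            for q in qs
            if answers.get(q) != "yes"]
```

Anything `open_questions` returns is a risk the assessment has not yet retired, which maps directly onto the workshop's framing of identifying risks to avoid at each stage.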

Web Conference: OK, Computer? Privacy, AI and Automation

Original broadcast date: 4 November 2021 Artificial intelligence and machine-learning enabled systems are being rapidly integrated into every aspect of modern business and government activity, from minor enhancements like spellcheck to fully automated factories, autonomous vehicles, facial recognition systems and automated decision-making. In this panel, hear from regulatory, policy and technical experts working at the intersection of personal information and emerging technologies such as AI, Machine Learning, and Automated Decision Making. This panel considers how these technologies are changing the way that companies and governments are using personal information, current risks and mitigations, impending regulation, and the challenges that the privacy community will have to face in future. Read More

Facebook to close its facial recognition system, but will it start a paradigm shift?

Facebook said it plans to shut down its facial recognition system this month and delete the face prints of more than 1 billion users, a change the company said will “represent one of the largest shifts in facial recognition usage in the technology’s history.”   In announcing the change Tuesday, Jerome Pesenti, vice president of artificial intelligence at Facebook’s newly named parent company Meta, said it is “part of a company-wide move away from this kind of broad identification, and toward na... Read More

Survey: 36% in government sector plan to increase AI investments

A new survey from research agency Gartner found 36% of respondents from various levels of government plan to increase AI and machine-learning investments in 2021. "Striking" among the findings, said Gartner's Public-Sector Research Director Dean Lacheca, is that use of AI within government did not grow during the COVID-19 pandemic. "There is still a lot of talking about the impact it will have and some experimenting and some great point solutions, but adoption is still not widespread," he said. Full S... Read More

Five Things Lawyers Need to Know About AI

This guide, published by the Future of Privacy Forum, discusses AI, and major points and issues attorneys should be aware of, such as algorithmic bias, over-indexing, difficulties in benchmarking an AI system’s performance and preparing for the future of AI. Read More

UK details plans for national AI strategy  

The U.K. Office for Artificial Intelligence, Department for Digital, Culture, Media and Sport, and Department for Business, Energy and Industrial Strategy released details on Britain’s national AI strategy. The agencies set key actions over the next 12 months and beyond, focusing on investment and planning for long-term needs, transitioning to an AI-enabled economy, and encouraging innovation and investment through governance of AI technologies. “AI will be central to how we drive growth and enr... Read More

From Data Compliance to Data Intelligence

Original broadcast date: 1 September 2021 As privacy programs have matured, their focus has shifted from tick-the-box compliance exercises to automated data intelligence. This trend has been driven by the increasing number and complexity of privacy laws, like the upcoming California Privacy Rights Act, as well as the continued emergence of good privacy practices and compliant data use as a competitive differentiator. Data intelligence takes privacy beyond the manually generated Article 30 report and the siloed privacy impact assessment to focus more on a deep understanding of personal data through data discovery & classification, automated identification of privacy risk and policy violations, and integrated enforcement of policies like retention, access and data minimization across the organization’s IT ecosystem. In this privacy education web conference, learn how you can take the first steps towards data intelligence and advance your privacy program to the next phase of automation and maturity. Read More

Will AI and algorithms truly dictate the future of content?

Nostalgia is a hell of a drug. Like many millennials, I have fond memories of watching "Space Jam" as a child. In fact, I saw it on the big screen on a snowy December afternoon in Londonderry, NH, with my best friend at the time. To say that I loved it would be a colossal understatement. Once we got it on VHS, you better believe I wore out that tape. As an adult and above-average film aficionado, I can take off those rose-tinted glasses. Michael Jordan isn't a great actor. The dated pop ... Read More

Hong Kong privacy commissioner publishes AI guidance

The Office of the Privacy Commissioner for Personal Data of Hong Kong published its “Guidance on the Ethical Development and Use of Artificial Intelligence.” The guidance is designed to help organizations comply with the requirements of the Personal Data (Privacy) Ordinance as they develop and use AI. It also sets out seven ethical principles for AI, including accountability, transparency and privacy. Full Story... Read More

DOJ to study use of AI in analyzing prison phone calls

A U.S. House of Representatives panel has asked the Department of Justice to study the use of artificial intelligence to analyze phone calls in United States prisons, Reuters reports. Prisoners’ advocates and families said the technology, which could transcribe inmates’ phone calls and analyze their tone of voice and language, raises accuracy and bias concerns. Proponents said it assists law enforcement and does not target races, genders or other groups. The technology is already in use at facili... Read More

Local facial recognition bans begin to take hold

Cameras seem to be everywhere in New York City, but there is new citywide regulation affecting cameras operated by commercial establishments. On July 9, New York City’s administrative code was amended to include § 22-1201 - 22-1205, which covers biometric identifier information. The new BII law will protect New York City’s approximately 8 million residents and nearly 65 million yearly visitors from the collection, storage, sharing or use of biometric identifiers by commercial establishments wi... Read More

How to manage privacy and AI risks within the same project

Dealing with privacy risk has long been considered a necessity during project and process management, the same as considering technological risk. As artificial intelligence grows in importance and relevant risks become apparent, methodologies, frameworks and regulatory initiatives emerge — from both the private and, most recently, the public sectors — to ensure ethical, societal and regulatory requirements are in place when managing AI risks. This includes managing privacy risks resulting from the... Read More

Handbook on Data Protection and Privacy for Developers of Artificial Intelligence in India

The Data Security Council of India published a handbook outlining best practices for implementing data protection into artificial intelligence technologies from the design stage. The handbook maps out key privacy-by-design principles for developers to consider, including transparency, accountability, mitigating bias, fairness, security and privacy. Additionally, developers are provided tools such as checklists, a compliance map and examples of data security techniques to aid proper implementatio... Read More

Report: A rise in AI-based toys threatens children's privacy

CNBC reports on efforts to develop artificial intelligence-based toys for kids and how they could risk children's privacy. Among the notable types of smart toys are smart companions, which learn and interact with children, and programmable toys that employ machine learning to educate kids. World Economic Forum’s Smart Toy Awards’ Judging Committee Chair and singer-songwriter Will.i.am said AI toys "will be smarter than the parent and gather all this data that could one day hurt the child" after ... Read More

Law enforcement using AI software to patrol social media

Wired reports U.S. law enforcement's use of artificial intelligence software to monitor social media is sparking privacy concerns. Israeli data analysis firm Zencity, contracted to 200 agencies in the U.S., uses machine learning to formulate custom reports from scans of public conversations across social media platforms. Zencity redacts personal information from the reports and does not allow its users to see individuals' profiles. The surveillance tactic remains controversial, with Pittsburgh C... Read More

How to Deal with Facial Recognition and Make it Compliant?

Original broadcast date: 29 June 2021  Facial recognition is at the forefront of media attention, both because governments are increasingly using it for surveillance and enforcement purposes and because of Clearview’s leaked customer list, which showed thousands of businesses are using its facial recognition database for commercial purposes. This makes it key to consider how the technology can be developed and used in a compliant manner. Other relevant rules include those on monitoring behaviors and automated decision-making. Our panel addresses key questions that must be resolved, such as: how to ensure transparency, and is consent a viable option? This session will provide you with the views of experts from the public sector, the industry and the legal world. Read More

Understanding Machine Learning Technology and Developing A Risk-Based Approach

Original broadcast date: June 2, 2021  The rapid expansion of Machine Learning (ML) technology has raised questions regarding ethics, trust, and privacy risks. But what developments should we expect in the future? How should you review privacy notices and conduct assessments regarding your legal basis to process personal data in connection with ML products? What if you receive data subject rights requests involving ML? This session covers the basics of ML technology, how to best review day-to-day ML products for GDPR compliance and how to develop a toolkit for ethical and accountable ML within your organization. Learn how you can leverage the GDPR’s accountability principle to assess the privacy risks of ML solutions and conduct DPIAs. You will hear the perspectives of regulators and engineers in the industry, and gain clarity on relevant legal requirements. Read More

Humans in the Loop: Building a Culture of Responsible AI

Original broadcast date: 24 June 2021  In this interactive privacy education web conference we will describe a case study of how the governance structures of an enterprise privacy program can be extended to bring to life “responsible AI”, a growing area of research merging the concepts from privacy, data ethics and new areas, such as explainable AI. The speakers share knowledge of industry best practices and demonstrate methods for assessing risk in AI projects and for developing a framework for responsible AI. Read More

The EU AI Regulation — What’s New and What’s Not?

Original broadcast date: May 2021 The proposal for an EU artificial intelligence regulation is major news, with many news outlets citing the regulation’s significant impacts and global consequences. But are these obligations really new or already considered best practice? In this session, ADP Chief Privacy Officer Cécile Georges and Morrison & Foerster Associate Marijn Storm discussed the practical impact of the proposed regulation and some existing best practices that could help cover most req... Read More

Wash. county bans facial recognition use by Sheriff’s Office, government agencies

The County Council in King County, Washington, banned the use of facial recognition information or technology by the Sheriff’s Office and other county agencies, The Seattle Times reports. The law enables individuals to sue in cases of violations and does not prohibit use of the technology by private groups or individuals. The Council cited privacy threats and bias in supporting the ban. Councilmember Jeanne Kohl-Welles said the technology “raises huge concerns,” including an “encroachment on civ... Read More

How privacy professionals can assess risks in AI, algorithms and automated decision making systems

From setting insurance premiums to deciding who gets a home loan, from predicting the risk of a person re-offending to more accurately diagnosing disease, algorithmic systems have the ability to reshape our lives. Algorithms are increasingly used to make predictions, recommendations or decisions vital to individuals and communities in areas such as finance, housing, social welfare, employment, education, and justice — with very real-world implications. As the use of algorithmic systems increases... Read More

Machine learning compliance considerations

Technology and business research companies expect the artificial intelligence market will continue expanding significantly in the coming years. AI is not a single technology. The ICO defines AI as "an umbrella term for a range of technologies and approaches that often attempt to mimic human thought to solve complex tasks." In our everyday lives, we may encounter AI in the form of machine learning algorithms in personal assistants, personalized advertisements, fraud detection services, facial re... Read More

Why the EU’s AI regulation is a groundbreaking proposal

On April 21, 2021, the European Commission published its bold and comprehensive proposals for the regulation of artificial intelligence. With suggested fines of up to 6% of annual global turnover, as well as new rules and prohibitions governing high-risk AI systems, the announcement has already generated much interest, with speculation about how it will impact both the technology companies that develop AI systems and the industries that utilize them. Due to the critical role that data plays in... Read More

A look at what's in the EU's newly proposed regulation on AI

On April 21, 2021, the European Commission unveiled its long-awaited proposal for a regulation laying down harmonized rules on artificial intelligence and amending certain union legislative acts. The proposal is the result of several years of preparatory work by the commission and its advisers, including the publication of a "White Paper on Artificial Intelligence." The proposal is a key piece in the commission’s ambitious European Strategy for data. The regulation applies to (1) providers that... Read More

FTC publishes recommendations on AI

The U.S. Federal Trade Commission published recommendations for organizations using artificial intelligence. The agency advises companies to ensure datasets include all necessary populations to avoid outcomes that are "unfair or inequitable to legally protected groups." The FTC also recommends organizations test their AI algorithms to see if discriminatory outcomes are produced, be transparent with their results and open their source code to outside inspection. Full Story... Read More

Augmented Reality and Virtual Reality: Privacy and Autonomy Considerations

A new report from the Future of Privacy Forum outlines recommendations for tackling privacy risks associated with augmented and virtual reality technologies. Researchers offered their suggestions for responsible implementation of extended reality tech through the examination of current and future use cases. The recommendations are aimed at platforms, manufacturers, developers, experience providers, researchers and policymakers. Click To View (PDF) ... Read More

Documents highlight relationship between NYPD, Clearview AI

MIT Technology Review reports the New York Police Department used the services of Clearview AI. Documents obtained via freedom-of-information requests show the NYPD exchanged emails with Clearview AI over a two-year period, during which the police department tested out facial recognition technology provided by the company. The Daily Dot reports the NYPD used Clearview's tech to identify police officers who had been drinking before a fellow officer's funeral. Full Story... Read More

Clearview story highlights potential AI collaboration issues between EU, US

Politico reports on potential collaboration issues between the EU and U.S. regarding artificial intelligence. The European Commission plans to introduce legislation on AI this month, and its stance could conflict with how the U.S. approaches the technology, particularly after a BuzzFeed News investigation found U.S. law enforcement agencies have been using Clearview AI's services. "The illegal use of personal data for facial recognition is not compatible with European fundamental rights and pos... Read More

Army plans facial recognition at automated checkpoints

The U.S. Army is developing a biometric camera system to confirm the identification of drivers entering bases through automated checkpoints, Nextgov reports. The system would compare images of drivers approaching checkpoints with images in a facial biometric database. “The results would be displayed to the guard with a photo of the driver indicating an access granted or access denied response in time to allow uninterrupted vehicle traffic flow for approved users,” an agency announcement states.F... Read More

AI tech on the rise, getting more personalized

The deployment of artificial intelligence technologies continues to grow and is expected to become more personalized over time, The New York Times reports. Machine-learning models are slowly being baked into everyday technologies while also being used for tailoring users' multimedia habits and general health monitoring. Privacy remains a factor in these developments taking hold, but researchers have begun finding workarounds through federated learning and encryption tactics.Full Story... Read More
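The federated-learning workaround mentioned in the item above can be sketched in a few lines. This is a hypothetical toy illustration of the federated averaging idea, not any vendor's implementation: each client runs a few training steps on its own private data, and only the updated model weights, never the raw data, leave the device to be averaged centrally.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A client's local training: a few gradient-descent steps on a
    linear model, using only that client's private data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

def federated_round(global_w, clients):
    """One federated-averaging round: each client trains locally, then
    only the updated weights (never the raw data) are sent back and
    averaged, weighted by each client's dataset size."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return np.average(updates, axis=0, weights=sizes)

# Toy demo: three clients whose private datasets share one true model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, clients)  # w converges toward w_true
```

The privacy-relevant design choice is in `federated_round`: the server sees only weight vectors, so the raw records stay on each client. (Real deployments add protections such as secure aggregation, since weights themselves can still leak information.)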

Web Conference: Less Pain, More Gain: How AI Can Cut Data Breach Response Time and Complexity

Original broadcast date: February 18, 2021  Join us for this privacy education web conference in which we will discuss how using artificial intelligence technology to automate the data breach process can help enterprises manage the complexity of the regulatory landscape, avoid the risks of notifying too late, notifying when not required and notifying the same individuals multiple times. Read More

Researchers' browser extension uses AI to unearth opt-out links

Carnegie Mellon University Professor and Privacy Engineering Program Co-Director Norman Sadeh, CIPT, has noticed certain organizational behavior may depend on whether a company falls under a privacy law that has opt-in requirements versus opt-out. For those who must adhere to privacy rules with opt-in requirements, such as the EU General Data Protection Regulation, Sadeh said the onus is on website operators and service providers to obtain consent, meaning it's safe to assume opt-in links will ... Read More

Global AI projects, local privacy laws embracing privacy by design

In 2023, spending on artificial intelligence systems will reach $97.9 billion, more than 250% of the value in 2019, according to IDC research. “Software is eating the world, but AI is going to eat software” is how NVIDIA CEO Jensen Huang famously put it. This ongoing AI revolution requires collecting enormous amounts of data. Across industries, from healthcare and retail to automotive and public transport, computer vision and video analytics are key techniques in AI to fuel new digital solutions... Read More

Ensuring that responsible humans make good AI

We are seeing accelerating expansion in the range and capabilities of machine aids for human decision making and of products and services embodying artificial intelligence and machine learning. AI/ML is already delivering significant societal benefits, including improvements in convenience and quality of life, productivity, efficiency, environmental monitoring and management, and capacity to develop new and innovative products and services. The common feature of "automation applications" — the ... Read More

The Privacy Advisor Podcast: Carissa Véliz on privacy, AI ethics and democracy

Artificial intelligence, big data and personalization are driving a new era of products and services, but this paradigm shift brings with it a slate of thorny privacy and data protection issues. Ubiquitous data collection, social networks, personalized ads and biometric systems engender massive societal effects that alter individual self-determination, fracture shared reality and even sway democratic elections. As an associate professor at the University of Oxford's Faculty of Philosophy and the... Read More

Web Conference: The Face-Off — How Regulators Will Take on Facial Recognition Technology

Original broadcast date: Nov. 24, 2020 The EU General Data Protection Regulation sets out an ambitious unified privacy approach for the European Union, but regulatory practice shows facial recognition technologies are treated differently within countries. While a U.K. court found it permissible for the South Wales Police to use facial data to identify individuals at a large football match, Sweden's data protection authority, Datainspektionen, issued a fine of roughly 16,500 GBP to a school board that used cameras in a classroom with the aim of automating the registration process. Examples such as these raise concerns, ranging from privacy to equal treatment. The main issues include lack of transparency and the questionable reliability of algorithms, which could lead to a lack of concise information, biased results and discrimination. Although different jurisdictions may take different tacks, the nature of this technology is global; therefore, national lawmakers and regulators must establish a borderless approach. Hear this roundtable discuss what kind of legal framework may ensure the technology is used in a way that adequately balances concerns with the social and cultural differences among the continents. Read More

AI, a Privacy Odyssey: Conducting Privacy Assessments on AI Projects

Original broadcast date: Nov. 5, 2020 Join this interactive session to walk through an example AI project aimed at mitigating the spread of COVID-19 and learn the process, including the basics of AI, what questions to ask, including about racial and other bias, what laws might apply, how to conduct a privacy impact assessment, and how to assist with privacy engineering solutions. This session is designed to give you practical skills and resources that you can apply in real-time at work. You will hear from an in-house legal privacy leader, a privacy architect at a major tech company, and a partner at a large firm who will bring their real-world experience to guide you through this mock privacy review. Read More

Web Conference: Big Data, Artificial Intelligence and Discrimination in Health Care and Beyond

Original broadcast date: Oct. 12, 2020 This session will review and evaluate the developing law of discrimination in connection with health care and beyond, focusing on how the law defines risks and obligations for big data and artificial intelligence. We will assess the state of the law, identify likely future developments, and provide a roadmap for how companies can navigate this increasingly complicated area. We also will address the question of whether these discrimination issues are best addressed through privacy law or otherwise. Read More

What privacy frameworks can teach us for implementing AI

Artificial intelligence is definitely one of the most important — if not the most important — technologies that will shape our world in the years to come. But with opportunities obviously come risks and challenges. Not surprisingly then, numerous efforts are being made to create frameworks and standards that will help reconcile benefits and potential problems we might face. While all these efforts involve multiple disciplines — and naturally, privacy professionals have a big role to play to make ... Read More

Web Conference: Is Privacy Built in to Your Automated Decisions That Use AI?

Original broadcast date: June 10, 2020 How do you balance privacy and ethics in artificial intelligence as you add data scientists and analysts who aren’t aware of the required policies and controls? How do you ensure your policies cover the uniqueness of this technology? How are you providing transparency to customers who are concerned about how the results affect their lives? Answer: Apply by-design, by-default and transparent security, privacy and ethics frameworks when AI and machine learning are being developed, trained and operated. Join us for this privacy education web conference to find out more.  Read More

ICO: Explaining decisions made with AI

The U.K. Information Commissioner's Office and Alan Turing Institute published guidance to help organizations explain the decision-making process of artificial intelligence. The guidance covers the basics of AI and how technical teams and senior leadership can put their organizations in the best position to properly explain the workings of the systems. Read More

AI camera detects COVID-19 fever

An Austin, Texas-based company’s artificial intelligence camera can detect those who may have a COVID-19-related fever, Fast Company reports. Athena Security’s camera system uses an AI model to view a subject’s inner eye, which can reflect body temperature. The thermal camera records an image of those with a fever. Athena CEO Lisa Falzone said the technology will be seen more in places like airports and hospitals where access depends on an individual’s temperature.Full Story... Read More

Spanish DPA issues guide on AI-based data processing

Spain's data protection authority, the AEPD, published guidelines on how the EU General Data Protection Regulation addresses data processing that uses artificial intelligence technologies. The AEPD wrote the guide aims "to address some of such concerns regarding privacy compliance and to point out the more relevant aspects regarding the design and implementation of [AI-based processing] from the point of view of GDPR." Areas of focus within the guidelines include the "legal basis for the processi... Read More

Takeaways from new White House annual report on AI

The Trump administration's Office of Science and Technology Policy has released its inaugural report on artificial intelligence. The assessment comes a year after the White House launched the American AI Initiative under Executive Order 13859, which "focuses the resources of the federal government to support AI innovation," the document states.  The report summarizes the initiative's progress to date and sets forth a "long-term vision" for AI. Notably, the 36-page document mentions the word "pr... Read More

Europe aims to take global lead with strategies on AI, data

On Feb. 19, the European Commission presented its much-awaited proposals on artificial intelligence, a data strategy and Europe’s digital future. Despite huge media speculation in recent weeks on what it would say about facial recognition, the "White Paper on Artificial Intelligence" stopped well short of recommending a two- to three-year ban that had appeared in earlier leaked drafts. Instead, from a data protection point of view, it proposed “requirements aimed at ensuring that privacy and pe... Read More

The Privacy Advisor Podcast: How should we interpret the European Commission's new AI strategy?

On Wednesday, the European Commission released its EU data strategy. As the IAPP's Ryan Chiavetta reported, the document outlines the commission’s five-year plan for “policy measures and investments to enable the data economy.” The commission based its strategy on four pillars, one of which is a cross-sectoral governance framework for data access and use. In conjunction with the release of the data strategy, the commission also published a white paper on AI. In this episode of The Privacy Advisor Podca... Read More

Accelerating AI with synthetic data

The application of artificial intelligence and machine learning to solve today’s problems requires access to large amounts of data. One of the key obstacles faced by analysts is access to this data (for example, these issues were reflected in reports from the Government Accountability Office and McKinsey Global Institute). Synthetic data can help solve this data problem in a privacy-preserving manner. What is synthetic data? Data synthesis is an emerging privacy-enhancing technology that can enable acc... Read More
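As a toy illustration of the data-synthesis idea (not the method described in the article — production synthesizers use richer models such as copulas, generative networks or differentially private mechanisms), one can fit a simple distribution to real data and sample brand-new records from it, preserving aggregate structure without copying any original row:

```python
import numpy as np

def synthesize(real: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Fit a multivariate normal to the real data's column means and
    covariance, then draw fresh synthetic rows from it. No original
    record is reproduced, but pairwise correlations between columns
    are approximately preserved for analysis."""
    rng = np.random.default_rng(seed)
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy "real" dataset: two correlated numeric columns
# (think age and income, purely for illustration).
rng = np.random.default_rng(42)
age = rng.normal(40, 10, size=1000)
income = 1000 * age + rng.normal(0, 5000, size=1000)
real = np.column_stack([age, income])

synthetic = synthesize(real, n_samples=1000)
```

An analyst could then compute correlations or fit models on `synthetic` instead of `real`. A Gaussian fit like this is deliberately crude; its point is only that utility (aggregate statistics) can be decoupled from disclosure of individual records.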

Securiti.ai receives $50M in funding for AI-focused privacy platform

Last summer saw investors and venture capitalists put millions of dollars into the privacy technology market. Securiti.ai was one of the vendors as it closed a round of Series A funding worth $31 million. As the calendar flipped into 2020 and money continued to fuel privacy tech vendors, Securiti.ai once again was able to obtain a new influx of cash. The vendor announced it has received $50 million in Series B funding, led by venture capital firms General Catalyst and Mayfield. "What happened ... Read More

AI a new obstacle for college students in job searches

College students are facing new obstacles in job searches as more businesses use artificial intelligence to vet interns and entry-level employees, CNN Business reports. AI can help businesses quickly conduct video interviews, analyze a candidate’s grammar and facial expressions, and determine their characteristics. College career counselors are educating students about companies that use AI and what they can do to be successful. “Everyone makes snap judgments on students, on applicants, when fir... Read More

Are organizations safeguarding against the risks posed by AI?

Artificial intelligence is advancing at a fast pace. Although it is a useful tool for organizations seeking to increase productivity, it can also create privacy risks. A global study revealed that more than 70% of consumers harbor some sort of fear toward AI. So, are organizations taking the proper steps to consider and mitigate the risks of using AI? The IAPP-EY Governance Report of 2019 revealed that, by and large, organizations are using already-existing tools to meet the new risks posed by ... Read More