
The Privacy Advisor | ICO's McDougall: 'We're losing a battle for trust,' but there's a solution


His opening statement was unequivocal: "We are losing a battle for trust." That is, people are losing trust in modern business and innovation. 

The reason? "Every time we create something cool, we are not bringing people with us. This trust deficit widens and widens."

In his keynote address here at the IAPP Data Protection Intensive in London, U.K. Information Commissioner's Office Executive Director for Technology Policy and Innovation Simon McDougall, CIPP/E, CIPM, CIPT, presented a clear warning to players in the current digital ecosystem.

On the one hand, "we're living in a really exciting world with innovation that brings huge changes to how we live and work," he said. Personalized medicine is extending lives. Smart cities are curbing air pollution and organizing traffic flows. 5G telecommunications networks are about to "radically change our interaction with our phones and devices." 

But at the ICO, he warned, "we're seeing issues around this trust deficit." For example, the agency's annual track survey found only 15 percent of those surveyed have a high level of trust in social media. "Only 10 years ago," McDougall said, "tech firms could do no wrong." The wow-factor era for the latest smartphone or online search capability has gone the way of the dodo.

For McDougall, people are now living in an "age of unhappiness" and are not feeling empowered. With large tech companies, the balance of power has shifted, and "people are feeling unhappy about that." But at the same time, he observes, "people are still using their devices because they don't feel like they have a choice," and "acquiescence does not equal trust."

To illustrate, he pointed out that the "agree button is one of the biggest lies on the internet. This is not consent. This is not notice."   

Ultimately, what this means is that the digital economy is heading toward a crisis point. "There's an express train heading toward the privacy community. If we don't react, we will reap the consequences." 

But all was not doom and gloom for McDougall, who believes this is also a time for opportunity. In his new role at the ICO, he gets to offer stakeholders the "carrot" while being on the front lines of innovation. And this means working with innovative companies, thought leaders, privacy pros and other organizations. 

Citing a recent ad tech event hosted by the ICO, McDougall said the agency aims to work with the industry and innovators. Though, he said, "We're not happy with what we're seeing in ad tech," a debate and conversation are underway.

To help work with industry on several fronts, McDougall said the ICO is also focusing on artificial intelligence by working with the Alan Turing Institute to develop and enhance its expertise in the growing AI field, including learning how to make the technology's decision-making capabilities more explainable to people. The ICO has also set up an AI regulators' group "with as many regulators as possible" and hired AI expert Reuben Binns to build out an AI audit framework. That framework would examine how an AI is developed, built and tested, how training data is used, and how to address black-box algorithms.
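
The article does not spell out what the ICO's audit framework will contain, but one common technique for making a black-box model's decisions more explainable is permutation importance: shuffle one input at a time and measure how much the model's accuracy drops. The sketch below is a minimal illustration only; the model, feature names and data are all hypothetical and are not drawn from the ICO framework.

```python
# Minimal sketch of permutation importance, one common explainability
# technique. All data and feature names here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical records: 1,000 rows, four features.
feature_names = ["age", "income", "tenure_months", "postcode_risk"]
X = rng.normal(size=(1000, 4))
# The "true" outcome depends mainly on income and tenure_months.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn; a large drop in score means the model
# leans heavily on that feature when making its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")
```

An auditor running something like this against, say, a hypothetical credit model would expect income and tenure_months to surface at the top; if postcode_risk dominated instead, that would be a prompt to look for proxy discrimination.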

Similarly, the ICO has launched a grants program to help promote and support innovative research and solutions for privacy problems. It has also built a "regulatory sandbox" — a safe space of sorts in which startups and innovators can work with the ICO to figure out ways to create innovative services and products that use personal data while simultaneously protecting it. And it has named the first two members of its new ICO Technology Advisory Panel: Luciano Floridi and Emiliano de Cristofaro, longtime experts in the data ethics field.

Significantly, the ICO is working on new and updated guidance on anonymization. Its last guidance came out around 2013, McDougall said, so it's due for an update. Cookie guidance may also be on the docket, and blockchain was in the queue ... that is, until the French data protection authority, the CNIL, came out with its own guidance in 2018. The CNIL's guidance was good enough, he said, that the ICO decided it didn't need to put out its own. But, he added, the ICO is working to engage further with the blockchain community.

The amount of work the ICO is doing with technology and on the front line of innovation gives McDougall hope.

"My key message," he said, "is that the ICO is nothing without the help of the people in this room. As privacy pros, you are on the front lines at your organizations. You make decisions about compliance and ethics. These are the real decisions, and you are snuffing out the issues along the way. We at the ICO want to ensure you that we support you." 

The "innovation discussion is critical," McDougall said, so that privacy pros and regulators are not the people who are always saying "no." 

"This is personal for me," he said, pointing out his years as a privacy pro working to advise chief privacy officers. "We want to make sure the ICO is helping privacy pros do the right thing like the IAPP helps privacy pros do the right thing." 

And it's by doing the right thing and working together that people will regain the trust that has been lost in recent years. "That makes me optimistic," he said.

2 Comments


  • comment Gary LaFever • Mar 13, 2019
    <p>AVOID THE TRUST DEFICIT WITH AI</p>
    <p>Focusing on data breaches, subject access requests (SARS), and consent is distracting and confusing when it comes to AI. What the industry needs is clarity on what is necessary to lawfully POSSESS and USE personal data for AI. This must be clarified NOW before AI design implementations become widespread and significant investments are made. Once made, companies and countries will resist changes because – from an economic return on investment perspective only – it will be more cost effective to engage in regulatory arbitrage than retool AI technology.</p>
    <p>TRUSTWORTHY AI FAQs</p>
    <p>Q1: How Can Personal Data Be Unlawful to POSSESS Under the GDPR?</p>
    <p>A1: Data Controllers Must Take Affirmative Action to Transform Data Collected in Noncompliance with the GDPR for it to be Lawful to Possess.</p>
    <p>Q2: What Is Necessary for Lawful USE of Personal Data In AI?</p>
    <p>A2: An Alternate (Non-Consent) Legal Basis with Technical and Organization Safeguards Is Required to Protect the Rights of Data Subjects.</p>
    <p>CLARITY ON DATA AND USE</p>
    <p>It’s great to see the ICO taking action to address the “trust deficit&quot; with AI, including working with the Alan Turing Institute and AI expert Reuben Binns. [See https://iapp.org/news/a/icos-mcdougall-were-losing-the-battle-for-trust-but-theres-a-solution/]. While attention is rightfully paid to the enormous potential benefits from AI, this IAPP article highlights the need for AI to be “trustworthy,” “ethical,” and “non-discriminatory.”  However, whether a car is driven in a “trustworthy,” “ethical,” and “non-discriminatory” manner is irrelevant if the people in the car don’t even have the legal right to use the vehicle. Data controllers must take action to ensure that both their Possession and Use of personal data for AI comply with the requirements of the GDPR as well as other evolving data protection regulations. If this message is not widely publicized before AI processing is put into wide spread practice, data subjects will suffer as billions are invested by both countries and commercial enterprise in AI using illegal personal data and the rights of data subjects will continue to be sacrificed on an ongoing basis with no reversal as organization (i) elect to fight in court rather than change the AI processes in which they have invested, and (ii) decide that the most cost-effective course of action is “regulatory arbitrage” given the low risk and cost of enforcement action.</p>
    <p>POSSESSION –The GDPR changed the nature of personal data so that it became highly regulated data. The last paragraph on page 31 of the WP29 April 2018 Guidance on Consent [See https://iapp.org/resources/article/wp29-guidelines-on-consent/ ] makes it clear that data controllers must take one of four specifically enumerated actions for data collected in noncompliance with the GDPR to be lawful to possess - doing nothing means the data is unlawful to possess since the GDPR does not have any “grandfather” or savings clause:</p>
    <p>1.	Re-consent the data in compliance with required specificity, un-ambiguity and voluntariness, together with a separate non-consent legal basis for data subjects who do not re-consent so that consent is truly voluntary.</p>
    <p>2.	Anonymize the data (in the true sense of the word) so that it cannot be used to infer, single out or link to data subjects.</p>
    <p>3.	Delete the data (in the case of financial services firms and other regulated industries this could mean keeping a copy of data solely to comply with reporting obligations but not providing access to the data for any other processing).</p>
    <p>4.	Transform the data to a non-consent legal basis while ensuring that continued processing is fair and accounted for.</p>
    <p>PROCESSING – Certain data processing activities – like AI – are incapable of being described at the time of data collection with sufficient detail to support GDPR requirements for consent to serve as a valid legal basis. And, Legitimate Interest, as an alternative legal basis, requires technical and organizational safeguards that truly mitigate the risk to data subjects so that the legitimate interest of the data controller survives a balancing test of the interests of the parties. These safeguards must protect against unauthorized re-identification of data subjects via both direct identifiers as well as indirect identifiers that can be linked together to re-identify a data subject - the “Mosaic Effect.” Traditional technologies developed before increasing volume, velocity and variety of data - like encryption and static tokenisation - are incapable of combating the Mosaic Effect. The IAPP first published my article on this issue in 2014 - What Anonymization and the TSA have in Common [ See https://iapp.org/news/a/what-anonymization-and-the-tsa-have-in-common/ ].</p>
    
  • comment Dale Smith • Mar 13, 2019
    In pursuit of enhancing trust in the privacy world, perhaps repurposing existing trusted notice-and-consent paradigms could be useful, as well as developing new ones. Consider the ubiquitous and widely trusted Nutrition Facts label, where consumers are offered clear, concise, easily accessible detailed information which they can fully digest, cherry-pick, or completely ignore, at their option and fully under their control.
    Adapting this paradigm to privacy could provide a foundational de facto standard supporting enhanced trust and future technical innovation. An Interactive Privacy Facts Notice might look like this:
    http://model.consentcheq.com/PFIN/min/
    Dale Smith, CIPT