
At a June 25 hearing before the U.S. Senate Committee on Commerce, Science, and Transportation, lawmakers aimed to determine what kind of government intervention, if any, is necessary for artificial intelligence, given that companies compete for "optimal engagement" from internet users, something often achieved by manipulating users without their knowledge.

"While there must be a healthy dose of personal responsibility when users participate in seemingly free online services, companies should also provide greater transparency about how exactly the content we see is being filtered," said Committee Chairman Sen. John Thune, R-S.D., who is developing legislation on this, in his opening statement.

At the crux of the matter is algorithmic transparency and the ways sites may be using persuasive tactics, often referred to as "dark patterns," to influence the way users behave on their sites. The problem is that many of these algorithms are inaccessible to both users and regulators because they're often proprietary. Companies aren't going to show everyone the recipe for their special sauce if that's how they're influencing clicks or keeping eyeballs glued to the screen; it's their competitive advantage.
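To make the term concrete, here is a minimal, purely hypothetical sketch of the kind of design choice "dark pattern" covers: the same consent dialog configured with a pre-ticked sharing box and extra friction on the decline path, versus a privacy-friendly default. The field names and values are invented for illustration and are not drawn from any company's actual products.

```python
# Illustrative only: hypothetical consent-dialog settings, not any real product.
# A classic "dark pattern" pre-selects data sharing and adds friction to the
# privacy-protective choice; a privacy-friendly design does the opposite.
from dataclasses import dataclass

@dataclass
class ConsentDialog:
    share_data_prechecked: bool  # opt-in box already ticked for the user
    clicks_to_decline: int       # extra steps placed on the "no" path

dark_pattern = ConsentDialog(share_data_prechecked=True, clicks_to_decline=3)
privacy_friendly = ConsentDialog(share_data_prechecked=False, clicks_to_decline=1)

for label, dialog in (("dark pattern", dark_pattern), ("privacy friendly", privacy_friendly)):
    print(label, dialog)
```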

Tristan Harris, co-founder and executive director of the Center for Humane Technology, testified that "persuasive technology is a massively underestimated and powerful force shaping the world" that has "taken control of the pen of human history and will drive us to catastrophe if we don’t take it back." He described the power asymmetry in what he calls the "attention economy," in which websites compete because the more users a site draws and the longer they stay, the more money advertisers are willing to pay for that attention. Harris, a former design ethicist at Google, likened the power imbalance to his childhood aspiration to become a magician. He recalled that all it took to have control in that situation, as with persuasive technology, was not necessarily to know "more than your audience's intelligence ... you just have to know their weaknesses."

It's those weaknesses that algorithms — used to convince users to move from one suggested click to the next — prey on, he said. 

"Instead of splitting the atom, it splits our nervous system by calculating the perfect thing that will keep us there longer – the perfect YouTube video to autoplay or news feed post to show next," he said in his written testimony. "Now technology analyzes everything we’ve done to create an avatar, voodoo doll simulations of us. With more than a billion hours watched daily, it takes control of what we believe, while discriminating against our civility, our shared truth, and our calm."

In addition, testified Rashida Richardson of the AI Now Institute at New York University, AI systems "are trained on data sets that reflect historical and existing social and economic conditions. Thus, this data is neither neutral or objective, which leads to AI systems reflecting and amplifying cultural biases, value judgments and social inequities." 
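Richardson's point about training data can be illustrated with a deliberately artificial example: a "model" that does nothing more than memorize historical approval rates will reproduce whatever inequity those rates contain. The data and group labels below are invented for the illustration.

```python
# A deliberately artificial illustration of Richardson's point. The decision data
# and group labels are invented; "training" here is just memorizing historical
# base rates, which is enough to show the skew carrying straight through.
from collections import Counter

historical_decisions = ([("group_a", 1)] * 70 + [("group_a", 0)] * 30
                        + [("group_b", 1)] * 20 + [("group_b", 0)] * 80)

def fit_base_rates(data):
    totals, approvals = Counter(), Counter()
    for group, approved in data:
        totals[group] += 1
        approvals[group] += approved
    return {group: approvals[group] / totals[group] for group in totals}

model = fit_base_rates(historical_decisions)
print(model)  # {'group_a': 0.7, 'group_b': 0.2}: the historical inequity, now automated
```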

Relatedly, the French data protection authority, CNIL, just released a report on "dark patterns," which includes potential collaboration opportunities among professionals for privacy-friendly design practices. That follows extensive research from the Norwegian Consumer Council, which in 2018 published "Deceived by Design: How tech companies use dark patterns to discourage us from exercising our rights to privacy."

Sen. Jon Tester, D-Mont., recalled watching his grandkids during visits to his farm. The youngest, an eight-year-old, he said, is glued to the cellphone at all times. He said if he wants any of the chores to get done, he has to threaten to take the device away. He questioned whether Google even talks about the addictive nature of technology or the power of persuasion at its board meetings. "Because this scares me to death. You guys could literally sit down at your board meeting," he said to the Google representative there to testify, " ... and determine who's going to be the next president of the United States."

Richardson said Tester's concerns are legitimate, adding that companies are not having those conversations, certainly not in a "holistic" way, due to silos and fragmentation among teams like legal, policy and products.

Harris said there's no easy way for companies to solve the problems we face as a society because "these problems are their business model." 

Maggie Stanphill is user experience director at Google and leads its global Digital Wellbeing Initiative, which debuted in 2018 and "aims to help people find their own sense of balance" given technology's prevalence in daily life. The initiative included additions to Android, YouTube and Gmail to help people "gain awareness of time online, disconnect for sleep and manage their tech habits," Stanphill testified. Features include allowing users to set a "take-a-break reminder" while using YouTube or enable a "focus mode" function for Android notifications.

But the senators seemed less interested in user-action-required solutions and more interested in what's happening behind the scenes at companies like Google and YouTube, both of which are owned by parent company Alphabet.

A particular point of contention came when Thune asked Stanphill whether Google employs persuasive technology, and Stanphill replied, "Google does not use persuasive technology," saying instead that it puts transparency and user control at the core of all its products.

That did not sit well with Sen. Brian Schatz, D-Hawaii, who waited until it was his turn to question the witnesses to return to the point. 

"I'm sorry, did you just say Google does not use persuasive technology? ... Either I misunderstand your company or I misunderstand the definition of persuasive technology."

Stanphill was steadfast and repeated that "dark patterns and persuasive technology is not core" to how Google develops products. 

Later, when Sen. Richard Blumenthal, D-Conn., had his chance to question the witnesses, he came back to this.

"I find your contention Google does not build systems with the idea of persuasive technology in mind somewhat difficult to believe," he said. "Because I think Google tries to keep people glued to its screens at the very least. That persuasive technology is operational ... a part of the business model."

While there was much talk about the problems inherent in current uses of artificial intelligence, there wasn't much conversation on the best way forward. To be clear, Richardson and Harris agreed Congress needs to act. Richardson came armed with practical solutions, including recommendations that Congress create laws requiring technology companies to waive "trade secrecy" claims that allow algorithms to operate within a "black box"; that companies be required to name which technologies and vendors are being used to make decisions about consumers or the services made available to them; and that Congress revive the Office of Technology Assessment, which was defunded in 1995.

Researcher Stephen Wolfram, also on hand to testify, called for a third-party intermediary between consumer and company. The danger, he said, is "who gets to be the moral arbiter?" Platforms should still be able to do large-scale engineering and monetization of content, but an independent third party, which works with the consumer, should determine the "final ranking of particular content shown to particular users," he testified.
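Wolfram did not specify an implementation, but the division of labor he describes might look roughly like the sketch below, in which the platform still generates and scores candidates while a user-chosen, independent provider applies the final ordering. All names, fields and the ranking rule are hypothetical.

```python
# A rough sketch of the proposal, with every name hypothetical: the platform keeps
# doing large-scale candidate generation and scoring, but a user-chosen, independent
# provider decides the final order of what is actually shown.
from typing import Callable

Candidate = dict  # e.g. {"id": ..., "platform_score": ..., "topic": ...}
FinalRanker = Callable[[list[Candidate]], list[Candidate]]

def platform_candidates() -> list[Candidate]:
    # The platform's side: generate and score content at scale.
    return [
        {"id": "v1", "platform_score": 0.9, "topic": "outrage"},
        {"id": "v2", "platform_score": 0.6, "topic": "local news"},
    ]

def independent_ranker(candidates: list[Candidate]) -> list[Candidate]:
    # A third party the user trusts applies its own ordering; here it
    # (arbitrarily) demotes a topic the user has asked to see less of.
    return sorted(candidates, key=lambda c: (c["topic"] == "outrage", -c["platform_score"]))

def feed_for_user(ranker: FinalRanker) -> list[str]:
    return [c["id"] for c in ranker(platform_candidates())]

print(feed_for_user(independent_ranker))  # ['v2', 'v1']
```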

Richardson liked Wolfram's idea but cautioned that, while there should be a human in the loop, "we also need to be cognizant of who has power" in those situations. It shouldn't be a front-line employee who had no input in product design. She said liability in any form should attach to "those making decisions about goals ... and ultimately implementation and use of these technologies ... and then figure out what are the right pressure points or incentive dynamics ... that benefit society."

Blumenthal said there needs to be human intervention in algorithms and that whoever that human is has to be independent. "I know regulation is a dirty word in some circles ... but I don't think 'trust me' is going to work anymore," he said.

In the end, there seemed to be some level of agreement that it's time for congressional action.

"Government’s job is to protect citizens," Harris testified. "I tried to change Google from the inside, but I found that it’s only been through external pressure – from government policymakers, shareholders and media – that has changed companies’ behavior."

Photo by Franck V. on Unsplash



1 Comment


  • Christian Stewart • Jun 26, 2019
    It's a difficult issue to grasp.
    1.) Privacy regulation requires people who really understand the technology. We've seen what happens when old guys in the Senate try to discuss tech issues.
    2.) Privacy laws could also really slow the progress that companies are making in the AI space.

    This article (https://choosetoencrypt.com/privacy/big-data-vs-privacy/) explains how privacy and "data" are in a constant tug of war. Big data is becoming extremely useful to businesses and websites, but privacy is something that consumers want.