In many ways, 2018 has been "the" year in privacy. The EU General Data Protection Regulation went into effect May 25, data protection laws are popping up around the world, global data transfers are growing more complex and essential, an entire market of tech vendors is emerging to meet the needs of the privacy office, and there's even growing consensus for a federal privacy law in the U.S., something that was virtually unthinkable just a year or two ago. This is truly a special, if demanding, time to be in the privacy profession.
At the same time, we are noticing the rise of an adjacent profession that faces many of the same issues as the privacy profession. The growth of social media and user-generated content over the last 10 years is prompting more and more companies to hire content moderators. True, for the most part, privacy pros aren't out there masking nude photos or deleting violent videos every day, and, conversely, content moderators aren't "doing privacy" day to day, but there certainly is overlap: Content moderators often deal with similar issues involving law and policy, regulatory scrutiny, and consumer safety and trust. They often work alongside one another or in similar departments. In some organizations, they might even be the same person. Significantly, both professions must also understand context and cultural nuance as they make decisions that ultimately protect the company's reputation and bottom line.
And the existing content moderation field is substantial. According to one 2017 estimate from Accenture, there are more than 100,000 content moderators working around the world. Yet amid this growth, regulatory scrutiny is coming down the pike. For years, tech platforms have been legally shielded from most liability for user-generated content by Section 230 of the Communications Decency Act and the Digital Millennium Copyright Act of 1998 in the U.S., and by the eCommerce Directive in the EU.
In years past, there have been hiccups, as in Italy, where a lower court convicted three Google executives, including its top privacy pro working in the EU, for failing to quickly remove a video on a Google-owned platform that showed children bullying an autistic boy. Luckily for the Google execs, those convictions were overturned in 2012, but the case demonstrates the overlap between the professions, the legal consequences involved, and the cultural differences that arise as platforms connect people across the world.
Now in 2018, the U.S. and EU are each chipping away at some of those legal protections for platforms. Spurred by misinformation in democratic elections, cyberbullying, hate speech, malicious bots, and other nefarious online behavior, governments are starting to get involved. Earlier this year, the U.S. enacted a law that holds companies liable for hosting content facilitating "sex trafficking." Likewise, the European Commission proposed rules that would require platforms to take down terrorist content within an hour of its being flagged (eat your heart out, 72-hour breach notification laws).
You may have guessed that, as a result, data protection authorities and lawmakers are getting in the game, as well. Fittingly, Italy's DPA, the Garante, created a new division focused squarely on content moderation. The U.K. Information Commissioner's Office has led a high-profile and comprehensive investigation of Cambridge Analytica's involvement in the 2016 Brexit referendum and U.S. presidential election. Members of the European Parliament are planning a set of recommendations ahead of parliamentary elections, including one that "all online platforms distinguish political uses of their online advertising products from their commercial uses." And just this week, Sen. Mark Warner, D-Va., questioned whether the "FTC enforcement model is up to the task of consumer protection when it comes to social media platforms. ... It's clear that Congress needs to step in," he said.
Much as with the GDPR, tech companies face increased regulatory risk based on the user-generated content that appears on their platforms, and demand for competent professionals is growing accordingly. True, there are technological tools that businesses can use to help operationalize things like privacy compliance and content moderation, but humans will remain essential, supplying the nuance, context, and empathy needed to evaluate user-generated posts.
We are curious about the content moderation profession and think it's worth exploring further. That's why we're standing up a newsletter dedicated to the pressing issues in the content moderation sphere. Please consider subscribing here for weekly content moderation news in your inbox. We also want your feedback. What are you seeing out there? Where are you seeing overlap? Does content moderation play a role in your day-to-day activities? Let us know your thoughts in the comments.
Photo credit: Alistair Hamilton, Glass Brick Wall, via photopin (license)