Privacy Perspectives | Takeaways from the 'Content Moderation in 2019' workshop

After months of planning, it’s hard to believe our “Content Moderation in 2019” workshop is in the rear-view mirror. Thank you to everyone who attended and participated in the conversations throughout the day. A special thank you to all our keynote speakers and panelists. Your contributions, insight and humor fostered a day full of constructive discussions.

We covered a lot of ground in six hours. Our keynote speaker, Tarleton Gillespie, principal researcher at Microsoft New England and author of “Custodians of the Internet: Platforms, Content Moderation and the Hidden Decisions That Shape Social Media,” started the day by challenging attendees to think differently about content moderation. Rather than treating it as an ancillary, primarily reactive function, Gillespie suggested content moderation should become core to internet intermediaries’ business models. And while his book title refers to custodians of the internet, which might call to mind someone responsible for keeping the internet clean, he offered a different definition: as custodians, intermediaries could take on a role of guardianship.

This thread was woven throughout the day in many of our panel discussions and comments from attendees, beginning with a discussion focused on how this discipline is defined today and how it is evolving. While content moderation is getting more attention than ever before, the panelists all agreed there is more work to be done organizationally to elevate it from a necessary evil to an integral part of a company's online safety strategy. It was recognized that because organizational structures vary greatly, there isn’t one common path to moving this function from the back office to center stage; it largely depends on a variety of internal factors.

This recognition led to hearing from panelists at Microsoft, TripAdvisor and Etsy who shared reporting structures, team profiles, hiring strategies, health and wellness programs, and the positive aspects of the profession. These three perspectives highlighted the diversity of approaches to operationalizing the profession — from hiring in local languages and cultures to outsourcing for certain types of content moderation.

One common sentiment from all in attendance was the importance of ensuring the wellness of the individuals doing this work. Leaders must be aware of their team members’ daily workload, time in front of the computer, and the content they are reviewing, and if the data indicates their wellness is in jeopardy, it must be addressed. It was pointed out that decisions made about an employee’s wellness may sometimes conflict with a team’s established service-level agreements, and it was suggested that meeting the needs of the employee should always take priority over the SLA.

Also highlighted was the fact that while this profession receives a lot of negative press, the work is incredibly rewarding, and the people working in this field are extremely dedicated to the profession.

Nicole Wong, former deputy chief technology officer of the U.S. and vice president and deputy general counsel at Google, opened our afternoon sessions by sharing her experiences at Google and how, at the time, she never would have imagined the free and open internet she helped build would be used in the ways it is today. Wong challenged attendees to shift how we think about the internet, which has traditionally been built on engagement, speed and personalization, by recommending accuracy, authenticity and context as its new pillars.

With these words of wisdom, our discussion turned to the changing global regulatory landscape.

Panelists provided insight into how these changes impact companies’ content moderation policies and their ability to do this work effectively, including details related to the development of the U.K. government's recent online harms white paper, which outlines “plans for a world-leading package of online safety measures that also supports innovation and a thriving digital economy.” The white paper “comprises legislative and non-legislative measures and will make companies more responsible for their users’ safety online, especially children and other vulnerable groups.” Panelists and attendees compared the U.K. recommendations with Germany’s NetzDG, with Australia’s newly passed legislation criminalizing internet platforms’ failure to remove violent video and audio, and with other emerging laws, such as Singapore’s proposed Protection from Online Falsehoods and Manipulation Bill. Given this landscape, the discussion explored taking a more global approach to policy development, one that would require fewer changes as new regional laws are introduced.

The partnership between machine learning and human reviewers is critical in executing content moderation strategies. Our panel discussion on this topic focused on where AI is working well today, such as reducing the overall workload of content moderators, and on the challenges that remain, such as managing false positives and users who purposefully manipulate systems by coining new terms and phrases that AI will not flag. In addition to traditional content-flagging mechanisms, the panel discussed strategies geared toward changing the tone of conversations online; for example, giving users instant feedback about why certain posted content is inappropriate and will be removed, as a way of helping to reduce negativity and harmful content. When asked about the future of machine learning, the panelists agreed the technology will continue to improve and move beyond flagging only fringe content, but because machines cannot interpret context, the human-machine partnership will always exist.
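To make that division of labor concrete, here is a minimal, hypothetical Python sketch of the kind of pipeline the panel described: a model scores incoming content, high-confidence violations are removed automatically with instant feedback to the poster, and borderline cases are routed to a human reviewer. The function names, thresholds and scoring logic are illustrative assumptions, not any panelist's actual system; in a sketch like this, the false-positive trade-off shows up in where the thresholds are set.

```python
# Hypothetical human-in-the-loop moderation sketch. All names, thresholds and
# scoring logic are invented for illustration; no panelist's system is depicted.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ModerationDecision:
    action: str        # "remove", "human_review" or "allow"
    user_message: str  # instant feedback shown to the poster, if any


def moderate(
    text: str,
    score_fn: Callable[[str], float],  # returns estimated probability the content violates policy
    remove_threshold: float = 0.95,
    review_threshold: float = 0.60,
) -> ModerationDecision:
    score = score_fn(text)
    if score >= remove_threshold:
        # High confidence: act automatically and tell the user why,
        # echoing the "instant feedback" idea raised by the panel.
        return ModerationDecision(
            action="remove",
            user_message="Your post was removed because it appears to violate our community policy.",
        )
    if score >= review_threshold:
        # Borderline score: machines cannot interpret context, so defer to a human reviewer.
        return ModerationDecision(action="human_review", user_message="")
    return ModerationDecision(action="allow", user_message="")


# Example usage with a toy keyword scorer standing in for a real model.
def toy_score(text: str) -> float:
    return 0.99 if "banned-term" in text.lower() else 0.10


print(moderate("This post contains a banned-term.", toy_score).action)  # remove
print(moderate("An ordinary post.", toy_score).action)                  # allow
```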

It was an honor hosting this workshop. It is our hope that by providing a forum to initiate these important discussions and connect you with one another, you are able to advance your thinking about the work you do as online trust and safety professionals and, in particular, within the discipline of content moderation.

Photo by Markus Spiske on Unsplash
