
Europe Data Protection Digest | Notes from the IAPP Europe Managing Director, 8 Feb. 2019


Greetings from Brussels!

This week, I attended an event at the European Parliament on the subject of content moderation in the digital era. Hosted by Dutch MEP Marietje Schaake of the ALDE parliamentary group, the event sought to explore how social media and internet companies develop, implement and apply rules and policies in the field of content moderation. Increasingly in the spotlight, online companies must grapple with the modern-day challenges of moderating or removing illegal and controversial content, including hate speech, terrorist content, disinformation and copyright-infringing material. The dilemma, in many respects, is balancing such moderation while ensuring that business models remain compatible and in balance with democratic principles and freedoms.

While there has been growing coverage of this subject in IAPP publications of late — you can subscribe to our Content Moderation Digest here — it remains a relatively new field of activity, with several internet companies routinely saying that moderating and removing such content is complex and expensive. Simply put, operational facts about company practices are typically hard to come by.

The conference attracted speakers from a diverse set of companies, ranging from European companies such as Allegro and Seznam to the more internationally recognized brands of Facebook, Google, Snapchat and Wikimedia. Most speakers came directly from organizational “Trust and Safety” functions. Perhaps surprisingly for some, many companies appear to be fairly advanced in addressing this challenging activity, with the bigger tech giants citing as many as 10,000 to 30,000 staff working as “content reviewers” across multiple locations.

By all accounts, it was evident that company efforts to moderate content are designed as a shared activity with a broad community of stakeholders. Companies have developed tools and mechanisms that allow users and consumers to report content they feel uncomfortable with. User communities complement existing organizational teams working in the areas of policy, legal, product, engineering and systems toward a common goal of ensuring brand and user trust, as well as safe content. External consultation is sought with third parties, such as academics, ethicists and governments, to continuously refine policies. From a general perspective, there seems to be a recurrent three-way approach to content moderation: user inclusion, in-house operations teams, and machine-learning tools. These all contribute to overall efforts in reporting and enforcement. Technology as a means to detect inappropriate content, including algorithms and machine-vision technology, is admittedly not “fail-proof,” but with exponential advances in technology, one wonders where we will end up. Companies recognize there is still much to be done. Today’s technology is not without bias.
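To make that three-way model a little more concrete, here is a minimal, purely illustrative sketch of how user reports, an automated classifier score and a human-review queue might be combined into a single routing decision. Every name, threshold and label below is a hypothetical assumption for illustration only; it does not describe any particular company’s system or tooling.

```python
# Illustrative sketch only: a toy routing step combining the three inputs the
# speakers described (user reports, machine-learning tools, in-house reviewers).
# All names, thresholds and labels are hypothetical.
from dataclasses import dataclass


@dataclass
class ContentItem:
    item_id: str
    user_reports: int        # reports filed by the user community
    classifier_score: float  # 0.0-1.0 risk score from an ML model


def route(item: ContentItem) -> str:
    """Decide what happens to a piece of flagged content."""
    if item.classifier_score > 0.95:
        return "auto-remove"      # high-confidence violations handled by tooling
    if item.user_reports > 0 or item.classifier_score > 0.5:
        return "human-review"     # ambiguous cases go to in-house reviewers
    return "no-action"


print(route(ContentItem("post-123", user_reports=3, classifier_score=0.62)))
# -> human-review
```

Even in this toy form, the design choice the speakers described is visible: automation handles the clear-cut cases, while anything reported by users or scored as ambiguous is escalated to human reviewers rather than decided by the model alone.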

One of the macro challenges of our era will be how technology interacts with and transforms the political, legislative and regulatory landscape. One could argue that technological power and control invariably lends itself to enhanced societal influence and behavioral control. As we increasingly live our lives and liberties around and through technology, one can question whether we are living by imposed rules encoded into the technologies themselves.

Is it a fair statement to say that those who write the code will increasingly write the rules by which we live our lives?

There will reportedly be 25 billion devices connected to the internet by 2020 — and that may be a conservative estimate. Arguably, the “power of code” will continue to exponentially shape and influence our perception of the world and, by extension, our views and actions. While laws such as copyright law and the GDPR are shaped to help balance public and private interests, technological code will continue to be designed primarily to serve market supply and demand as engineered by the tech companies. As consumers, it appears we increasingly abide by both public and private rules (online community rules) as we consume technology.

The argument for continued scrutiny, transparency, accountability and smart regulation has probably never been stronger: to ensure balance and, above all, to serve the public interest.
