Greetings from Brussels!
An interesting article in The New York Times about content moderation caught my attention recently, and I think it merits some comment. Facebook, as we know, is no stranger to the media spotlight and has kept its place in the limelight thanks to the Cambridge Analytica kerfuffle. One could be forgiven, however, for overlooking its current efforts to monitor and moderate the vast amount of content on its platform.
Let’s rewind to January 2018, when Germany’s new Network Enforcement Act came into full effect. In many respects, this controversial law has turned Germany into the European litmus test for whether tech firms can be relied upon to tell the difference between free speech and hate speech. Under the law, known as “NetzDG,” online platforms face fines of up to 50 million euros if they do not remove “obviously illegal” hate speech and other postings within 24 hours of receiving a notification; for content whose illegality is less clear-cut, a seven-day period is granted. If there is one country in Europe where such a law has a real chance of enforcement, Germany is probably it: After World War II, Germany passed some of the world’s toughest laws curtailing hate speech, including prison sentences for Holocaust denial and for inciting hatred against minorities. Fast-forward to today, and you could argue that Germany is by no means the only EU country that should be paying attention to extremist speech.
Read the piece: It’s surreal to read about the office block in western Berlin where hundreds of men and women, spread over five floors, scan their computer screens, supported by a specialist trauma team to help them cope with the sort of material that makes nondisclosure agreements necessary. These are the people who decide what constitutes free speech and what constitutes hate speech. Not too different from the crowds employed by Google to decide what deserves to be forgotten and what doesn’t.
Because let’s be clear: This is not only relevant to Facebook. Both Facebook and Twitter, for example, have modified their German websites with additional features for flagging controversial content. Ideally, for the moderation service to be effective, the tech companies also rely, to a degree, on proactive community moderation: The user, while being a content creator, also plays a role in the policing process. For the community, by the community. Both companies have spent months hiring and training moderators to cope with the Network Enforcement Act. With the pervasive power of digital come new responsibilities and the creation of new professions reflective of the age we live in. Who said the tech revolution would wipe out jobs? Arguably, technological advances are also creating employment.
Let’s add some context: Facebook and other social networking platforms have faced a heavily publicized backlash in recent times over their inability to safeguard against disinformation campaigns, fake news, and the digital reach of hate groups. Consequently, their ability to maintain user trust and privacy is increasingly in question. This is not just about legal compliance; it is as much about protecting the business model and, by extension, the brand. Nor is this restricted to Germany or Europe. Recently, several people were killed in India after false viral messages circulated on WhatsApp. In Myanmar, there is evidence that violence against the Rohingya minority was fueled, in part, by misinformation spread on Facebook.
Content moderation carries a tremendous responsibility, and in many cases deletion decisions do not come easy. Perhaps, as some experts have argued, the law simply gives the companies too much authority to decide what constitutes illegal hate speech in an open democracy. Regardless, the task at hand is real, and digital platforms have already become the world’s influencers of choice. It’s time companies demonstrated maturity in addressing the content dilemma.