This spring has been a mournful time. We mourn for the victims and families of the Boston Marathon bombing. We also mourn for the loss of Anne Smedinghoff, a foreign service officer in Afghanistan killed by a terrorist's explosive while delivering books to Afghan children. That loss is especially painful for those of us in the privacy community who know Anne’s dad, Tom Smedinghoff, a peerless colleague whose grace in the face of unimaginable tragedy has been inspirational. Our heartfelt sympathy goes to Tom and his wife, Mary Beth.
In the aftermath of tragedies such as these, we are left to ask: “What motivates people to burn with hate to such a degree that they take innocent lives?”
Experts in terrorism, psychologists and sociologists study the roots of terrorism, and their answers are complex. But the origin of hate is simple. All hate starts with words. Words of prejudice, words of discrimination, words of paranoia. What triggers people to act on those words of hate is the complex question.
The Internet, as the communication platform of the modern era, is the megaphone of hate. For nearly 20 years, I have been studying the problem of online hate through my work with the Anti-Defamation League (ADL), the 100-year-old civil rights organization where I serve as National Civil Rights chair. In June, Viral Hate: Containing Its Spread on the Internet, a book I co-authored with Abe Foxman, Holocaust survivor and national director of the ADL, will be published by Palgrave Macmillan.
In the book, we examine the prevalence of online hate—in websites, through social media, even in comments to news stories—and document its link to hate crimes. We also discuss how online hate is the pollution of the digital age, degrading the Internet experience for many, discouraging others from participating and serving to mislead and sometimes recruit susceptible young people into lives of violence.
We examine the remedies available to society, including, of course, the First Amendment to the U.S. Constitution. We conclude that even where the First Amendment does not constrain legal regulation of hate speech (and in the U.S., speech is rarely out of bounds, no matter how repugnant), the law is a poor tool for dealing with it. Judgments about what is or is not hate speech are not easy, and if they are made by authorities seeking more to censor than to protect, they can stifle free expression. Moreover, the viral nature of the Internet means that tamping down speech does not prevent it from reappearing.
We also discuss the issue of anonymity—the shield that protects the identities of people online. Without question, anonymity and the privacy it provides are powerful tools for expression. From the pre-Revolutionary days of the anonymous pamphleteers challenging King George to the gay teen seeking information on coming out, anonymity has been a force for good.
But anonymity also means that people can say and publish things that are hurtful and hateful without being identified, without standing behind what they put out there. It is axiomatic that when people are identified with their statements, they are more careful about what they say. It is frequently the anonymous, careless, offhanded comment or posting filled with slurs and invective that adds to online hate. Odds are that if people knew they would be identified with their thoughtless slurs, they would not publish them.
In the book, and previously in a New York Times column, I have suggested that Internet intermediaries can play a role, in appropriate circumstances, in limiting online anonymity in an effort to curtail online hate. The Times itself has adopted an approach that publishes comments on news stories from readers who use their real names first, relegating anonymous (pseudonymous) comments to the end of the queue.
Facebook also recently came up with a thoughtful and innovative solution to the problem of anonymous online hate. While Facebook's real-name policy means that a user will be identified with his or her posts, Facebook Pages allow for anonymous postings. Some of those postings cross a line: not the Community Standards line Facebook sets, which would permit immediate deletion of the content, but a line beyond which the content can cause distress for, and potential harm to, minorities. The frequently invoked example is humor with racist, homophobic or anti-Semitic meaning. When such content appears on Pages and is brought to Facebook's attention, Facebook asks the Page "administrator" (a Facebook user) to have his or her real name associated with the Page. Not surprisingly, most users prefer to remove the content rather than be associated with it.
In the privacy world we inhabit, the debate is usually about how to strike a balance between commerce and privacy, or between law enforcement and privacy. In the world of hate speech, the balance is different: weighing anonymity's useful role in free expression against the harm anonymous hate speech can cause requires a careful look at the circumstances in which privacy needs to give way to reducing the growing incidence of online hate.
Words of hate lead to acts of hate. And as important as words are, it is vitally important in this mournful season of explosions and loss to address the hate underlying the tragedies we experience all too often. As a privacy lawyer and a privacy advocate, I believe that when it comes to hate speech, privacy may have to take a backseat.