
Privacy Perspectives | Privacy law and resolving 'deepfakes' online


In the aftermath of a high school shooting in Florida in early 2018, a pro-gun-control movement called the March for Our Lives seized political and media attention across the U.S. During that time, an image of one of the movement’s leaders, Emma Gonzalez, ripping up the U.S. Constitution roared through anti-gun-control social media accounts. It was seen by some as proof that the movement and its leaders were hostile to American freedoms and, by implication, the current American social order. In truth, the image was a doctored version of another image, taken from a Teen Vogue piece about the movement, showing Gonzalez tearing up a shooting target. This was quickly exposed and covered in the wider media.

The original image (left) and the doctored version (right).

'Deepfakes' defined and detected

The image at the center of the Gonzalez controversy is what is known as a “deepfake” (deep learning + fake). Although political deepfakes are relatively new, pornographic deepfakes have been a problem for some time. These often purport to show a famous actress or model involved in a sex act but actually show the subject’s face superimposed onto another woman’s body. This is called face-swapping and is the simplest method of creating a deepfake. There are numerous software applications that can be used for face-swapping, and the technology is more advanced than most realize (see this example using Nicolas Cage’s face). More advanced Photoshop know-how can be leveraged to create images like that of Gonzalez. Deepfakes raise questions of personal reputation and control over one’s image on the one hand and freedom of expression on the other.

The easiest method for detecting a single fake image or thumbnail is a reverse-image search. If the deepfake was created from another image somewhere on the internet, the original version should appear in that search. This technique may also catch deepfakes made from video stills, like that of Gonzalez, by surfacing a similar thumbnail from the source video. The “uncanny valley” hypothesis suggests that deepfakes can also be discovered through close study of the image. Deepfakes are often assembled by an algorithm from still photos, so the subjects rarely blink and facial expressions do not fully translate. As a result, they look human but subtly wrong, and should inspire a subconscious feeling of unease or revulsion.
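As a rough, programmatic illustration of the reverse-image-search idea, a perceptual hash can measure how visually close a suspect image is to a candidate original turned up by such a search. The sketch below is a minimal example assuming the Pillow and imagehash libraries; the file names and the distance threshold are hypothetical.

# Minimal sketch: flag a suspected deepfake by comparing perceptual hashes.
# Assumes the Pillow and imagehash libraries are installed; file names are
# hypothetical placeholders, not real evidence files.
from PIL import Image
import imagehash

def likely_derived(original_path: str, suspect_path: str, threshold: int = 12) -> bool:
    """Return True if the suspect image is visually close to the original.

    A small Hamming distance between perceptual hashes suggests the suspect
    image was derived from (e.g., cropped or lightly edited from) the original.
    """
    original_hash = imagehash.phash(Image.open(original_path))
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return (original_hash - suspect_hash) <= threshold

if __name__ == "__main__":
    # Hypothetical names for the original still and the viral doctored version.
    print(likely_derived("original_still.jpg", "viral_image.jpg"))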

There are also more robust and technically complicated methods for detecting deepfakes, even in the absence of an original image. According to one method, advocated by Satya Venneti of Carnegie Mellon, a deepfake can be revealed through video magnification: compare the subject’s pulse as indicated by their irises with the pulse indicated by a point on their neck. If the two pulses do not match as expected, the image is likely fake. Analysis of other physiological markers may be successful as well. As the techniques for creating deepfakes grow more advanced, detection methods will likely evolve with them.
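A hedged sketch of the underlying consistency check: once per-frame pulse traces have been extracted from two regions of the same video (the magnification step that produces them is assumed here and not shown), a genuine recording should yield strongly correlated signals. The sample data and correlation threshold below are illustrative assumptions.

# Minimal sketch of the consistency check: compare pulse traces estimated
# from two regions (e.g., around the eyes and on the neck) of the same video.
# The video-magnification step is assumed; the arrays below are hypothetical
# per-frame intensity traces.
import numpy as np

def pulses_consistent(region_a: np.ndarray, region_b: np.ndarray,
                      min_correlation: float = 0.7) -> bool:
    """Return True if the two pulse traces agree well enough to look genuine."""
    a = (region_a - region_a.mean()) / region_a.std()
    b = (region_b - region_b.mean()) / region_b.std()
    correlation = float(np.corrcoef(a, b)[0, 1])
    return correlation >= min_correlation

if __name__ == "__main__":
    t = np.linspace(0, 10, 300)                      # 10 seconds at 30 fps
    eye_region = np.sin(2 * np.pi * 1.2 * t)         # ~72 bpm pulse
    neck_region = np.sin(2 * np.pi * 1.2 * t + 0.1)  # same pulse, slight lag
    print(pulses_consistent(eye_region, neck_region))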

Legal remedies for deepfakes and their limits

In common law jurisdictions, the victim of a deepfake can (in theory) sue the deepfake’s creator under one of the privacy torts, the most applicable of which is the “false light” theory. The victim can use the same theory to sue the deepfake’s publisher, the person who communicates it onward to others, whether an ordinary individual, a website, a television channel or a newspaper. Although the publisher can be the same person as the creator, the deepfake must be published (communicated or shared with at least a third person) to be actionable. Under this theory, a plaintiff needs to prove that the deepfake incorrectly represents the plaintiff in a way that would be embarrassing or offensive to the average person. In the U.S., there is an additional “actual malice” requirement. This term refers to the creator’s or publisher’s knowledge of, or reckless disregard for, the falsity of the matter, and it also applies to defamation suits. The degree of fault required varies depending upon whether the plaintiff is a public figure.

If a deepfake is being used to promote a product or service, the victim may invoke the privacy tort of misappropriation or right of publicity (depending upon the jurisdiction). Under this theory, a successful plaintiff can recover any profits made from the commercial use of their image in addition to other statutory and punitive damages. Misappropriation can be combined with false light where relevant.

In addition to false light and misappropriation, if the piece makes factually untrue assertions about a person and those statements demonstrably harm the subject’s reputation, a traditional defamation or libel suit may also prevail. Defamation/libel is distinguishable from false light in that false light statements are not technically false but rather are misleading or insinuate falsity.

The last traditional cause of action to combat a deepfake is intentional infliction of emotional distress. In this tort, a plaintiff can sue a defendant who has traumatized them through behavior that exceeds all reasonable bounds of decency. This is a difficult standard to meet, even for a victim of a deepfake, and may be particularly onerous for plaintiffs whose images are used for political purposes.

For example, under the Supreme Court case Hustler Magazine Inc. v. Falwell, parody is protected even if it is distasteful or distressing to the subject. It is, therefore, unlikely that a victim of a political deepfake, such as Gonzalez, will be able to successfully recover for intentional infliction.

Whether the victim of a pornographic deepfake could recover under the law is a different matter. In some cases, such as when an image qualifies as obscenity, it may not be treated as protected speech. In Roth v. United States, the Supreme Court held that obscene speech is not protected by the First Amendment and laid down a test for identifying obscenity. Whether a parody meeting the Roth criteria for obscenity remains protected under Hustler is a subject of legal debate.

In addition to the common law privacy torts, there may also be statutory and criminal remedies for a deepfake. For example, if the deepfake was made by a schoolmate or coworker, a victim may be able to seek legal remedy under a state’s cyberbullying or, if it was sexual in nature, sexual harassment statutes. If it was created by a familial or romantic relation, it may meet the definition of a relevant domestic abuse statute.

A deepfake may also be prosecutable under a cyberstalking law. These laws have varying criteria; however, they usually require that the maker/publisher of the deepfake know or have reason to know that the deepfake could cause repeated, harmful harassing contact with the victim (e.g., M.C.L. § 750.411). Occasionally, a cyberstalking law will include an exception allowing a single posting of a particular type of deepfake to trigger liability without any repeated contact (e.g., R.C.W. § 9.61.260). Although seemingly relevant to pornographic deepfakes, most revenge porn laws do not cover them, since the initial consent of the subject is an element of the offense (e.g., Cal. Penal Code § 647(j)(4)(A)). Even when revenge porn laws offer another route to liability besides the initial consent of the subject, they usually do not contemplate the creation of a fictional image (e.g., WI Stat § 942.09).

Using the law to remove a deepfake from the internet

Once the victim of a deepfake has successfully established their case against the deepfake’s maker or publisher, they can obtain a variety of remedies, including statutory, punitive and actual damages. Crucially, the victim can also get an injunction against the publication of the deepfake, which can be used to remove instances of the deepfake from the internet and otherwise attempt to limit its reach.

Since people generally do not own a copyright interest in their own image, copyright law isn’t a good weapon to use against deepfakes. Copyright in a photograph is usually owned by the photographer. If the source photographs from which a deepfake was assembled can be determined, the subject may be able to act in concert with a copyright owner to bring a copyright claim under the Digital Millennium Copyright Act or analogous law. However, given that deepfakes are created from the cropping and algorithmic combination of still images, the doctrine of fair use may be a substantial barrier to most copyright-based actions against deepfakes.

The right to be forgotten, granted to European residents by Article 17 of the EU General Data Protection Regulation as the “right to erasure,” may assist a European victim of a deepfake. Under the right to be forgotten, a data subject has the right to request that the controller of personal data about them (i.e., the creator or publisher of the deepfake) delete that data. Data subjects can also object to the processing of their data under certain circumstances that would likely apply here. A deepfake, although fictional, counts as personal data under Article 4(1) of the GDPR, since it “relat[es] to an identified or identifiable natural person.”

Publisher immunity and moderating against deepfakes

Most content-hosting sites — such as YouTube, Pinterest, Facebook or even a number of porn sites — are insulated in the U.S. from liability for content posted by others on their sites by Section 230(c) of the Communications Decency Act (47 U.S.C. § 230(c)). Section 230(c) also exempts hosts from liability for moderation activities. Specifically, Section 230(c)(2)(A) exempts hosts from liability for good-faith content removal by moderators, whether or not the material is constitutionally protected, while Section 230(c)(2)(B) also exempts them from liability for the automatic moderation or user takedown mechanisms they establish. Content-hosting sites are, however, amenable to injunctions and other orders from a court with jurisdiction over them. Nor does the Communications Decency Act shelter them from federal criminal law or from state laws consistent with Section 230.

This means that content hosts have substantial protections that allow them to liberally develop and deploy web moderation solutions against deepfakes. One mechanism of deepfake moderation is to deploy a takedown tool, like YouTube’s copyright takedown system. This system allows copyright owners to submit a request asking that an infringing video be taken down, using the same method as any other user-filed moderation complaint. As part of the request, the complainant must provide information about themselves and make certain assertions under penalty of perjury. An anti-deepfake system would conceivably need to follow a similar mechanism, with the complainant providing their contact information and other relevant details to prove their identity.
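To make that parallel concrete, the sketch below models what a deepfake takedown request might collect, loosely mirroring the fields a copyright takedown form asks for. The field names and the validation rule are hypothetical; no real platform API is implied.

# Hypothetical data structure for a deepfake takedown request, loosely
# modeled on the information collected by a copyright takedown form.
# All field names and the completeness rule are assumptions, not a real API.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DeepfakeTakedownRequest:
    complainant_name: str
    complainant_email: str
    content_url: str                      # where the alleged deepfake is hosted
    description: str                      # why the complainant says it is fake
    sworn_under_penalty_of_perjury: bool  # mirrors the copyright-form assertion
    verification_photo_path: Optional[str] = None
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_complete(self) -> bool:
        """Reviewable only if identity details and the sworn statement are present."""
        return bool(self.complainant_name and self.complainant_email
                    and self.content_url and self.sworn_under_penalty_of_perjury)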

The best method of identity confirmation is a verification photo, in which the user provides a picture of themselves holding either their legal ID or a note containing a code or other text supplied by the host, much like a captcha, to prove the picture is authentic. A moderator could then easily review the deepfake, compare it to the provided photo, and render a decision based on the host’s policies. The safe harbor provided by Section 230(c)(2)(B) gives hosts considerable latitude to experiment with these takedown methods, including automating the process through biometrics.
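One way the automated, biometric version of that comparison might look is sketched below with the open-source face_recognition library. The file names, the matching tolerance it implies, and the idea of fully automating the check are assumptions, not any host's actual practice.

# Minimal sketch of automating the identity check: compare the face in the
# complainant's verification photo with the face in the reported image.
# Assumes the open-source face_recognition library; file names are placeholders.
import face_recognition

def same_person(verification_photo: str, reported_image: str) -> bool:
    """Return True if a face in the reported image matches the verification photo."""
    known = face_recognition.face_encodings(
        face_recognition.load_image_file(verification_photo))
    unknown = face_recognition.face_encodings(
        face_recognition.load_image_file(reported_image))
    if not known or not unknown:
        return False  # no detectable face in one of the images
    # compare_faces returns one boolean per face found in the reported image
    return any(face_recognition.compare_faces(unknown, known[0]))

if __name__ == "__main__":
    print(same_person("verification_selfie.jpg", "reported_deepfake.jpg"))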

Conclusion

Outlawing deepfakes is neither feasible nor advisable.

Deepfake technology has many legitimate uses, especially in movies, where it is used to place an actor’s face on a stunt double’s body or to retouch an actor’s face when the plot calls for it. Even if deepfakes were outlawed, the technology, much like copyright infringement, would likely be impossible to fully prevent or ban. As the technology becomes more advanced, it will be easier to create deepfakes, and because the internet is borderless, deepfake-creation software will always be accessible from places where the technology remains legal.

Technology companies and content hosts, therefore, bear much of the burden of moderating against the abusive uses of deepfakes on their websites. This type of moderation can be incentivized in the U.S. by carving liability for deepfakes out of the liability protections of Section 230. Instead of blanket protection, hosts could be required to meet a minimum moderation standard before being shielded from liability. A similar approach, called the DMCA safe harbor, is the standard in copyright cases (see 17 U.S.C. Section 512).

A solution that protects the technology’s use as a means of public comment takes a cue from the fact pattern of Hustler Magazine v. Falwell. Jerry Falwell sued Hustler for emotional distress because his initial defamation and false light claims were dismissed, likely because the parody at issue included a statement reading “ad parody — not to be taken seriously” at the bottom.

Although requiring non-consensual deepfakes to be marked wouldn’t solve every concern created by the technology, it would preserve the technology as a means of commentary. This watermarking system could be enforced by shifting the burden onto the maker/publisher in a tort action arising from a non-consensual deepfake. If the deepfake bears the watermark, the defendant is insulated from liability for defamation or false light, provided their use of the deepfake is permissible. If it lacks a watermark, the defendant is liable per se, provided the plaintiff can prove that they are the person depicted. Governments could also decide that certain categories of deepfakes, such as pornography, are not permissible uses and either refuse to shield them from liability under this system or punish them criminally.
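As a purely illustrative sketch of what such a mark could look like in practice, the snippet below stamps a visible disclosure onto an image with Pillow, much as Hustler’s printed “ad parody” disclaimer did. The label text, its placement and the file names are assumptions, not a proposed standard.

# Illustration of the disclosure idea: stamp a visible "parody" label onto an
# image with Pillow, analogous to a printed disclaimer. Label text, placement
# and file names are assumptions.
from PIL import Image, ImageDraw

def stamp_disclosure(input_path: str, output_path: str,
                     label: str = "SYNTHETIC IMAGE - PARODY, NOT AUTHENTIC") -> None:
    image = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw a black bar along the bottom edge and write the label on it,
    # using Pillow's default bitmap font.
    draw.rectangle([(0, image.height - 24), (image.width, image.height)], fill="black")
    draw.text((8, image.height - 20), label, fill="white")
    image.save(output_path)

if __name__ == "__main__":
    stamp_disclosure("deepfake.jpg", "deepfake_labeled.jpg")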

Top image courtesy of Villian Guy/Youtube
