The Real Problems of Deepfakes

Deepfakes—defined by machine-learning experts Yisroel Mirsky and Wenke Lee as “content generated by an artificial intelligence, that is authentic in the eyes of a human being,” particularly “the generation and manipulation of human imagery”—have become increasingly popular in recent years. They rely on the rapidly developing technology of “deep learning”: essentially, the process of a computer teaching itself through trial and error. In the case of deepfakes, the computer teaches itself what human faces look like and how they can be manipulated. Deepfakes have “lower[ed] the technological barriers required to create high-quality manipulations” (Lutz & Bassett). That ease of access makes them attractive to any individual or group seeking a time- and cost-efficient way to produce a video, often for a malicious purpose. Because of these malicious uses, regulations should be placed on deepfakes. Before we can do this, however, we first need a way to reliably detect deepfakes with an automated process.
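To make that trial-and-error idea concrete, consider the classic face-swap setup covered in surveys like Mirsky and Lee’s: a single shared encoder learns what faces look like in general, while one decoder per identity learns to redraw that specific person, and swapping decoders at the end produces the fake. The sketch below is only an illustration of this idea, not code from any cited source; the network sizes, the loss, and the random stand-in “photos” are all assumptions.

```python
# A heavily simplified sketch of the classic face-swap training loop.
# All sizes, the loss, and the random stand-in "photos" are assumptions.
import torch
import torch.nn as nn

def down(cin, cout):   # halve spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1), nn.ReLU())

def up(cin, cout):     # double spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1), nn.ReLU())

# One shared encoder learns faces in general...
encoder = nn.Sequential(down(3, 32), down(32, 64), down(64, 128))

# ...and one decoder per person learns to redraw that specific face.
def make_decoder():
    return nn.Sequential(up(128, 64), up(64, 32),
                         nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.L1Loss()

for step in range(100):  # tiny toy loop
    # Stand-ins for batches of aligned 64x64 face crops of persons A and B.
    faces_a = torch.rand(8, 3, 64, 64)
    faces_b = torch.rand(8, 3, 64, 64)
    # Trial: reconstruct each person's face through the shared encoder.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    # Error: adjust the weights to reconstruct a little better next time.
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a photo of A, but decode it with B's decoder, so the
# output shows B's face wearing A's pose and expression.
swapped = decoder_b(encoder(torch.rand(1, 3, 64, 64)))
```

The shared encoder is the key design choice: because it must serve both decoders, it is forced to learn a person-independent description of pose and expression, which is exactly the “how human faces look” knowledge described above.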

Lindsey Wilkerson of the University of Missouri states that “the first viral deepfake was a pornographic video… first posted on the social media website Reddit by a user named ‘deepfakes.’” Some say this is where the technology got its name, while others say the user chose the name to advertise what he was already doing. Either way, the post set a precedent for deepfakes to be used maliciously, and those malicious uses would only grow more problematic as the technology advanced. Wilkerson explains that, in response to the surge of misinformation surrounding the 2020 Presidential election, several social media websites began to limit deepfakes on their platforms. “YouTube announced that it would not ‘allow election related deepfake videos’… Facebook put out a statement that it had ‘strengthen[ed]’ its policies ‘toward misleading manipulated videos…identified as deepfakes,’” and “Twitter and Pornhub…completely banned the publication of deepfakes” (Wilkerson). With nearly every major platform claiming to prohibit deepfakes, one might assume they have a sure-fire way to tell whether something is a deepfake. But is that truly the case? Can a deepfake still slip through? And how are these platforms detecting deepfakes in the first place?

One popular method for deepfake detection that these social media sites might be using is the “analytic” method. Lutz and Bassett describe this method as a “promising technique for detecting deepfakes” that works by “finding head pose inconsistencies in modified images.” They go on to explain that “pose estimates contain enough information to identify unique individuals.” What does all that mean? When a face swap is made, typically only the central region of the face is replaced, while the rest of the head still comes from the original video. The analytic, an AI model, exploits this: it “computes two 3D head pose estimates, one using only the central region of the face and the other its entirety, and using various features derived from these head poses classifies an image as either manipulated or authentic” (Lutz & Bassett). On an authentic image the two estimates should agree; on a manipulated one they tend to diverge. The image below depicts how the analytic can analyze and track 3D facial features even when something obstructs the face (such as the hands in the image). Notice how the analytic estimates that the woman on the right is smiling despite her mouth not being visible; it makes this estimation based on other facial features, like slightly raised eyebrows, squinted eyes, and wrinkles around the mouth.

[Figure: 3D head-pose estimates tracked across facial landmarks, even with the face partially occluded by the subjects’ hands (Lutz & Bassett)]
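For intuition, here is a minimal sketch of that two-estimate comparison. It is not Lutz and Bassett’s implementation: the toy 3D face model, the central/outer landmark split, the camera parameters, the synthetic landmarks, and the decision threshold are all illustrative assumptions; a real system would take its 2D landmarks from a face detector running on actual video frames.

```python
# A minimal sketch of the head-pose-inconsistency check, NOT Lutz &
# Bassett's code: the toy 3D face model, landmark split, camera, and
# threshold below are all illustrative assumptions.
import numpy as np
import cv2

# Generic 3D landmark positions (mm). The first six are the "central"
# face region; the last three lie on the outer contour of the head.
FACE_MODEL = np.array([
    [0.0, 0.0, 0.0],        # nose tip
    [-15.0, -30.0, -25.0],  # left inner eye corner
    [15.0, -30.0, -25.0],   # right inner eye corner
    [-25.0, 30.0, -25.0],   # left mouth corner
    [25.0, 30.0, -25.0],    # right mouth corner
    [0.0, 15.0, -10.0],     # base of the nose
    [-65.0, 0.0, -60.0],    # left jaw contour
    [65.0, 0.0, -60.0],     # right jaw contour
    [0.0, 70.0, -35.0],     # chin
])
CENTRAL = slice(0, 6)
CAMERA = np.array([[800.0, 0.0, 320.0],
                   [0.0, 800.0, 240.0],
                   [0.0, 0.0, 1.0]])
DIST = np.zeros(4)  # assume an undistorted lens

def head_pose(pts_3d, pts_2d):
    """One 3D head-pose (rotation) estimate from 3D-2D correspondences."""
    ok, rvec, _ = cv2.solvePnP(pts_3d, pts_2d, CAMERA, DIST,
                               flags=cv2.SOLVEPNP_EPNP)
    return rvec

def inconsistency(landmarks_2d):
    """Angle (radians) between the central-only and whole-face poses."""
    r_central = head_pose(FACE_MODEL[CENTRAL], landmarks_2d[CENTRAL])
    r_full = head_pose(FACE_MODEL, landmarks_2d)
    R1, _ = cv2.Rodrigues(r_central)
    R2, _ = cv2.Rodrigues(r_full)
    rel, _ = cv2.Rodrigues(R1.T @ R2)  # relative rotation between them
    return float(np.linalg.norm(rel))

def looks_manipulated(landmarks_2d, threshold=0.1):
    return inconsistency(landmarks_2d) > threshold

# Demo on synthetic landmarks: project the model at one true head pose...
true_r = np.array([0.1, 0.3, 0.0])
true_t = np.array([0.0, 0.0, 600.0])
real, _ = cv2.projectPoints(FACE_MODEL, true_r, true_t, CAMERA, DIST)
real = real.reshape(-1, 2)

# ...then fake a face swap by re-rendering only the central region at a
# different pose, as if the inner face came from another video.
donor_r = np.array([0.0, -0.3, 0.1])
swapped, _ = cv2.projectPoints(FACE_MODEL[CENTRAL], donor_r, true_t,
                               CAMERA, DIST)
fake = real.copy()
fake[CENTRAL] = swapped.reshape(-1, 2)

print(looks_manipulated(real), looks_manipulated(fake))  # expected: False True
```

In the real analytic, the pose pair is distilled into a feature vector and handed to a trained classifier rather than a single hand-picked threshold, which is what Lutz and Bassett mean by “using various features derived from these head poses.”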

All these estimations allow the analytic to detect deepfakes with “high accuracy,” according to Lutz and Bassett. However, “high accuracy” is not explicitly quantified, and it certainly does not mean 100% accuracy. There is still a chance that a malicious deepfake will slip through and be published online. Worse, a deepfake that slips past the analytic is likely to be convincing to human viewers as well.

The capabilities of the analytic are not going to be enough. Matthew Bodi cites “deepfake pioneer” Dr. Hao Li as saying in 2019 that “deepfakes that appear ‘perfectly real’ will be accessible to everyday people in ‘six months to a year.’” That window has passed, and deepfakes haven’t quite reached the point Li predicted, but Bodi offers a more concrete and worrying statistic: “between 2018 and 2019, the number of deepfake videos on the internet doubled.” If deepfake production keeps growing that rapidly, and if deepfakes eventually become as accessible as Li expected, then social media platforms will be overwhelmed. The analytic and any other detection methods will be unable to keep up as deepfakes become more convincing and more accessible.

A solid solution to the malicious use of deepfakes needs to be implemented. As of now, regulation happens only on a small scale, with each affected platform left to do its own policing. While it’s good that the companies most worried about and affected by malicious deepfakes are attempting to prevent them, it shouldn’t be their sole responsibility. The ability to create a deepfake is currently available—in some cases for free—to anyone who wants it. The free tools, for the most part, produce deepfakes of low enough quality that they aren’t convincing, and their output almost always includes a watermark that gives it away; but, again, Li predicts that easily accessible deepfakes will progress to the point of being “perfectly real.” Paid tools are the greater concern and need some form of regulation as well; perhaps some sort of certification should be required before licenses to these programs can be purchased. Such limitations won’t entirely solve the problem of malicious deepfakes, but they would at least slow things down and give the deepfake detection algorithms—and the people making them—some breathing room.

Works Cited

Bodi, Matthew. “The First Amendment Implications of Regulating Political Deepfakes.” Rutgers Computer & Technology Law Journal, vol. 47, no. 1, Jan. 2021, pp. 143–172. EBSCOhost, search-ebscohost-com.libprox1.slcc.edu/login.aspx?direct=true&db=lgh&AN=148437014&site=eds-live&scope=site.

Campbell, Colin, et al. “Preparing for an Era of Deepfakes and AI-Generated Ads: A Framework for Understanding Responses to Manipulated Advertising.” Journal of Advertising, Apr. 2021, pp. 1–17. EBSCOhost, doi:10.1080/00913367.2021.1909515.

Lutz, Kevin, and Robert Bassett. DeepFake Detection with Inconsistent Head Poses: Reproducibility and Analysis. 2021. EBSCOhost, search-ebscohost-com.libprox1.slcc.edu/login.aspx?direct=true&db=edsarx&AN=edsarx.2108.12715&site=eds-live&scope=site.

Mirsky, Yisroel, and Wenke Lee. “The Creation and Detection of Deepfakes: A Survey.” ACM Computing Surveys, vol. 54, no. 1, Jan. 2021, pp. 1–41. EBSCOhost, doi:10.1145/3425780.

Wilkerson, Lindsey. “Still Waters Run Deep(Fakes): The Rising Concerns of ‘Deepfake’ Technology and Its Influence on Democracy and the First Amendment.” Missouri Law Review, vol. 86, no. 1, Winter 2021, pp. 407–432. EBSCOhost, search-ebscohost-com.libprox1.slcc.edu/login.aspx?direct=true&db=asn&AN=150897210&site=eds-live&scope=site.
