UC Berkeley professor influences Facebook’s efforts to combat deepfakes
Facebook recently announced that it will ban misleading deepfake videos, nearly eight months after the tech giant approached researchers at UC Berkeley and other universities for help in detecting such content on its platforms.
Concern over deepfakes has exploded in recent years as advances in artificial intelligence allow people to manipulate videos with increasingly realistic fake images and audio.
Hany Farid, a Berkeley professor of electrical engineering and computer sciences, was one of the researchers Facebook approached last year. The company ultimately invested $7.5 million with Berkeley, Cornell University and the University of Maryland to develop technology for spotting deepfakes.
In a brief interview, Farid, who has a joint appointment at the School of Information, said manipulated videos, which often portray politicians and celebrities saying or doing things they didn’t do, pose a serious threat to society.
“The videos are clearly designed to be misleading and harmful to the individuals/political parties,” Farid wrote in an email. “I believe that these types of fraudulent videos should be banned because the harm outweighs any possible benefit.”
Deepfakes are created with artificial intelligence, specifically machine learning algorithms trained to recognize patterns in images. Using the face of one celebrity as a reference point, a computer can construct the image of another celebrity from the numerous photos of that person available online. The synthetic face can then mimic the gestures, movements and expressions of the original.
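For readers curious about the mechanics, the face-swap technique described above is commonly built on a shared encoder paired with one decoder per identity. The sketch below is a hypothetical toy model, not a description of any system Facebook or the Berkeley researchers actually use; it shows the idea that because both decoders read from the same latent space, encoding a frame of person A and decoding it with person B's decoder yields B's face wearing A's expression.

```python
# Illustrative sketch of the shared-encoder / per-identity-decoder
# design behind many early deepfake tools. Untrained toy model --
# the layer sizes are chosen for readability, not output quality.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a compact latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific identity's face from the latent vector."""
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder, one decoder per identity. Training (not shown)
# would teach each decoder to reconstruct its own person's photos.
encoder = Encoder()
decoder_a = Decoder()  # would be trained on photos of person A
decoder_b = Decoder()  # would be trained on photos of person B

# The swap: encode a frame of person A, decode with B's decoder.
# With trained weights, this yields B's face wearing A's expression.
frame_of_a = torch.rand(1, 3, 64, 64)  # stand-in for a real video frame
fake_b = decoder_b(encoder(frame_of_a))
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```

In real tools, training on thousands of photos of each person, repeated frame by frame across a whole video, is what makes the swapped output convincing.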
But while Facebook will remove misleading deepfakes, the company will still allow doctored videos made with less advanced technology. Under these guidelines, a crudely edited video that recently spread through social media, slowed down to make House Speaker Nancy Pelosi appear drunk, would make the cut.
Farid, who also advised Facebook on this issue, is highly critical of the policy. Whether using high-tech or low-tech means, the intent of these videos is to spread disinformation, he said.
In a blog post, Monika Bickert, Facebook’s vice president of global policy management, argued that allowing these videos to remain on the platform is a matter of education and transparency.
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem,” Bickert said. “By leaving them up and labeling them as false, we’re providing people with important information and context.”
Facebook has long tried to walk a fine line between curbing internet trolling and protecting legitimate speech.
For example, Facebook has worked hard to dismantle influence operations from foreign countries like Russia and Iran that attempt to spread misinformation and sway elections. But CEO Mark Zuckerberg has said the company will continue to run American political ads that are demonstrably false.
“I don’t think it’s right for private companies to censor politicians and the news,” he said during a recent conference call with investors.
Farid, however, noted that the company already decides what information to keep and what to exclude, especially content that might damage its business reputation and interests.
“Facebook routinely bans protected speech like legal adult pornography,” Farid said. “Nobody is jumping up and down on their heads claiming that they are censoring speech. They ban legal pornography because it is bad for business — advertisers don’t want their products advertised against this material.”
Whether a video is a deepfake or not, or originates from Russia or the United States, the problem — and remedy — is still the same, Farid said.
“Surely we can agree that misleading or fraudulent content meant to disrupt democratic elections or sow civil unrest can be safely removed without violating the spirit of an open and free internet and exchange of ideas,” he said.