Hill Calls for Social Media Standards from Facebook, Reddit, Others on Combating Deep Fakes

A bipartisan duo of powerful senators has fired off letters to a host of social media sites calling on them to crack down on "deep fakes," given their possible use in election disinformation campaigns and the growing reliance on social media as a news source.


In letters to Facebook, Twitter, YouTube, Reddit, LinkedIn, Tumblr, Snapchat, Imgur, TikTok, Pinterest, and Twitch, Sens. Mark Warner (D-Va.) and Marco Rubio (R-Fla.) called deep fakes--sophisticated, altered audio or video files--a growing threat and asked the companies to "develop industry standards for sharing, removing, archiving, and confronting the sharing of synthetic content as soon as possible," particularly given potential foreign intervention in the upcoming 2020 election. Warner is ranking member of the Senate Intelligence Committee and one of the leading figures in its investigation of Russian election interference.

They pointed out that such fakes can create "false and defamatory content" easily shared and amplified by social media.

The senators' patience is clearly wearing thin. They said there had not been sufficient progress in coming up with such standards, "despite numerous conversations, meetings, and public testimony acknowledging your responsibilities to the public."

“As concerning as deep fakes and other multimedia manipulation techniques are for the subjects whose actions are falsely portrayed, deep fakes pose an especially grave threat to the public’s trust in the information it consumes; particularly images, and video and audio recordings posted online,” they wrote. “If the public can no longer trust recorded events or images, it will have a corrosive impact on our democracy.”

The senators set no deadline, but asked the companies to answer the following questions:

1. "What is your company’s current policy regarding whether users can post intentionally misleading, synthetic or fabricated media?

2. "Does your company currently have the technical ability to detect intentionally misleading or fabricated media, such as deep fakes? If so, how do you archive this problematic content for better re-identification in the future?

3. "Will your company make available archived fabricated media to qualified outside researchers working to develop new methods of tracking and identifying such content? If so, what partnerships does your company currently have in place? Will your company maintain a separate, publicly accessible archive for this content?

4. "If the victim of a possible deep fake informs you that a recording is intentionally misleading or fabricated, how will your company adjudicate those claims or notify other potential victims?

5. "If your company determines that a media file hosted by your company is intentionally misleading or fabricated, how will you make clear to users that you have either removed or replaced that problematic content?

6. "Given that deep fakes may attract views that could drive algorithmic promotion, how will your company and its algorithms respond to, and downplay, deep fakes posted on your platform?

7. "What is your company’s policy for dealing with the posting and promotion of media content that is wholly fabricated, such as untrue articles posing as real news, in an effort to mislead the public?"

John Eggerton

Contributing editor John Eggerton has been an editor and/or writer on media regulation, legislation and policy for over four decades, including covering the FCC, FTC, Congress, the major media trade associations, and the federal courts. In addition to Multichannel News and Broadcasting + Cable, his work has appeared in Radio World, TV Technology, TV Fax, This Week in Consumer Electronics, Variety and the Encyclopedia Britannica.