YouTube, Twitter Hunt Down Deepfakes

YouTube and Twitter are clamping down on synthetic and manipulated media, including deepfakes. Deepfakes are media (images, audio, video, etc.) synthetically generated through artificial intelligence and machine learning (AI/ML); they have been exploited in adult videos and propaganda that use the faces and voices of unwitting celebrities, politicians, and other well-known figures.

YouTube has an existing ban on manipulated media under the spam, deceptive practices, and scams policies of its Community Guidelines. In an entry titled “How YouTube supports elections” published on its official blog, YouTube zeroes in on misleading U.S. election-related content, such as digitally manipulated videos that can deceive users and spread false information; this category includes deepfakes. Videos bearing false information about candidates or the voting process, whether manipulated or not, are also disallowed.

Twitter’s update to its rules on synthetic and manipulated media, which include deepfakes, did not specifically mention the upcoming U.S. elections. It simply announced a ban on synthetic or manipulated media that are “likely to cause harm.” The rule was reportedly based on a survey Twitter conducted late last year to solicit public opinion on how to formulate rules on such media.

Earlier this year, Facebook also enforced a ban on manipulated media such as AI-generated content and deepfakes. Facebook co-founder Mark Zuckerberg has himself been the subject of a deepfake video that circulated online.


Deepfakes as cybersecurity threats

Deepfakes can be used to damage an individual’s reputation and credibility or to spread misleading information, as convenient access to technology makes the production and consumption of fake news and cyber propaganda faster and easier than ever. As if that weren’t enough, Trend Micro has predicted that deepfakes will also increasingly be used in fraud. Just last year, cybercriminals used deepfaked audio of a chief executive’s voice to steal US$243,000 from a U.K.-based energy company.

With manipulated media everywhere, how can people identify deepfakes? For videos, Wired.com interviewed Sabah Jassim, professor of mathematics and computation at the University of Buckingham, and Bill Posters, co-creator of Spectre, who recommend looking out for the following:
  • Rate of blinking; subjects in deepfakes blink less than a person normally would (see the sketch after this list)
  • Syncing of the speech and lip movement
  • Emotion mismatch
  • Blurring marks, dropped frames, or discoloration
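
To make the first sign above more concrete, below is a minimal, illustrative sketch of automated blink counting in Python, assuming the OpenCV and MediaPipe libraries are installed. It estimates a video's blinks per minute from the eye aspect ratio (EAR) of detected face landmarks. The file name suspect.mp4, the EAR threshold, and the landmark indices are assumptions chosen for illustration, and a low blink rate is only one weak signal, not proof of a deepfake.

# Minimal sketch (not a Trend Micro tool): estimate blink rate from a video
# with MediaPipe FaceMesh. "suspect.mp4", the EAR threshold, and the landmark
# indices are illustrative assumptions; verify the indices against the FaceMesh
# topology of your MediaPipe version.
import math

import cv2
import mediapipe as mp

# Six landmarks around one eye, commonly used to compute the eye aspect ratio
# (EAR): the two corners plus two points on each eyelid.
EYE = [33, 160, 158, 133, 153, 144]
EAR_THRESHOLD = 0.21  # eye treated as closed below this ratio (assumed value)

def eye_aspect_ratio(pts):
    # EAR = (vertical gap 1 + vertical gap 2) / (2 * horizontal gap)
    return (math.dist(pts[1], pts[5]) + math.dist(pts[2], pts[4])) / \
           (2.0 * math.dist(pts[0], pts[3]))

def blinks_per_minute(path):
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, eye_closed, frames = 0, False, 0
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames += 1
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not result.multi_face_landmarks:
                continue  # no face detected in this frame
            lm = result.multi_face_landmarks[0].landmark
            h, w = frame.shape[:2]
            pts = [(lm[i].x * w, lm[i].y * h) for i in EYE]
            ear = eye_aspect_ratio(pts)
            # Count a blink on the transition from open to closed.
            if ear < EAR_THRESHOLD and not eye_closed:
                blinks, eye_closed = blinks + 1, True
            elif ear >= EAR_THRESHOLD:
                eye_closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

if __name__ == "__main__":
    # People at rest typically blink roughly 15-20 times per minute; a much
    # lower rate in a talking-head clip is one possible (but weak) warning sign.
    print(f"Estimated blinks per minute: {blinks_per_minute('suspect.mp4'):.1f}")

A talking-head clip that registers far fewer blinks per minute than the roughly 15 to 20 typical of a person at rest may merit closer scrutiny alongside the other signs listed above.
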
As deepfakes could be used in new variations of CEO fraud and in conjunction with tried-and-tested Business Email Compromise (BEC) tactics, users are advised to combine their awareness of deepfakes with the basics of avoiding BEC. Steps users can take include double-checking the details mentioned in the email, phone call, or video, such as the bank account details or the email address or phone number used, and verifying fund transfer requests with other concerned parties before proceeding.

In addition, users can expect the quality of deepfakes to improve over time, eventually eliminating the telltale signs currently used to spot them. To combat this, organizations are advised to regularly train and update employees on emerging social engineering attacks, including developments related to deepfakes.

Additional protection against BECs and other threats is also offered under the Trend Micro™ Smart Protection Suites.

Published in Cybercrime & Digital Threats