{"id":1487,"date":"2020-02-06T19:23:20","date_gmt":"2020-02-07T03:23:20","guid":{"rendered":"https:\/\/internetsafety.trendmicro.com\/?p=1487"},"modified":"2020-06-04T02:04:13","modified_gmt":"2020-06-04T10:04:13","slug":"taking-a-stand-against-deepfakes","status":"publish","type":"post","link":"https:\/\/www.trendmicro.com\/internet-safety\/blog\/taking-a-stand-against-deepfakes\/","title":{"rendered":"Taking a Stand Against the Reality of Deepfakes"},"content":{"rendered":"<p>by Lynette Owens<\/p>\n<p>What percent of the content on the internet do you believe is true? \u00a0While there is certainly lots of reliable information online, it\u2019s getting increasingly difficult to tell the genuine from the rumor-mill, the real from the fake news. And it may be getting harder. Why? Because of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Deepfake\">deepfakes<\/a>: highly convincing, AI-powered video and audio clips that could quite literally put words in the mouth of someone you know.<\/p>\n<p>This is a big problem \u2014 for our society and our democracy. \u00a0In fact, <a href=\"https:\/\/ethics.house.gov\/sites\/ethics.house.gov\/files\/wysiwyg_uploaded\/Deep%20Fakes%20Pink%20Sheet%20Guidance-Final.pdf\">U.S. lawmakers were recently warned<\/a> that if they shared deepfakes with the public they could be in violation of ethics rules.<\/p>\n<p>But knowledge is power. 
If we practice being more critical of what we see online, understand the potential harm of sharing faked footage, and teach our kids to do the same, we can all do our part to create a stronger, safer internet.<\/p>\n<p><strong>How do deepfakes work?<\/strong><\/p>\n<p>Deepfakes are so called because they <a href=\"https:\/\/www.cnbc.com\/2019\/10\/14\/what-is-deepfake-and-how-it-might-be-dangerous.html\">use deep learning<\/a>, a type of artificial intelligence, to create spoofed video and audio clips that are difficult to tell from the real thing.<\/p>\n<p>To generate a deepfake video, the technology separately learns how to encode and decode two different faces \u2014 say, one of a famous person speaking and one of a different person saying something completely different and perhaps controversial. Having learned how to break down and reconstruct each face, it can meld the first with the second, so the original person\u2019s facial expressions appear to mimic the second person\u2019s. The same technique can also superimpose an entirely different face onto the person being targeted.<\/p>\n<p><strong>Giant steps<\/strong><\/p>\n<p>The technology is not quite there yet, making it fairly easy to spot most deepfakes. But advances are arriving rapidly, especially in <a href=\"https:\/\/www.theverge.com\/2019\/6\/10\/18659432\/deepfake-ai-fakes-tech-edit-video-by-typing-new-words\">making small changes<\/a> to the audio, which could significantly alter a video\u2019s core message. 
What\u2019s more, reporters have shown that basic deepfakes are <a href=\"https:\/\/arstechnica.com\/science\/2019\/12\/how-i-created-a-deepfake-of-mark-zuckerberg-and-star-treks-data\/\">already within the reach<\/a> of everyday people, for little to no money and with a bit of tech know-how.<\/p>\n<p>While this technology has possible positive uses \u2013 in movie production, for example, it could spare a studio from re-filming a scene \u2013 it\u2019s already being used negatively, such as to create adult content using the faces of celebrities who have not given their permission. With this in mind, there are serious concerns that it could regularly be used to swing elections, crash markets, ruin careers, and enable even worse crimes.<\/p>\n<p><strong>Time to call out the fakers<\/strong><\/p>\n<p>It\u2019s good to see the issue of deepfakes being taken seriously by lawmakers and technology companies. <a href=\"https:\/\/www.theverge.com\/2020\/1\/7\/21055283\/facebook-deepfake-ban-political-ads-shallowfakes-rules-moderation\">Facebook<\/a>, <a href=\"https:\/\/youtube.googleblog.com\/2020\/02\/how-youtube-supports-elections.html\">YouTube<\/a>, <a href=\"https:\/\/t.co\/mRlmMruMmW?amp=1\">Twitter<\/a>, and the <a href=\"https:\/\/www.theguardian.com\/us-news\/2019\/oct\/07\/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce\">state of California<\/a> have recently sought to ban the distribution of such content, but it remains to be seen how well those efforts can be enforced. A possible solution would be to require videos to be digitally watermarked and signed, which could help the average person verify the originator of the content. Hopefully we\u2019ll soon have a reliable technical way to flag deepfakes before they are posted, or at least to warn us quickly. 
But until those solutions are viable, our best recourse is to be vigilant and to help others do the same.<\/p>\n<p>Whenever you come across a video in your social feed or anywhere else online, practice these three things:<\/p>\n<ol>\n<li><strong>Stop.<\/strong> Don\u2019t immediately believe, react to, share, or comment on a video that seems suspicious in any way.<\/li>\n<li><strong>Question.<\/strong> Where did the video originally come from? Does the person in it seem out of character? Why is the person or organization sharing it online?<\/li>\n<li><strong>Report.<\/strong> Whenever you see anything suspicious online, ignoring it is always an option. But if you\u2019re really concerned and believe it might be a deepfake, report it to the site or app where you saw it. While YouTube, Facebook, and Twitter are trying to remove deepfakes on their own, we as a community can help by flagging them, too.<\/li>\n<\/ol>\n<p>Deepfakes are a real technology that, like any other, has the potential to benefit or hurt us. We can all reduce that risk by understanding what deepfakes are and by taking the time to stop, question, and report them when we can. And let\u2019s pass that skill on to our kids; it\u2019s good practice for almost anything we see online that might do harm to others. Deepfakes may be the newest technology people are investing in, but they could carry a steep cost to society in the wrong hands. Taking action as individuals is something we can do now. All it costs us is our time, and that\u2019s a reality we should all be able to live with.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What percent of the content on the internet do you believe is true? 
\u00a0While there is certainly lots of reliable information online, it\u2019s getting increasingly difficult to tell the genuine from the rumor-mill, the real from the fake news. And it may be getting harder. Why? Because of deepfakes: highly convincing, AI-powered video and audio clips that could quite literally put words in the mouth of someone you know.<\/p>\n<p>This is a big problem for our society.  While technology companies have recently taken a stand against it and others are developing new tools to fight it, there are things you and I can do today to stand up to the problems caused by deepfakes.<\/p>\n","protected":false},"author":2,"featured_media":1488,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","om_disable_all_campaigns":false,"footnotes":""},"categories":[3,4],"tags":[171,8,47,5,48,10,88,172],"class_list":["post-1487","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-for-parents","category-for-teachers","tag-deepfakes","tag-digital-citizenship","tag-digital-literacy","tag-internet-safety","tag-media-literacy","tag-online-safety","tag-social-media","tag-video","wpautop"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/posts\/1487","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/comments?post=1487"}],"version-history":[{"count":0,"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/posts\/1487\/revisions"}],"wp:feat
uredmedia":[{"embeddable":true,"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/media\/1488"}],"wp:attachment":[{"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/media?parent=1487"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/categories?post=1487"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.trendmicro.com\/internet-safety\/wp-json\/wp\/v2\/tags?post=1487"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}