by Lynette Owens

What percentage of the content on the internet do you believe is true? While there is certainly plenty of reliable information online, it is getting harder to tell the genuine from the rumor mill, the real from the fake news. And it may soon get harder still. Why? Because of deepfakes: highly convincing, AI-powered video and audio clips that can quite literally put words in the mouth of someone you know.

This is a big problem for our society and our democracy. In fact, U.S. lawmakers were recently warned that sharing deepfakes with the public could put them in violation of ethics rules.

But knowledge is power. If we practice viewing what we see online more critically, understand the potential harm of sharing faked footage, and teach our kids to do the same, we can all do our part to create a stronger, safer internet.

How do deepfakes work?

Deepfakes are so called because they use deep learning, a type of artificial intelligence, to create spoofed video and audio clips that are difficult to distinguish from the real thing.

To generate a deepfake video, the technology learns how to encode and decode two different faces: say, one of a famous person speaking and one of a different person saying something completely different and perhaps controversial. An encoder learns to break each face down into its essential features (pose, expression, lighting), while a separate decoder learns to reconstruct each person’s face from those features. Encode footage of one person, decode it with the other person’s decoder, and the target’s face appears to mimic the original speaker’s every expression. The same technique can be used to superimpose yet another face altogether onto the person being targeted for the deepfake.
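For the technically curious, here is a minimal sketch of that shared-encoder, two-decoder idea, written in PyTorch. It illustrates the general approach only, not any particular deepfake tool; all of the class names, layer sizes, and placeholder data below are invented for the example.

```python
# A rough sketch of the shared-encoder, two-decoder "face swap" idea.
# Everything here (class names, layer sizes, placeholder data) is
# invented for illustration; real tools use much deeper networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face image into a small latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),  # the latent "essence" of the face
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds one specific person's face from a latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()    # shared: learns pose, expression, lighting
decoder_a = Decoder()  # trained only to render person A's face
decoder_b = Decoder()  # trained only to render person B's face

# Placeholder batches of aligned face crops (real training uses
# thousands of frames of each person).
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

# Training objective: each decoder reconstructs its own person.
loss_fn = nn.MSELoss()
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
        loss_fn(decoder_b(encoder(faces_b)), faces_b))
# (A real training loop would repeat this with loss.backward()
# and an optimizer step many thousands of times.)

# The swap: encode person A's footage, decode with B's decoder.
with torch.no_grad():
    fake = decoder_b(encoder(faces_a))  # B's face, A's expressions
```

The key trick is in that last line: because the encoder is shared, the latent code captures expression and pose without the identity, so either decoder can render it as “its” person.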

Giant steps

The technology is not quite there yet, so most deepfakes are still fairly easy to spot. But advances are arriving rapidly, especially in making small changes to the audio that can significantly alter a video’s core message. What’s more, reporters have shown that basic deepfakes are already within reach of everyday people, for little or no money and a bit of tech know-how.

While there are potentially positive uses for this technology, such as in movie production, where deepfake techniques could spare a studio from re-filming a scene, it is already being used in harmful ways, such as creating adult content using the faces of celebrities who never gave their permission. With this in mind, there are serious concerns that the technology could be used to swing elections, crash markets, ruin careers, and enable even worse crimes.

Time to call out the fakers

It’s good to see the issue of deepfakes being taken seriously by lawmakers and technology companies. Facebook, YouTube, Twitter, and the state of California have recently sought to ban the distribution of such content, but it remains to be seen how well those efforts can be enforced. A possible solution might be to require videos to be digitally watermarked and signed, which could help the average person validate the originator of the content. Hopefully we’ll someday have a technically reliable way of flagging deepfakes before they get posted, or at least one that warns us quickly after the fact. But until such solutions are viable, our best recourse is to stay vigilant and to help others do the same.
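To make the signing idea concrete, here is a bare-bones sketch in Python of how a publisher could sign a video file and a viewer could verify it, using the widely available cryptography library. The function names are invented for this example, and real content-provenance proposals are considerably more involved.

```python
# A bare-bones illustration of signing a video file so viewers can
# verify who published it. The function names are invented for this
# example; real content-provenance schemes embed signatures and
# metadata in the file itself.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair once; the public key would be
# shared openly (for example, on the publisher's website).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_video(path: str) -> bytes:
    """Publisher side: hash the video file and sign the digest."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)

def verify_video(path: str, signature: bytes) -> bool:
    """Viewer side: recompute the hash and check the signature."""
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)
        return True   # unchanged since the publisher signed it
    except InvalidSignature:
        return False  # altered, or signed with a different key
```

The point of the design is that changing even a single frame of the file changes its hash and breaks the signature, so a valid signature ties the video to whoever holds the signing key.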

Whenever you come across a video in your social network feed or anywhere else online, it’s a good time to practice these three things:

  1. Stop. Don’t immediately believe, react to, share, or comment on a video if it seems suspicious in any way.
  2. Question. Where did the video originally come from? Does the person in it seem out of character? Why is the person or organization sharing it online?
  3. Report. Whenever you see anything suspicious online, ignoring it is always an option. But if you’re really concerned and believe it might be a deepfake, report it to the site or app where you saw it. While YouTube, Facebook, and Twitter are working to remove deepfakes on their own, we as a community can help by flagging them, too.


Deepfakes are a real technology that, like any other, has the potential to help or to hurt us. We can all reduce the risk of harm by understanding what deepfakes are and by taking the time to stop, question, and report them when we can. And let’s pass that skill on to our kids; it’s good practice for almost anything we see online that might harm others. Deepfakes may be the newest technology people are investing in, but in the wrong hands they could exact a far greater cost on society. Taking action as individuals is something we can do now. All it costs us is our time, and that’s a reality we should all be able to live with.

Lynette Owens

Lynette Owens is Vice President of Global Consumer Education & Marketing at Trend Micro and Founder of the Internet Safety for Kids and Families program. With 25+ years in the tech industry, Lynette speaks and blogs regularly on how to help kids become great digital citizens. She works with communities and 1:1 school districts across the U.S. and around the world to support online safety, digital and media literacy and digital citizenship education. She is a board member of the National Association for Media Literacy Education, an advisory committee member of the Digital Wellness Lab, and serves on the advisory boards of INHOPE and U.S. Safer Internet Day.

Follow her on Twitter @lynettetowens.