June 17, 2024

Deepfake Videos Trick Us into Creating False Memories, New Study Finds

In 2005, Will Smith starred in a remake of The Matrix. It made a small splash at the box office before quietly falling into relative obscurity. You might even remember it – which would be strange, because I made that movie up entirely.

And yet, just from reading those few sentences, there’s a strong chance that a good number of you believed it. Not only that, but some of you may even have thought to yourself, “Yes, I do remember Will Smith starring in a remake of The Matrix,” even though that movie never existed in the first place.

This is a false memory, a psychological phenomenon in which you remember things that never happened. If the story above about Will Smith in The Matrix didn’t fool you, don’t worry: you probably already have plenty of false memories going back to your childhood. That’s not a knock on you – it’s only human. We tell ourselves stories, and sometimes those stories are told and retold so often that they morph into something that bears little resemblance to the original event.

Even small nudges can change your memory. In the 1970s, a study found that if you asked a witness to a car accident how fast one car was going when it “smashed into” another, they would remember it as going much faster than it probably was.

However, this very human phenomenon can easily be weaponized against us to spread misinformation and cause real-world harm. With the proliferation of AI tools like deepfake technology, there are even fears that it could be used on a mass scale to manipulate elections and push false narratives.

That concern is at the heart of a study published on July 6 in PLOS One, which found that deepfaked clips of movie remakes that don’t actually exist caused participants to falsely remember them. Some viewers even considered the fake remakes better than the originals – underscoring the unsettling power of deepfake technology to manipulate memory.

However, there is one silver lining: the study’s authors also found that simple text descriptions of the fake remakes were just as effective at eliciting false memories in participants. On its own that sounds like a bad thing – and it is! But the finding suggests that AI deepfakes may be no more effective at spreading misinformation than less technologically sophisticated methods. What we end up with is a complex picture of the harms the technology can cause – certainly to be feared, but also subject to its own limits.

“We shouldn’t jump to predictions of a dystopian future based on our fears about emerging technologies,” lead study author Gillian Murphy, a misinformation researcher at University College Cork in Ireland, told the Daily Beast. “Yes, deepfake harms are real, but we should always gather evidence of those harms in the first instance, before rushing to solve problems we’ve assumed might exist.”

For the study, the authors recruited a group of 436 people to watch clips of various deepfaked videos that they were told were remakes of real movies. These included Brad Pitt and Angelina Jolie in The Shining, Chris Pratt in Indiana Jones, Charlize Theron in Captain Marvel, and – of course – Will Smith in The Matrix. Participants also watched clips from actual remakes, including Carrie, Total Recall, and Charlie and the Chocolate Factory. Meanwhile, some of the participants were given a text description of a fake remake instead of a video.

The researchers found that, on average, 49 percent of the participants believed the deepfaked videos were real. Of this group, quite a few said the remake was better than the original: 41 percent said the Captain Marvel remake was better than the original, and 12 percent said the same of The Matrix remake.

However, the results also showed that when participants were given a text description of the fake remake, it performed as well as – and sometimes better than – the deepfaked video. This suggests that existing, lower-tech methods of spreading disinformation and distorting reality, such as fake news articles, could be just as effective as using AI.

“Our findings are not especially worrying, as they don’t suggest any uniquely powerful threat from deepfakes over and above existing forms of misinformation,” Murphy explained. However, she noted that the study only looked at short-term memory. “Deepfakes may yet be a more powerful vehicle for spreading misinformation because, for example, they’re more likely to go viral or be more memorable in the long term.”

This speaks to a broader issue at the root of misinformation: motivated reasoning, or the way people let their biases shape how they perceive information. For example, if you believe the 2020 election was stolen, you’re more likely to believe a deepfaked video of someone stuffing ballot boxes than someone who believes it wasn’t.

“This is the big problem with disinformation,” Christopher Schwartz, a cybersecurity and disinformation researcher at the Rochester Institute of Technology who was not involved in the study, told the Daily Beast. “More than the quality of the information and the quality of the sources, the problem is that when people want something to be true, they will try to make it true.”

Motivated reasoning is a big part of why our current cultural and political landscape is the way it is, according to Schwartz. While people may not necessarily be swayed by a single deepfake or fake news story, they may be more inclined to seek out articles and opinions that confirm their worldview. We retreat into our own digital and social bubbles, where we see and hear the same ideas ad nauseam until we’re convinced that what we believe is the only thing that’s true.

Such a climate becomes fertile ground, then, for something like an AI deepfake of Donald Trump being arrested or the Pentagon on fire to take root and spiral out of control. Sure, it may seem ridiculous – but when it confirms what we already believe to be true, our brains will make it as true as we need it to be.

Luckily, there is some hope. Murphy said the study’s participants largely agreed that the use of AI to recast characters in films raised concerns about the potential for exploiting the actors involved. This suggests that people may be averse to AI and deepfakes in general, which may discourage their use in future media.

“They didn’t want to see a remake of a movie where the performers weren’t adequately compensated, or didn’t consent to being included in the project,” she said. “They considered such a process ‘inauthentic’ and ‘artistically bankrupt.’”

As for misinformation, Schwartz said the solution lies in two of the most contentious issues in American society today: education and the media. First, people need to be technologically literate enough to know how to spot a deepfake. Second, they must be willing to challenge their assumptions about images, articles, or other media that confirm their beliefs. This can and should start in classrooms, where people can engage with these issues early and often.

Likewise, newsrooms and media organizations have a responsibility not only to inform the public about the dangers of AI deepfakes, but also to call out misinformation when it occurs.

“These articles raise awareness, which will have a kind of inoculating effect against [AI misinformation],” Schwartz said. “People in general will be on guard, and they will know that they are being targeted.”

So consider this your warning: We’re in an age where generative AI that creates text, images, and videos is growing at an exponential rate. These tools will only become more powerful and popular with each passing day. They can be used against you – and they already have been.

That can be scary, and it’s definitely confusing. But knowing these tools are out there is the first step to figuring out what’s real and what’s fake – and that could make a difference.

“[We have] the tools we need to fight things like this,” Schwartz said. “God has given us the ability to reason. We have to use it and we can use it.”
