Deepfakes: a double-edged sword of Artificial Intelligence?
Deepfakes can be funny, but they are also dangerous. We have summarised the three categories of indicators that let you recognize them at a glance and win the battle against them.
Deepfakes: what is hidden inside?
We hear about deepfakes mainly because of the risks they entail, such as the spread of disinformation. But what exactly are they? And can those dangers be mitigated?
Spoiler: we are bound to live in a world where deepfakes keep getting better and harder to recognize. Today, however, let us learn how to identify them!
Deepfake combines ‘deep learning’ and ‘fake’. Deep learning is a branch of Artificial Intelligence research concerned with building neural network systems capable of processing complex data; ‘fake’ refers, of course, to the forged result.
The term originated in 2017 as the nickname of a Reddit user called ‘deepfakes’, who shared videos with his community in which he replaced the protagonists’ faces with those of celebrities, such as Nicolas Cage.
Generally, when we talk about deepfakes, we mean videos edited by AI that are nonetheless extremely realistic. But it is not only videos that can be altered: audio and images can also be turned into ‘deepfakes’, although these arouse less suspicion. The feature that makes fake videos so difficult to distinguish from real ones is movement.
How are they made, then?
AI analyses real video, audio, and images to learn what makes them up. It then builds models containing the information needed to recreate faces and voices. Once the AI has learnt how to generate a particular face, it superimposes it on a different face in an existing video. The result? A video very similar to the original, but with a different protagonist.
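To make this concrete, below is a minimal sketch, in PyTorch, of the shared-encoder/two-decoder setup behind the classic face-swap approach. Every detail here (the 64×64 input, the layer sizes, the helper names) is an illustrative assumption, not the code of any real tool.

```python
# A toy version of the shared-encoder / two-decoder face-swap idea.
# Assumed, for illustration only: 64x64 RGB face crops, flattened, and
# fully connected layers instead of the convolutions a real tool would use.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened face crop

def mlp(sizes):
    layers = []
    for i in range(len(sizes) - 1):
        layers += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the final activation

encoder   = mlp([IMG, 1024, 256])   # shared: compresses any face
decoder_a = mlp([256, 1024, IMG])   # learns to redraw person A
decoder_b = mlp([256, 1024, IMG])   # learns to redraw person B

params = [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person's faces."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(face_a):
    """The swap: encode A's frame, decode with B's decoder,
    so B's face appears with A's pose and expression."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

The key design choice is that the encoder is shared: because it must compress both people’s faces into the same feature space, it learns pose, expression, and lighting rather than identity, which is exactly what lets decoder B redraw person B over person A’s performance.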
Nowadays, anyone can create deepfakes through simple apps and without technical skills. What changes is the quality: deepfakes made this way are usually easy to spot.
On a professional level, however, far more sophisticated methods are used. Among them is the generative adversarial network (GAN), which produces excellent results. The amount of data required to generate a deepfake with a GAN is much higher, and the more data fed into the system, the higher the quality of the result.
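The adversarial idea can be sketched in a few lines: a generator forges images from random noise, a discriminator learns to tell real from fake, and each network improves by competing with the other. Again, the sizes and hyperparameters below are illustrative assumptions, not a production recipe.

```python
# Toy GAN: generator vs discriminator on flattened 64x64 RGB images.
# Sizes and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

LATENT, IMG = 100, 64 * 64 * 3

generator = nn.Sequential(
    nn.Linear(LATENT, 512), nn.ReLU(),
    nn.Linear(512, IMG), nn.Tanh(),        # fake image, values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),       # probability "this image is real"
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def gan_step(real_images):
    batch = real_images.size(0)
    fake = generator(torch.randn(batch, LATENT))

    # Discriminator: push real images toward label 1, fakes toward 0.
    opt_d.zero_grad()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) \
           + bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its fakes real.
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```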
Alongside the GAN, a second family of generators has become dominant: diffusion models. Rather than pitting two networks against each other, a diffusion model gradually adds noise to its training images and learns to reverse the process, generating new images by denoising pure random noise step by step.
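As a rough sketch of that idea, here is the forward (noising) half of a diffusion model, with a commonly used linear noise schedule; the denoising network that learns to reverse it is omitted for brevity.

```python
# Forward (noising) process of a diffusion model, DDPM-style.
# T and the beta range follow a common linear schedule; the denoiser
# that would be trained to reverse this is not shown.
import torch

T = 1000                                    # number of corruption steps
betas = torch.linspace(1e-4, 0.02, T)       # noise added at each step
alphas_bar = torch.cumprod(1.0 - betas, 0)  # fraction of signal left after t steps

def add_noise(x0, t):
    """Jump straight to step t of the corruption of a clean image x0."""
    noise = torch.randn_like(x0)
    xt = alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * noise
    return xt, noise  # the denoiser is trained to predict `noise` from (xt, t)
```

Generation then runs the learned denoiser backwards: start from pure noise and remove a little of it, step by step, until a brand-new image emerges.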
The two opposite sides of deepfakes: harmless and dangerous
The applications of deepfakes span many different areas. We can distinguish two main strands: the playful, harmless one and, as already mentioned, the dark, risky one. Let us start with the first.
Apart from academia, the first category includes deepfakes created for pure fun or to train one’s skills. One example that deserves attention is the TV series ‘Deep Fake Neighbour Wars’ by ITVX, the first TV series based on this technology.
The protagonists are celebrities living in the same neighbourhood, bickering and feuding with one another. It is styled as a reality show, where the public would expect to find ordinary people. Instead, the faces belong to actors, activists, influencers, and singers such as Kim Kardashian, Greta Thunberg, and Adele. In reality, none of these personalities took part in the series.
Source: ITVX
Wow effects at the drop of a hat? Deepfakes are used in several films to rejuvenate or age actors, or to ‘hide’ stunt doubles, avoiding hours and hours of make-up: scenes can be filmed directly without it, and the de-ageing is applied later, automatically.
Deepfakes can also give a voice back to those who have lost it. They can recreate the voices of people who have passed away, or provide a synthetic voice for those who have lost theirs through illness or accident. Thanks to these technologies, a person’s voice can be recorded and synthesized so that they can keep communicating.
This is the positive side of deepfakes, and it touches on one of the most delicate aspects of AI: a powerful technology that can also be misused.
Some use deepfakes for illicit and criminal purposes, such as pasting celebrities’ faces into pornographic scenes. One episode concerned Italian Prime Minister Giorgia Meloni: two Italians stand accused of creating deepfake porn videos bearing her face and uploading them to an adult site, where they stayed online for several months and drew millions of views. The Prime Minister has claimed damages, with the sum to be paid into the national fund supporting women victims of violence: she wants to contribute to protecting the victims of this kind of crime, who are most often women.
Finally, there are deepfakes touching national and international politics, which call for extreme care. Recently, for example, an AI-generated audio clip circulated in which Labour leader Keir Starmer appears to swear at his staff. It was released on X during his party’s conference: a direct attack on his image ahead of the British election campaign he is contesting.
There is a great deal of wariness around deepfakes, and once again AI tools are saddled with a negative reputation. Should we be afraid of them?
How to recognize deepfakes and thus protect yourself
It is not always easy to recognise deepfakes, but neither is it impossible. As mentioned, those made with consumer apps are quite recognizable, while those created by experts can be almost indistinguishable from real videos. However, some details can help us unmask them. They fall into three categories: visual, audio, and context indicators.
Among the visual indicators, we find:
- animation problems: unnatural facial movements, such as eyes that never blink or blink too often, or facial expressions that do not fit the context (a blink-rate check is sketched after these lists).
- resolution problems: such as blurred faces or oddly sharp edges.
- perspective problems: faces that do not appear to sit on the same plane as their surroundings.
The second category, audio indicators, includes:
- lip-synchronization problems: words that are missing or pronounced out of step with the lip movements.
- volume problems: voices too loud or too quiet for the context.
- quality problems: audible background noise or distortion.
Finally, context indicators include:
- inconsistencies: in the scenario the deepfake is set in. For example, a video of a politician making statements contrary to their known positions could be a deepfake.
- motivation: it is worth asking why someone might have created the video. If it aims to spread misinformation or damage someone’s reputation, it is more likely to be a deepfake.
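To show how one of these indicators can be checked programmatically, here is a small sketch of the well-known eye-aspect-ratio blink test mentioned under the visual indicators. It assumes you already have six (x, y) landmarks per eye for each video frame, from any face-landmark library; the 0.2 threshold and the human baseline are rough rules of thumb, not calibrated values.

```python
# Blink-rate check via the eye aspect ratio (EAR). Assumed input: for each
# video frame, six (x, y) landmarks around one eye, obtained from any
# face-landmark library (dlib and MediaPipe both provide such landmarks).
import math

def ear(eye):
    """EAR = (two vertical eye openings) / (2 * horizontal eye width).
    It drops sharply while the eye is closed."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def blinks_per_minute(eye_per_frame, fps, threshold=0.2):
    """Count EAR dips below `threshold` and normalize to blinks per minute."""
    blinks, closed = 0, False
    for eye in eye_per_frame:
        if ear(eye) < threshold:
            if not closed:
                blinks += 1
            closed = True
        else:
            closed = False
    minutes = len(eye_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# Humans blink roughly 15-20 times per minute; a long clip with a rate
# near zero is a flag worth a closer look (though newer fakes often blink
# convincingly, so treat this as one indicator among many).
```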
It is impossible to stop technological innovation, so we must expect deepfake videos to become ever more realistic and ever cheaper to produce, in both time and resources. That is why learning to recognize them is so important: telling original content from AI-generated content with the naked eye will only get harder.
Therefore, it is important to invest in research into software that can detect deepfakes accurately. Not only that: we also need to build awareness of this technology and of how to use it responsibly.
Only by knowing about deepfakes and recognizing them can we avoid being misled into believing fabricated things. It is therefore crucial to educate people about this technology, so that they are more aware of the risks and can make informed decisions.
Knowledge is the real weapon against deepfakes.
Author: Giovanni Trovini, Chief Technology Officer