OpenEXPO Virtual Experience 2021 featured an exceptional sponsor: Chema Alonso. The popular security expert also gave a talk on topics as interesting as cybersecurity and how deepfakes and AI can influence it.
With advances in artificial intelligence, cybersecurity faces new challenges. Today, an identity can be impersonated relatively easily with AI, giving rise to the deepfakes that flood social networks and the Internet.
A deepfake makes it possible, for example, to take an existing video of a public figure and swap their face for someone else's, or to insert a cloned voice pronouncing words they never actually spoke. This can lead to damaging hoaxes, especially when used against political leaders or others with great influence over the public.
Today, deepfakes have become one of the most sophisticated techniques for spreading fake news and running disinformation campaigns. They can even contribute significantly to the rise in cyberattacks, as Chema Alonso pointed out at OpenEXPO Virtual Experience.
And the problem is more worrying than it seems. Until 2019 there were fewer than 15,000 deepfakes circulating on the Internet. By 2020 there were almost 50,000 fake videos, 96% of them pornographic in nature. And the number keeps growing, creating new challenges for cybersecurity.
To detect these deepfakes, Chema Alonso points to two forms of analysis:
- Forensic analysis of the images.
- Extraction of biological signals from the images.
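The second idea rests on the fact that real video of a face carries subtle biological signals, such as the tiny periodic color changes the heartbeat produces in the skin, which synthesized faces often lack. As a purely illustrative sketch (this is not the actual analysis used, and the function names are invented for the example), one could average the green channel of the face region frame by frame and check whether the resulting signal pulses at a plausible heart rate:

```python
# Illustrative sketch only: a naive check for a heart-rate-like periodicity
# in a per-frame brightness signal, in the spirit of biological-signal
# deepfake analysis. All names here are hypothetical for the example.
import math

def dominant_frequency_hz(signal, fps):
    """Return the dominant frequency of a 1-D signal via a naive DFT scan."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):  # skip the DC component
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n

def looks_alive(green_means, fps):
    """A real face should pulse at roughly 0.7-3.0 Hz (42-180 bpm)."""
    return 0.7 <= dominant_frequency_hz(green_means, fps) <= 3.0

# Synthetic example: a 1.25 Hz "pulse" sampled at 30 fps for 4 seconds.
fps = 30
pulse = [100 + 0.5 * math.sin(2 * math.pi * 1.25 * t / fps) for t in range(4 * fps)]
print(looks_alive(pulse, fps))  # True: 1.25 Hz is a plausible heart rate
```

A real implementation would need face tracking, noise filtering, and far more robust spectral analysis, but the sketch shows why a face with no pulse-like periodicity can raise suspicion.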
The renowned expert delved into this topic in his talk at OpenEXPO Virtual Experience 2021. Together with his team, he has developed a plug-in for the Chrome web browser with which any user can select a video and run tests to detect these deepfakes.
The plugin implements four lines of scientific research in the fight against these deceptions:
- FaceForensics++: checks based on a model trained on its own database.
- Exposing DeepFake Videos by Detecting Face Warping Artifacts: current AI algorithms often generate faces at limited resolutions, and this tool detects the resulting artifacts with a CNN model.
- Exposing Deep Fakes Using Inconsistent Head Poses: the swap between the original and the synthesized face introduces errors in the 3D head pose. With a HopeNet model, these inconsistencies can be detected.
- CNN-generated Images Are Surprisingly Easy to Spot… for now: confirms that images currently generated by CNNs share systematic flaws.
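To give a feel for the face-warping-artifact idea above: a swapped face is often warped in from a lower-resolution source, so the face region ends up with less high-frequency detail than the rest of the frame. The cited paper trains a CNN for this; the toy sketch below (invented for illustration, not the plugin's code) just compares Laplacian variance, a common sharpness measure, between the face region and the whole frame:

```python
# Toy heuristic inspired by the face-warping-artifact check: flag frames
# whose face region is much blurrier than the surrounding frame.
# This is an illustrative sketch, not the paper's CNN-based method.

def laplacian_variance(img, region):
    """Sharpness of a grayscale image (list of rows) inside region=(r0, r1, c0, c1)."""
    r0, r1, c0, c1 = region
    responses = []
    for r in range(max(r0, 1), min(r1, len(img) - 1)):
        for c in range(max(c0, 1), min(c1, len(img[0]) - 1)):
            # 4-neighbor Laplacian: high magnitude means fine detail.
            lap = (img[r-1][c] + img[r+1][c] + img[r][c-1] + img[r][c+1]
                   - 4 * img[r][c])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((x - mean) ** 2 for x in responses) / len(responses)

def warping_suspect(img, face_region, ratio=0.5):
    """True if the face is far blurrier than the full frame."""
    h, w = len(img), len(img[0])
    face_sharp = laplacian_variance(img, face_region)
    frame_sharp = laplacian_variance(img, (0, h, 0, w))
    return face_sharp < ratio * frame_sharp

# Synthetic 20x20 frame: high-detail checkerboard background with a flat
# 8x8 patch simulating a warped, low-resolution face.
img = [[(r + c) % 2 * 50.0 for c in range(20)] for r in range(20)]
for r in range(6, 14):
    for c in range(6, 14):
        img[r][c] = 25.0
print(warping_suspect(img, (6, 14, 6, 14)))  # True: face region lacks detail
```

Real detectors learn far subtler cues than raw blur, which is why a trained CNN is needed in practice, but the mismatch in detail between regions is the underlying signal.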
The topic addressed at OpenEXPO is a very interesting one, and the tool is equally necessary, since these deepfakes are the order of the day...
More information - Official Website of the Event