MINSK, 15 January (BelTA) – Deepfake detection technology is still imperfect. This matter was discussed at the latest meeting of the Expert Community project "Deepfakes: New Challenge to Information Security" in the BelTA press center on 15 January.
Analyst with the Belarusian Institute of Strategic Research (BISR) Vitaly Demirov emphasized that deepfakes and fake news are having an increasingly strong impact on politics, public life, and business, and the technology that amplifies this effect is becoming more prominent. "At the same time, there is no reliable technology that could help detect deepfakes," the expert said. Such technology should be able to identify deepfakes quickly: if detection takes too long, a deepfake will have time to spread all over social media.
According to Vitaly Demirov, no special skills or competencies are needed to create fake videos. "There is a package of ready-made programs that enables users to produce fake videos," he said. The process still takes a lot of time, however. For example, a 40-second video can take a week or longer to assemble. It is clearly quite a challenge for an ordinary person, the expert added, but entirely achievable for those who have the time and money. The expert believes that in the future this technology, like other solutions, will become cheaper and easier to use.
Deepfakes are a synthetic media technology in which a person in an existing image or video is replaced with someone else's likeness using artificial neural networks and artificial intelligence. The most prominent example of how artificial intelligence can facilitate the creation of a photorealistic fake video is the clip showing Barack Obama slamming U.S. President Donald Trump. The video was released in April 2018 on the BuzzFeed platform. At the end of the video its creators revealed that it was fake. By releasing the video they sought to draw attention to the dangers of this controversial video-editing technology.