Towards the end of 2020, I had the opportunity to give a talk on deepfakes at the 60th AI4EU WebCafe. […]
WeVerify is an EU-funded project that aims to address complex content verification challenges through a participatory verification approach, open-source algorithms, low-overhead human-in-the-loop machine learning, and intuitive visualizations. The MeVer team leads the ‘Cross-modal Disinformation Detection and Content Verification’ work package and, in collaboration with the University of Sheffield (USFD), has developed tools that bring together features from text, images and videos to tackle the challenge of cross-modal disinformation detection and content verification.
Polychronis from the MeVer team had the opportunity to visit DeepTrace Labs, an Amsterdam-based company focusing on deepfake detection. MeVer and DeepTrace have recently started collaborating on the problem of deepfake detection, which MeVer faces in the context of the WeVerify project. As part of this collaboration, the two teams also joined forces to tackle Kaggle’s Deepfake Detection Challenge. The goal of the challenge is to build effective solutions that can help detect deepfakes and manipulated media. The final deadline for submissions is March 31, 2020.
The Context Aggregation and Analysis tool was developed by the MeVer team within the InVID project and continues to evolve within the WeVerify project. The tool aims to facilitate the verification of user-generated videos posted on three well-known platforms – YouTube, Facebook and Twitter. It collects information surrounding the video, analyses and filters it, and creates a verification report that is then presented to the end user (a journalist), who is responsible for inspecting the verification cues and deciding on the video’s veracity.
MeVer has contributed to the editing and authoring of the brand-new book “Video Verification in the Fake News Era”, published by Springer last week.