In this post, we explain the basics behind our paper “Leveraging EfficientNet and Contrastive Learning for Accurate Global-scale Location Estimation” […]
In this post, we explain the basics behind our paper “Operation-wise Attention Network for Tampering Localization Fusion”, which has been accepted for publication at this year’s Content-Based Multimedia Indexing conference (CBMI 2021).
The MediaEval Multimedia Evaluation benchmark was founded in 2008 as VideoCLEF and became an independent benchmarking initiative in 2011. Each year it offers tasks related to multimedia retrieval, analysis, and exploration.
Towards the end of 2020, I had the opportunity to give a talk on Deepfakes in the 60th AI4EU WebCafe. […]
WeVerify is an EU-funded project that aims to address complex content verification challenges through a participatory verification approach, open-source algorithms, low-overhead human-in-the-loop machine learning, and intuitive visualizations. The MeVer team leads the ‘Cross-modal Disinformation Detection and Content Verification’ work package and, in collaboration with the University of Sheffield (USFD), has developed tools that bring together features from text, images, and videos to tackle the challenge of cross-modal disinformation detection and content verification.
In this post, we explain the basics behind our paper “Audio-based Near-Duplicate Video Retrieval with Audio Similarity Learning,” which has […]
It has been almost two months since the final deadline for the challenge on the Kaggle platform. The competition organizers have just finalized the standings (13th of June 2020) in the private leaderboard. A Kaggle staff member mentioned in a discussion that the organizers took their time to validate the winning submissions and ensure that they complied with the competition rules. This process resulted in the disqualification of the top-performing team for using external data without a proper license, which caused considerable controversy in the Kaggle community, mainly because the competition rules were vague.
On the 24th of April 2020, the EUvsVirus Hackathon, a huge coronavirus-related hackathon organized by the European Commission, got underway. Over 20,900 people across the EU and beyond took part, submitting 2,150 solutions in areas including health and life (898), business continuity (381), remote working and education (270), social and political cohesion (452), digital finance (75), and other challenges (83).
A lot of our research and development activities rely on large collections of web media content sourced from social media platforms, such as YouTube and Twitter, and then manually curated and annotated by our researchers to create “ground truth” datasets. These datasets help us train machine learning models on specific tasks and then benchmark them against competing approaches to select the best method for each case.
Polychronis from the MeVer team had the opportunity to visit DeepTrace Labs, a company based in Amsterdam that focuses on deepfake detection. MeVer and DeepTrace have recently started collaborating on the problem of DeepFake detection, which MeVer works on in the context of the WeVerify project. As part of this collaboration, the two teams also joined forces to tackle the DeepFake Detection Challenge on Kaggle. The goal of the challenge is to build effective solutions that can help detect DeepFakes and manipulated media. The final deadline for submissions is March 31, 2020.