It has been almost two months since the final deadline of the challenge on the Kaggle platform. The competition organizers have just finalized the private leaderboard standings (13th of June 2020). A Kaggle staff member mentioned in a discussion that the organizers took their time to validate the winning submissions and ensure that they complied with the competition rules. This process resulted in the disqualification of the top-performing team for using external data without a proper license, which caused considerable disturbance in the Kaggle community, mainly because the competition rules were vague.
On the 24th of April 2020, the EUvsVirus Hackathon, a huge coronavirus-related hackathon organized by the European Commission, kicked off. Over 20,900 people across the EU and beyond took part, submitting 2,150 solutions in areas including health and life (898), business continuity (381), remote working and education (270), social and political cohesion (452), digital finance (75) and other challenges (83).
A lot of our research and development activities rely on large collections of web media content sourced from social media platforms, such as YouTube and Twitter, and then manually curated and annotated by our researchers to create “ground truth” datasets. These datasets help us train machine learning models on specific tasks and then benchmark those models against competing approaches in order to select the best method for each case.
Polychronis from the MeVer team had the opportunity to visit DeepTrace Labs, an Amsterdam-based company focusing on deepfake detection. MeVer and DeepTrace have recently started collaborating on the problem of DeepFake detection, which MeVer faces in the context of the WeVerify project. As part of this collaboration, the two teams also joined forces to tackle Kaggle’s DeepFake Detection Challenge. The goal of the challenge is to build effective solutions that can help detect DeepFakes and manipulated media. The final deadline for submissions is March 31, 2020.
The Context Aggregation and Analysis tool was developed by the MeVer team within the InVID project and continues to evolve within the WeVerify project. The tool aims to facilitate the verification of user-generated videos posted on three well-known platforms – YouTube, Facebook and Twitter. It collects information surrounding the video, analyses and filters it, and creates a verification report that is then presented to the end user (a journalist), who is responsible for inspecting the verification cues and deciding on the video’s veracity.
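The collect–analyse–filter–report flow described above can be sketched roughly as follows. This is only an illustrative outline, not the tool’s actual implementation: all function names, field names and the keyword list are hypothetical.

```python
# Hypothetical sketch of a context aggregation pipeline: collect information
# around a video, filter it for verification-related cues, and build a
# report for a journalist to inspect. Names are illustrative only.

VERIFICATION_KEYWORDS = {"fake", "staged", "hoax", "edited"}

def collect_context(video_metadata, comments):
    """Gather the information surrounding a video into one structure."""
    return {"metadata": video_metadata, "comments": list(comments)}

def filter_verification_cues(context):
    """Keep only comments containing verification-related keywords."""
    return [
        c for c in context["comments"]
        if any(k in c.lower() for k in VERIFICATION_KEYWORDS)
    ]

def build_report(context):
    """Produce a verification report for the end user to inspect."""
    cues = filter_verification_cues(context)
    return {
        "platform": context["metadata"].get("platform"),
        "title": context["metadata"].get("title"),
        "verification_cues": cues,
        "num_cues": len(cues),
    }

report = build_report(collect_context(
    {"platform": "YouTube", "title": "Storm hits the city centre"},
    ["Amazing video!", "This looks staged to me", "Old footage, it's a hoax"],
))
print(report["num_cues"])  # 2
```

In the real tool the “analysis” step is of course far richer than keyword matching, but the overall shape – aggregate context, distil cues, hand a report to a human – is the same.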
The International Conference on Computer Vision (ICCV) is one of the major computer vision conferences, hosted by IEEE and CVF. This year’s event took place from October 27 to November 2, 2019 in Seoul, Korea. The main conference was held from October 29 to November 1, 2019, and many co-located workshops and tutorials took place on October 27, 28 and November 2, 2019.
The Truth and Trust Online conference achieved its mission to bring together people of different backgrounds, seniority levels, origins and disciplines. It covered a wide range of issues around the challenge of misinformation, and the 257 participants enjoyed two days of amazing talks from people at well-known companies, universities, media organizations and NGOs, such as Facebook, Google, Twitter, Microsoft, BBC, Full Fact and others.
The Thessaloniki edition of the Researchers’ Night was held on September 27 and aimed to bring researchers closer to society and to inspire more young people to get involved in research.
MeVer has contributed to the editing and authoring of the brand-new book “Video Verification in the Fake News Era”, published by Springer last week.
In this post we explain the basics behind our paper “ViSiL: Fine-grained Spatio-Temporal Video Similarity Learning”, which was accepted for oral presentation at this year’s International Conference on Computer Vision (ICCV 2019).