The International Workshop on Reducing Online Misinformation Exposure (ROME 2019) was co-located with the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, which took place on July 21-25, 2019 in Paris. On Thursday, July 25th, ROME 2019 and seven other workshops were held at the Cité des Sciences, located in the north-east of Paris within the Parc de la Villette, which is home to exhibitions and shows.
Guillaume Bouchard and Vassilis Plachouras from Facebook, UK, and Guido Caldarelli from IMT Lucca, Italy, organized a well-run workshop that offered a ‘forum for researchers to discuss the challenges of online misinformation and to define new directions for work on automating fact checking, reducing misinformation online, and making social media more resilient to the spread of false news’.
The workshop started with a welcome introduction by the organizers to the ~25 attendees.
Three keynotes were given, discussing the challenges of misinformation, why people believe false narratives, their motivations for sharing false claims, the impact of unreliable content dissemination, the importance of spotting false claims before they are widely disseminated, and the identification of credulous users:
- Carolina Scarton, University of Sheffield: Technological Approaches to Online Misinformation: Major Challenges Ahead.
- Rocco De Nicola, IMT Lucca: Towards the automatic detection of credulous users on social networks
- Preslav Nakov, Qatar Computing Research Institute: Can We Spot the “Fake News” Before They Were Even Written?
MeVer researcher Olga Papadopoulou presented our work ‘Context Aggregation and Analysis: A tool for User Generated Video Verification’. This work was developed within the InVID and WeVerify projects.
The Context Aggregation and Analysis tool aims to facilitate the verification of user-generated videos disseminated through three well-known platforms – YouTube, Facebook and Twitter. The tool generates a verification report by collecting, filtering and analyzing information related to ‘video context’, e.g. video title, comments, account that posted the video, etc. The tool can be used by journalists and citizens for video verification as a standalone tool or as part of the InVID-WeVerify verification plugin.
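To make the idea of a ‘video context’ report concrete, here is a minimal sketch in Python. It is not the InVID/WeVerify implementation: the data container, the keyword list, and the flagging heuristic are all assumptions made for illustration, and the sample data is invented. It only shows the general shape of collecting contextual signals (title, comments, posting account) and filtering them into a simple report.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VideoContext:
    """Hypothetical container for the 'video context' signals mentioned above."""
    platform: str              # e.g. "YouTube", "Facebook", "Twitter"
    title: str
    account: str               # account that posted the video
    comments: List[str] = field(default_factory=list)


def build_verification_report(ctx: VideoContext,
                              keywords=("fake", "hoax", "staged")) -> Dict:
    """Collect, filter and summarize contextual signals into a simple report.

    Illustrative heuristic only: flag comments containing
    verification-related keywords, one of many signals a real tool
    could aggregate.
    """
    flagged = [c for c in ctx.comments
               if any(k in c.lower() for k in keywords)]
    return {
        "platform": ctx.platform,
        "title": ctx.title,
        "account": ctx.account,
        "total_comments": len(ctx.comments),
        "verification_comments": flagged,
    }


# Usage with made-up data:
ctx = VideoContext("YouTube", "Tornado hits city center", "@newsclips",
                   ["amazing footage",
                    "this looks staged to me",
                    "FAKE video"])
report = build_verification_report(ctx)
print(report["total_comments"], len(report["verification_comments"]))  # 3 2
```

A real tool would of course draw on far richer signals (account history, posting time, cross-platform spread), but the report-building pattern is the same.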
Overall, interesting discussions took place during the oral presentations of the six papers and the poster session of the workshop. Different approaches and fruitful ideas for reducing online disinformation emerged from the workshop.
Much of the discussion focused on users posting malicious content and how to label them. With respect to news sources, websites such as Media Bias/Fact Check provide features regarding the bias of news sources. Knowing whether a source is credible, a claim posted by that source could in that way be ‘debunked’ ‘before it is even written’. Moreover, bias also affects publicly available datasets used as training samples for the development of automatic approaches, which poses the risk of creating biased classifiers. A survey of 12 datasets providing evidence that they suffered from bias was discussed throughout the workshop, raising the issue that researchers should take this into account when experimenting with machine learning approaches.
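One simple way to see how such dataset bias can leak into a classifier is to check whether the source of a claim already predicts its label. The sketch below (a hypothetical diagnostic, not taken from the survey discussed at the workshop) computes, per source, the fraction of samples carrying that source's majority label; values near 1.0 across many sources suggest a model could shortcut from ‘source’ to ‘label’ instead of judging the claim itself.

```python
from collections import Counter, defaultdict


def source_label_skew(samples):
    """Given (source, label) pairs, return per-source fraction of the
    majority label. High skew for many sources hints at source/label
    correlation bias in the dataset. Illustrative sketch only.
    """
    by_source = defaultdict(Counter)
    for source, label in samples:
        by_source[source][label] += 1
    return {source: counts.most_common(1)[0][1] / sum(counts.values())
            for source, counts in by_source.items()}


# Toy dataset where labels track sources almost perfectly:
data = [("site-a", "false"), ("site-a", "false"), ("site-a", "false"),
        ("site-b", "true"), ("site-b", "true"), ("site-b", "false")]
skew = source_label_skew(data)
print(skew)  # site-a is fully skewed (1.0), site-b is ~0.67
```

On a dataset like this, a classifier that merely memorizes the source would score well without ever analyzing claim content, which is exactly the risk raised in the discussion.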