Western media channels are being systematically manipulated to spread pro-Russian government propaganda and disinformation, according to a new report by the Crime and Security Research Institute at Cardiff University.

The researchers said they uncovered evidence that “provocative” pro-Russian or anti-Western statements were being systematically posted in reader comments sections in articles relating to Russia in 32 prominent media outlets across 16 countries. Russian-language media outlets subsequently used these comments as the basis of stories about politically controversial events.

These stories insinuated there is significant support among Western citizens for Russia or President Putin, using headlines such as “Daily Mail readers say…” and “Readers of Der Spiegel think…” They were also picked up and reported by other ‘fringe media’ outlets and websites with track records of spreading disinformation and propaganda, some of which have been linked to Russian intelligence services by Western security agencies.

In one example highlighted by the study, a small number of comments in a Mail Online story about the Taliban’s takeover in Afghanistan were used in a Russian news article under the headline “The British have compared the rise of the Taliban to power with the end of Western civilization.”

These stories were primarily published in Russia and aimed at audiences in Central and Eastern Europe, particularly Bulgaria. In total, the team discovered 242 stories that they believe were generated in this fashion. Among the websites repeatedly targeted by the influence operation are the Daily Mail, Daily Express and The Times in the UK; Fox News and The Washington Post in the US; Le Figaro in France; Der Spiegel and Die Welt in Germany; and La Stampa in Italy.

The report added that there was evidence of coordination between Russian state-owned media and outlets linked to the non-state Patriot Media Group, both of which observed and drew upon these reader comments.

While these activities were first spotted as part of research into online disinformation amid tensions between Ukraine and Russia earlier this year, the Cardiff team believes these tactics have escalated since 2018.

The researchers applied data science pattern recognition and detection techniques to reader comments to make their findings, uncovering multiple unusual behaviors associated with some accounts posting pro-Russian content. These included some users repeatedly changing their personas and locations. At the same time, on specific platforms, pro-Kremlin comments received an unusually high number and proportion of ‘up-votes’ compared with typical messages. Together, these multiple inauthenticity signals suggest the commenting activity was coordinated.
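The report does not publish the team's code, but the two signals it describes, accounts switching personas and locations, and comments drawing anomalously high up-vote counts, can be illustrated with a minimal sketch. All data, field names and thresholds below are hypothetical stand-ins, not the Cardiff team's actual method.

```python
from collections import defaultdict

# Hypothetical comment records: (account_id, persona_name, location, upvotes)
comments = [
    ("u1", "Tom",    "London", 3),
    ("u1", "Pierre", "Paris",  54),
    ("u1", "Hans",   "Berlin", 61),
    ("u2", "Anna",   "Leeds",  2),
    ("u2", "Anna",   "Leeds",  4),
]

def persona_switches(records):
    """Count distinct (persona, location) pairs used by each account.
    An account that repeatedly changes both is one inauthenticity signal."""
    seen = defaultdict(set)
    for account, persona, location, _ in records:
        seen[account].add((persona, location))
    return {account: len(pairs) for account, pairs in seen.items()}

def upvote_outliers(records, threshold=1.0):
    """Flag comments whose up-vote counts sit well above the mean,
    a crude stand-in for the report's 'unusually high number and
    proportion of up-votes' signal."""
    votes = [upvotes for *_, upvotes in records]
    mean = sum(votes) / len(votes)
    variance = sum((v - mean) ** 2 for v in votes) / len(votes)
    std = variance ** 0.5 or 1.0
    return [r for r in records if (r[3] - mean) / std > threshold]
```

In this toy data, account `u1` posts under three different persona/location pairs, and its two heavily up-voted comments stand out from the rest; a real analysis would combine many such signals across platforms before judging activity coordinated.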

Professor Martin Innes, director of the Crime and Security Research Institute at Cardiff University, explained: “As mainstream social media platforms have become more alert to the risks of foreign state influence operations, so disinformation actors and propagandists have been seeking new vulnerabilities in the media ecosystem to exploit. By adopting a ‘full spectrum’ media strategy that blends together information from social and mainstream media outlets, this sophisticated campaign has had the potential to shape the thoughts, emotions and behavior of several diverse international audiences in relation to high-profile media stories.

“Most importantly, the particular tactics and techniques used to ‘hack’ the comments function in the media ecosystem make it almost impossible to attribute responsibility for the pro-Kremlin trolling behavior on the basis of publicly available open-source data. It is therefore vital that media companies running participatory websites are more transparent about how they are tackling disinformation and more proactive in preventing it.”

Andy Patel, a researcher with F-Secure’s Artificial Intelligence Center of Excellence, said media channels must do more to prevent comments being posted by nefarious actors on their sites. He said: “Comments sections on news sites attract posts from users with extreme opinions, and thus often precipitate arguments. These discussions are ripe for abuse by trolls and entities wishing to further extremist agendas. The research published by Cardiff University illustrates how such discussion threads can also be weaponized by bad-faith actors. Comments sections on sites such as these should be properly moderated. If this is not possible, the responsible thing to do is to disable comment functionality until proper moderation policies can be put in place.”