Produced by the Royal Society, the report finds that the vast majority of respondents believe the internet has improved the public’s understanding of science.
The Society’s survey of the British public also found that most would fact-check suspicious scientific claims read online, and that they feel confident challenging friends and family over scientific misinformation, on COVID-19 for example.
“Science stands on the edge of error and the nature of the scientific endeavour at the frontiers means there is always uncertainty,” explains Professor Frank Kelly FRS, Professor of the Mathematics of Systems at the Statistical Laboratory, University of Cambridge, and Chair of the report.
“In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself, but that prodding and testing of received wisdom is integral to the advancement of science, and society.
“This is important to bear in mind when we are looking to limit scientific misinformation’s harms to society. Clamping down on claims outside the consensus may seem desirable, but it can hamper the scientific process and force genuinely malicious content underground.”
Food and nutrition info
Whilst the report focused on such topics as the safety of COVID-19 vaccines, the role of humans in climate change, and the safety of 5G technology, its findings have relevance to misinformation concerning food and nutrition.
The report echoes findings from a study that looked at what influences people to be susceptible to false information about health.
Here, the team shared multiple versions of a fake news story that claimed vitamin B17 deficiency could cause cancer, varying the author credentials, the writing style and whether the article was labelled as “suspicious” or “unverified”, amongst other alterations.
Results suggested that author credentials and writing style did not significantly affect how people perceived the story’s credibility, whether they would follow its recommendations, or whether they would share it.
However, those who saw the article presented with any sort of flagging stating it was not verified information were significantly less likely to find it credible, adhere to recommendations or share it.
Respondents who showed higher levels of social media efficacy, or were more tech savvy, evaluated the information more carefully and reported they would be less likely to share the article.
The Royal Society’s report also touches on the influence of digital technologies on information dissemination, noting their “great potential” but also acknowledging concerns about the new forms of online harm and erosion of trust they could enable.
The report stresses that censoring or removing inaccurate, misleading and false content, whether it’s shared unwittingly or deliberately, is not a silver bullet and may undermine the scientific process and public trust.
According to the authors, there needs to be a focus on building resilience against harmful misinformation across the population and the promotion of a “healthy” online information environment.
Despite accusations that social networks are part of the problem, the report takes a lighter view, commenting that platforms such as Facebook, Instagram and TikTok should not rely on content removal as a solution to online scientific misinformation.
“Deciding what is and is not scientific misinformation is highly resource intensive and not always immediately possible to achieve as some scientific topics lack consensus or a trusted authority for platforms to seek advice from,” the report says.
“What may be feasible and affordable for established social media platforms may be impractical or prohibitively expensive for emerging platforms which experience similar levels of engagement (eg views, uploads, users).”