Misinformation on Personal Messaging—Are WhatsApp's Warnings Effective? provides new, population-level findings that confirm and expand on the exploratory findings of our June 2023 report, Beyond Quick Fixes: How Users Make Sense of Misinformation Warnings on Personal Messaging.
The evidence we present today comes from our nationally representative survey of 2,000 members of the UK public. These new data allow us to generalise about how UK personal messaging users actually interpret the “forwarded” and “forwarded many times” misinformation warning tags.
We used the findings from our qualitative research to design a new type of survey about misinformation warnings on personal messaging. We asked: “You may have seen or heard about a label that can appear on WhatsApp messages that says ‘Forwarded’ or ‘Forwarded many times.’ In your opinion, what do these labels indicate that the message potentially contains?”
As we show below, interpretations of the tags vary widely and few people understand the “forwarded” and “forwarded many times” tags as clear markers of potential misinformation.
We also explored some demographic, attitudinal, and behavioural factors that may be related to misinterpretations of the tags.
The findings cast serious doubt on whether these tags are effective as misinformation warnings.
Summary of New Findings
According to Meta, which owns WhatsApp, the “forwarded” tags are meant to alert recipients that a message’s original source is unknown and to prompt them to reconsider its accuracy.
However, our survey reveals that UK messaging users have a wide variety of interpretations of these tags.
- Only about 10% interpret the tags in the way they were intended.
- Taken together, about half of UK messaging users report either not recalling ever having seen the “forwarded” or “forwarded many times” tags, not knowing what they signify, or being uncertain about what they mean.
- The most common misinterpretation is that the tags simply indicate viral entertainment content such as jokes or videos.
- Worryingly, around 10% say they see the tags as an indicator of accurate, trustworthy, useful, or relevant content.
In other words, the tags rarely serve their intended purpose. And, in some cases, they can even have the opposite effect to what is intended.
We also highlight the demographic, attitudinal, and behavioural factors associated with misinterpreting the purpose of the “forwarded” and “forwarded many times” tags.
- Younger people are more likely to misinterpret the tags.
- People who place a great degree of trust in what they see on personal messaging are more likely to misinterpret the tags.
- Older messaging users are less likely to be familiar with the tags or to know how to interpret them.
- The same applies to people with lower levels of formal education.
- People who use personal messaging most frequently are less likely to wrongly see the tags as signalling accurate or trustworthy information. However, rather than linking the tags to potentially untrustworthy content, these frequent users still tend to associate them with popular content, jokes, and multimedia.
- Those who often participate in larger messaging groups, whether of friends or workmates, are also more likely to misperceive the tags’ purpose.
Recommendations: Five Principles for the Design of Misinformation Warnings
We reiterate our five key principles for the design of effective misinformation warnings on encrypted personal messaging:
- Don’t rely on description alone: Tags that merely describe how a message arrived are not enough; misinformation warnings should clearly indicate the potential for misinformation.
- Introduce user friction: Misinformation warnings may be overlooked unless they use designs that force a person to stop and reflect.
- Gain media exposure: Platforms should engage in publicity campaigns about the intended purpose of misinformation warnings.
- Consider the context: It is crucial to understand the different ways messaging platforms are shaped by social norms and people’s relationships with others.
- Think beyond platforms: Technological features need to be combined with socially oriented anti-misinformation interventions that empower people to work together to reduce misinformation on personal messaging platforms.
These findings are further evidence that all online messaging platforms have some work to do to show they are serious about tackling misinformation. Platforms need to prioritise the safety of the public over the corporate aim of avoiding negative associations between their apps and harmful content.
When it comes to misinformation on WhatsApp, Meta can do better.
We thank the Leverhulme Trust for its generous financial support for this research project. Opinions in this report are solely those of its authors.