Truth or lie: The rise and rise of disinformation

Arathi Sethumadhavan
6 min read · Jun 25, 2021


Co-authored with Esra Bakkalbasioglu

Special thanks to Meredith Berger and Ashish Jaiman from Microsoft’s Defending Democracy Team for their comments on this blog.

When there are many unknowns about a hot topic, the knowledge vacuum or search engine “data void” is quickly filled with disinformation. The COVID-19 pandemic has been no different: Italy, India, the United States, and Indonesia all reported increased disinformation attacks during the pandemic. In India, more than half of the disinformation articles published in 2020 were related to the pandemic. Many of these articles were used to advance existing political agendas and stoke anti-minority sentiment among the public during an extremely fragile time. Similarly, in a sample of 8,914 U.S. adults, about half stated that during the first few months of the pandemic they came across at least some made-up news about COVID-19.

Survey of U.S. adults conducted March 10–16, 2020 about COVID-19 news.

In early May 2020, a 26-minute video called “Plandemic” went viral, drawing millions of views and clicks across social media. An investigation showed that the video’s interaction level was higher than that of other mainstream online events during the same period, including a Taylor Swift concert, the online reunion of The Office cast, and Pentagon videos showing unexplained aerial phenomena (see the image below). The non-profit fact-checking website PolitiFact flagged multiple claims in the video as disinformation, including ‘wearing the mask activates your own virus’ and ‘if you ever had a flu vaccine, you were injected with coronaviruses’.

Not being able to detect disinformation is a significant concern for the general public.

Advancements in audio, video, and image manipulation and synthesis techniques have made it harder to differentiate false from real information. For example, there are emerging consumer tools that allow individuals to selectively edit or delete elements in the foreground and background of videos, create audio that simulates a person’s voice using only a short source sample, or even superimpose one person’s face over the video of another person’s face to generate a hybrid face.

A survey conducted with 75,000 people from 38 countries revealed that around 55% of respondents were concerned about being deceived by false news. In countries that were recent targets of electoral disinformation campaigns, including Brazil, the United Kingdom, the United States, and France, 65–85% of survey participants stated that they were concerned about not being able to detect disinformation and being deceived.

Percentage of respondents, broken down by country, agreeing with the following statement: “Thinking about online news, I am concerned about what is real and what is fake on the internet.” Reuters Institute Digital News.

An experiment conducted by MIT revealed that participants misclassified deepfakes as real 30% of the time when given five seconds to view an image. When the exposure time was reduced from five seconds to 0.25 seconds, the misclassification rate increased from 30% to 50%, indicating the difficulty of detecting doctored content.

False information spreads faster than true news.

Fake news diffuses significantly farther, faster, deeper, and more broadly than unmanipulated stories. MIT researchers argue that many individuals unintentionally share, engage with, and repost more fake news because it is more novel and exciting than mainstream news. Social media researcher Judith Donath underlines that, in certain cases, believing a news article that has been proven to be fake can even become a test of one’s loyalty to their community’s idiosyncratic worldview.

Even when effectively discredited, exposure to false information creates lingering effects on human attitudes and perception.

People’s attitudes sometimes continue to be shaped by false information, even after it has been discredited. In some cases, the false information lingers because it fits the person’s existing preferences and beliefs. In others, individuals assume that “where there is smoke, there’s fire”.

Identifying and curbing disinformation

Identifying and curbing fake news and false information is a complex task that requires a multi-stakeholder, multi-modal solution. Below is a sample list of actions that state agents, NGOs, media, social media platforms, and technology companies, as well as consumers, can take to curb the spread of disinformation.

State agents

· Create a standardized media literacy curriculum and public service announcements to inform the public.

· Launch programs that enable research that helps prevent disinformation (e.g., see DARPA’s work in this space).

· Regulate advertising networks and assess the viability of possible platform-based interventions.

Media companies and media NGOs

· Facilitate cross-industry collaborations to promote the development and implementation of strong ethical standards across the media industry.

· Provide synthetic media training that enables media professionals to recognize and debunk synthetic media.

Technology companies

· Provide content creators tools to certify the authenticity of their content.

· Improve recommender algorithms to make them clearer, more transparent, and non-discriminatory.

· Help content moderators to identify and take down fake news articles.

· Create AI systems that work alongside human fact checkers to detect fake news. For example, Microsoft has developed a machine learning model to detect fake news early on by leveraging weak supervision from multiple sources involving users and their social engagements (a minimal sketch of this idea appears after this list).

· Provide tools that enable consumers to detect whether a news site is reliable, such as the one below.

A screenshot of NewsGuard, which provides trust ratings for news sites.

NewsGuard has also partnered with Microsoft to help the public differentiate COVID vaccine facts from fiction.

VaxFacts Quiz
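
To make the weak-supervision idea above concrete, here is a minimal sketch in Python. It is not Microsoft’s actual model: every signal name, threshold, and labeling function below is a hypothetical placeholder for the general technique of combining multiple noisy label sources (user reports, social engagement patterns) by vote, rather than relying on hand-labeled training data.

```python
# Hypothetical sketch of weak supervision for early fake-news detection.
# All signals and thresholds are invented placeholders, not a real system.

from dataclasses import dataclass

FAKE, REAL, ABSTAIN = 1, 0, -1  # labels a weak source may emit


@dataclass
class Article:
    text: str
    reports_by_reliable_users: int = 0   # flags from historically accurate reporters
    shares_in_first_hour: int = 0        # early-virality signal
    replies_disputing_claim: int = 0     # crowd-pushback signal
    from_verified_publisher: bool = False


def lf_user_reports(a: Article) -> int:
    """Weak source: several reliable users flagged the article."""
    return FAKE if a.reports_by_reliable_users >= 3 else ABSTAIN


def lf_burst_sharing(a: Article) -> int:
    """Weak source: abnormally fast early sharing often accompanies fakes."""
    return FAKE if a.shares_in_first_hour > 1000 else ABSTAIN


def lf_disputed_in_replies(a: Article) -> int:
    """Weak source: many replies dispute the article's central claim."""
    return FAKE if a.replies_disputing_claim > 50 else ABSTAIN


def lf_verified_publisher(a: Article) -> int:
    """Weak source: a verified outlet is weak evidence the story is real."""
    return REAL if a.from_verified_publisher else ABSTAIN


LABELING_FUNCTIONS = [lf_user_reports, lf_burst_sharing,
                      lf_disputed_in_replies, lf_verified_publisher]


def weak_label(a: Article) -> int:
    """Label FAKE only when a strict majority of non-abstaining sources agree."""
    votes = [lf(a) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return FAKE if sum(v == FAKE for v in votes) > len(votes) / 2 else REAL


if __name__ == "__main__":
    suspicious = Article("Masks activate the virus!",
                         reports_by_reliable_users=5,
                         shares_in_first_hour=4000,
                         replies_disputing_claim=120)
    ordinary = Article("City council approves budget.",
                       from_verified_publisher=True)
    print(weak_label(suspicious))  # 1 (FAKE)
    print(weak_label(ordinary))    # 0 (REAL)
```

In a production pipeline, noisy labels like these would typically be used to train a text classifier, so that new articles can be scored early, before engagement signals have had time to accumulate.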

Consumers

· Overcome confirmation bias, question the accuracy of articles with surprising or suspicious content, and develop awareness of how malicious actors use social media to manipulate public opinion.

Top 10 Key Takeaways

1. When there are many unknowns about a hot topic, the knowledge vacuum or search engine “data void” is quickly filled with rumors and disinformation. For example, “virus disinformation wars” have been swelling throughout COVID-19, and even when effectively discredited, exposure to false information can create lingering effects on human attitudes and perception.

2. Not being able to detect disinformation is a significant concern among the general public. A survey conducted with 75,000 individuals from 38 countries revealed that more than half of the respondents were concerned about being deceived by disinformation.

3. Disinformation can be propagated in a myriad of ways, including fabricated news articles mimicking the content and style of authentic news, memes and visuals, isolated and edited video clips that convey a false narrative, deepfake videos, deep video portraits, and synthetic audio.

4. Creators of fake news have two main motivations: money and/or ideology. For example, multiple investigations revealed that a number of sites posting fake news articles during the 2016 U.S. presidential race were run by teenagers from a small town in North Macedonia who earned large sums of money in exchange for circulating fake news on social media platforms.

5. Over the past few years, many governing bodies around the world have been crafting policies to address the growing concern around disinformation. At least 54 countries have announced measures against online disinformation, ranging from the creation of task force committees and the development of media literacy campaigns to the criminalization of false information.

6. In the last five years, disinformation campaigns were launched to shift public opinion in multiple countries, including the United States, the United Kingdom, the Philippines, and Taiwan.

7. Curbing the dissemination of false information without restricting freedom of expression is a complex problem. While some countries are attempting to regulate speech via measures such as criminalizing the spread of false information or blacklisting involved actors, others are attempting to create conditions for individuals to differentiate between truth and lies (e.g., the media literacy campaign launched in Canada or the public awareness campaign launched in the Netherlands).

8. State agents, media companies, technology companies, and consumers — all have a role to play in curbing disinformation.

9. Social media platforms have begun to implement various measures to curb disinformation, including banning and blocking accounts, prohibiting manipulated media, labeling authentic content, and empowering users by connecting them to fact-checking organizations.

10. Because there is no perfect way to prevent media from being modified or to unequivocally prove where it originated, responsible development of synthetic media should proactively evaluate appropriate uses and harms to impacted stakeholders, as well as consider gating technology access.

Note: This blog is a review of recent publications on the issue of disinformation, not Microsoft’s opinion on the topic.


Arathi Sethumadhavan

Head of User Research, Ethics & Society, Microsoft | Fellow, World Economic Forum