
Fake news on Social Media: the Impact on Society

Femi Olan,1 Uchitha Jayawickrama,2 Emmanuel Ogiemwonyi Arakpogun,1 Jana Suklan,3 Shaofeng Liu4

1 Newcastle Business School, Northumbria University, Newcastle Upon Tyne, UK

2 School of Business and Economics, Loughborough University, Loughborough, UK

3 NIHR Newcastle IVD Co-operative, Translational and Clinical Research Institute, Newcastle University, Newcastle Upon Tyne, UK

4 Plymouth Business School, University of Plymouth, Plymouth, UK

Fake news (FN) on social media (SM) rose to prominence in 2016 during the United States presidential election, leading people to question science, true news (TN), and societal norms. FN is increasingly affecting societal values, changing opinions on critical issues and topics as well as redefining facts, truths, and beliefs. To understand the degree to which FN has changed society and the meaning of FN, this study proposes a novel conceptual framework derived from the literature on FN, SM, and societal acceptance theory. The conceptual framework is developed into a meta-framework used to analyze survey data from 356 respondents. This study employed fuzzy set-theoretic comparative analysis; the outcomes suggest that societies are split on differentiating TN from FN. The results also show splits in societal values. Overall, this study provides a new perspective on how FN on SM is disintegrating societies and replacing TN with FN.

Introduction

In cascading news and sensitive information, the fundamental principles are embedded in the concepts of truth as well as the theories of accuracy in communication (Brennen, 2017; Dwivedi et al., 2018; Orso et al., 2020; Pennycook et al., 2020). However, in the past five years or so, social media (SM) has redefined the structure, dimensions, and complexity of the news (Berkowitz & Schwartz, 2016; Copeland, 2007; Kim & Lyon, 2014). The impact of SM, specifically on political affairs, has been attracting more interest, as SM platforms, notably Twitter, Facebook, and Instagram, enable the broad sharing of information and news (Vosoughi et al., 2018). In addition to providing information, another main purpose of SM is to enable people to engage in social interaction, communication, and entertainment (Hwang et al., 2011; Kuem et al., 2017). In particular, many SM posts seek support, with reposting aiming to spread messages through a multiplicative effect. Consequently, the purpose of this study is to address the research problem and gap suggesting that SM platform providers are doing little to tackle the spread and cascading of FN on SM.

By providing unlimited access to large amounts of information, SM enables people to share different beliefs and values (George et al., 2018; Kim et al., 2019; Rubin, 2019). However, the risks and implications of this new resource remain unclear to most of the population. One such risk is fake news (FN). FN, although unvetted, has a credible and professional appearance, ensuring that people cannot always distinguish it from true news (TN) (Kumar et al., 2018). The effects of FN cut across society; for example, the spread of FN on SM shapes how governments, organizations, and people respond to events. The majority of FN targets a specific segment of the population with the aim of promoting a certain ideology by stimulating strong beliefs and polarizing society (Chen & Sharma, 2015). According to Kumar et al. (2018), Lundmark et al. (2017), and Tandoc et al. (2019), a periodic review of FN on SM is thus required to limit discord and violence by groups or individuals in society.

FN has become a major part of SM, raising doubts about information credibility, quality, and verification. Studies investigating the influence of FN on SM have appeared in various fields such as digital media, journalism, and politics; however, in-depth analyses of the impact of FN on society remain scarce. Furthermore, despite the growing body of research on FN and SM, itself a significant factor in the fight against FN (Tandoc et al., 2018), an adequate review of the impact of FN on SM on society is also lacking.

Hence, the aim of this study is to explore the role of SM platform providers in reducing the spread of FN in society. The research gap identified from previous studies (Kim & Dennis, 2019; Kim et al., 2019; Knight & Tsoukas, 2019; Roozenbeek & van der Linden, 2019) is the limited research on the impact of FN on society, leading this study to answer the following research questions (RQs):

  • RQ1. Why does the cascading of FN impact negatively on society?
  • RQ2. Are the big SM organizations taking action to reduce FN cascading?

Based on the foregoing, this study provides a holistic view of the three focus areas (FN, SM, and societal acceptance) by reviewing research publications, case studies, and experts’ opinions to produce a conceptual framework, an insightful and comprehensive meta-framework. This study then analyzes the associations among the three distinct fields from theoretical and practical perspectives. These associations derived from the literature are tested using an analytic technique called fuzzy set analysis to show if they are supported, thereby indicating society’s efforts to combat FN. We find that people’s interpretations of what is TN or FN affect societal efforts to reduce the spread of FN.

The findings of this study contribute to research on FN on SM, specifically its societal impacts. They provide experts and researchers in these fields with insights into how communities are effectively combating the spread of FN and how to implement the ideas from this research to strengthen efforts to tackle FN on SM. Further, the findings of this research not only provide support for the associations but also demonstrate a model for societal strategies to manage the spread of FN as well as fact-checking and information verification, thus equipping society with the tools to recognize the differences between FN and TN.

The remainder of this study is organized as follows: the theoretical development of the conceptual meta-framework reviews the literature on the concepts of FN, SM, and societal acceptance. This is followed by the research method section, which describes the data and analysis and presents the results of the study. A discussion of the results and of the implications of this study for research, practice, and society follows, and the paper closes with limitations and future research directions.

Theoretical Development of the Conceptual Meta-Framework

FN is shaped to replicate TN by mimicking its characteristics (i.e., accuracy, verifiability, brevity, balance, and truthfulness) to mislead the public (Han et al., 2017; Kim & Dennis, 2019; Kim et al., 2019). FN is not a new phenomenon. According to Burkhardt (2017), FN can be traced back to at least Roman times, when Octavian, who would later become the first Roman Emperor, used fake news in his campaign against Mark Antony as the republican system collapsed. During the Roman period, there was no way of verifying and validating the authenticity of news, as challenging authority was classed as treason. The 20th century heralded a new era of numerous one-to-many communication modes such as newspapers, radio stations, and television stations, marking a new phase of misinformation in news (Aggarwal et al., 2012; Kim & Dennis, 2019; Kim et al., 2019; Knight & Tsoukas, 2019; Manski, 1993; Preti & Miotto, 2011; Roozenbeek & van der Linden, 2019). With the emergence of multimedia corporations, the content of FN has been gaining new audiences (Oestreicher-Singer & Zalmanson, 2013), and the arrival of the Internet towards the end of the century amplified the phenomenon of FN (Kapoor et al., 2018). As technology advanced in the 21st century, SM arrived, multiplying the dissemination of FN using both one-to-many and many-to-many strategies.

Understanding FN

FN content, which is divided into individual opinions and claims about the scientific consensus on trending issues such as COVID-19, evolution, and climate change, has long existed (Knight & Tsoukas, 2019). However, constant changes in political strategies have fundamentally impacted how information is defined, viewed, and interpreted at all levels of communication (Massari, 2010). Aggarwal and colleagues argued that incorrect scientific, political, and belief-oriented information has significant causes and consequences for individuals who are more politically inclined and for those aiming to push their ideas to wider society (Aggarwal et al., 2012). Therefore, individuals actively seeking information are united in their pursuit of knowledge and political action (Aggarwal & Singh, 2013). It is difficult to change their values and beliefs, to persuade them to abandon old ways and accept fact-checked news, or to find new methods of enlightening such individuals, or people with similar beliefs, to adopt any degree of news verification and validation (Cao et al., 2015; Centeno et al., 2015; Kim & Lyon, 2014).

As FN is fundamentally built on untraceable and misleading phenomena, experts and researchers have noted a rising interest in the development of fact-checking tools to spot the spread of FN content in society (Berkowitz & Schwartz, 2016; Hwang et al., 2011; Miranda et al., 2015; Miranda et al., 2016). However, despite the large investment in innovative tools for identifying, distinguishing, and reducing factual discrepancies (e.g., 'Content Authentication' by Adobe for spotting alterations to original content), the challenges concerning the spread of FN remain unresolved, as society continues to engage with, debate, and promote such content (Kwon et al., 2017; Pierri et al., 2020). Indeed, the gap between fact-checking and the fundamental values and beliefs of the public leads people to resist fact-checking rather than accept the dangers of FN (Kim & Lyon, 2014; Lukyanenko et al., 2014). Therefore, these tools do little to reduce the spread of FN in practice.

SM and Society

SM provides an environment in which individuals can exchange personal, group, or popular interests to build relationships with people who have similar and/or diverging beliefs and values. For example, most people of a particular age group share similar interests courtesy of growing up in the same era (Gomez-Miranda et al., 2015; Lyon & Montgomery, 2015; Miller & Tucker, 2013; Nerur et al., 2008). People's characteristics are often inherited from educational institutions, communities, and family lifestyles (Matook et al., 2015). Further, certain age groups continue to hold onto specific values and beliefs, as reflected in the public's response to the 2016 and 2020 U.S. presidential elections and the 2019 UK general election (Prosser et al., 2020; Wang et al., 2016). Accordingly, Venkatraman et al. (2018) argued that values and beliefs are passed down through family generations, making it possible for a group in society to continue to hold onto specific philosophies.

SM plays an important role in helping people reconnect with friends and families as well as find jobs and purchase products and services (Kim & Dennis, 2019 ; Leong et al., 2015 ; Lyon & Montgomery, 2015 ; Miller & Tucker, 2013 ; Nerur et al., 2008 ; Pierri et al., 2020 ). SM platforms are also channels for recruiting interested parties for the continuity and propagation of a long-held ideology. Moreover, people with common demographic attributes use the instant messaging services on SM to communicate more than those without such shared demographics (Baur, 2017 ). SM platforms are thus online services that mirror real-world activities (e.g., dating services from Facebook, live Instagram feeds from parties).

A societal acceptance strategy can reduce the spread of FN (Haigh et al., 2018; Lundmark et al., 2017; Lyon & Montgomery, 2015; Miller & Tucker, 2013; Nerur et al., 2008; Sommariva et al., 2018). However, the expansion of multiple access points for information and news sharing on SM platforms contributes more to the spread of falsity than to reducing its impact. Nevertheless, societal acceptance is considered to be a game-changer for controlling the spread of FN by SM (Egelhofer & Lecheler, 2019). Some empirical studies have analyzed the spread and flow of FN online (Garg et al., 2011; Gray et al., 2011), but little research examines how human judgment can differentiate truth from falsity. To reduce the spread of FN in society, it is important to understand the triangle of FN, the relationships between the constructs in each circle, and the associations that bind the circles, and then to analyze the strength of those relationships (Chang et al., 2015; Chen & Sharma, 2015; Matook et al., 2015).

Meta-framework on the Impact of FN

This study developed a meta-framework based on the literature on FN, SM, and societal acceptance. Each of these perspectives, depicted as circles in the meta-framework, covers the constructs that contribute to defining the clusters in theory. The constructs that emerge from each perspective are the foundation for the meta-framework's treatment of the relationships among their associations. This study further develops notations to define the associations. By combining the three defined circles, these perspectives provide a new theoretical framework, as previous studies have shown that approaches to conceptualizing a phenomenon span a wide spectrum (Table 1).

Summary of the key theoretical studies

  • Studies: Burkhardt; Kapoor et al.; Kim et al.; Pan et al.; Venkatraman et al.; Vosoughi et al. Context: verification/fact-checking. Research aims: establishing a system or processes dedicated to authenticating the content of news and its intentions. Summary/main outcome: comparing multiple platforms, users, and FN; evaluating and analyzing data using specific analytic techniques to derive results. Relationship to FN, SM, and SA: finding associations from the FN literature to support the meta-framework in this research. Benefit to FN, SM, and SA: supporting the investigation of the relationships defined regarding the attributes in the FN construct.
  • Studies: Brummette et al.; Chang et al.; George et al.; Kim & Dennis; Kwon et al.; Leong et al.; Sommariva et al. Context: SM platforms. Research aims: understanding the operations of platforms, analyzing the spread and cascading of news, and observing patterns in users' consumption behavior. Summary/main outcome: applying key fact-checking and cascading indicators to evaluate FN and content on SM platforms. Relationship: finding associations from the SM literature to support the meta-framework in this research. Benefit: supporting the investigation of the relationships defined regarding the attributes in the SM construct.
  • Studies: Barrett et al.; Brennen; Burkhardt; Fang et al.; Kapoor et al.; Lazer et al.; Posetti & Matthews; Tandoc et al. Context: society. Research aims: SA strategies, models, and implementations incorporating news content, content processes, and transmission. Summary/main outcome: a holistic approach comparing traditional news processes with modern news processes, and traditional news verification and validity with modern verification and validity. Relationship: finding associations from the SA literature to support the meta-framework in this research. Benefit: supporting the investigation of the relationships defined regarding the attributes in the SA construct.
  • Studies: Ragin; Ragin & Pennings. Context: fuzzy set. Research aims: a set-theoretic technique designed for set theory analysis by creating patterns of attributes defined by numerous features and generating outcomes on the construction of relationships. Summary/main outcome: complementarity and equifinality testing by generating consistency and solution coverage. Relationship: the combination system supports relationships among the FN, SM, and SA constructs. Benefit: a holistic approach targeting the mapping of new attributes in the three constructs to establish relationships among collecting data, testing theory, and producing outcomes.
  • Studies: Chen et al.; Kumar et al.; Kwon et al.; Roozenbeek & van der Linden; Venkatraman et al. Context: technology. Research aims: development of a hybrid intelligent system that supports fact-checking and uses SM and information management. Summary/main outcome: the system was empirically assessed with SM platforms' decision-makers; the results showed that the hybrid system supported strategy development. Relationship: an understanding of how technology supports the fight against the spread of FN and the challenges in its use. Benefit: society helping to reduce the spread and cascading of FN; understanding fact-checking and verifying news.

Note: SA = Societal acceptance

This study adopted the epidemiological model as a suitable theory for discussing the meta-framework perspectives. In particular, it employed the conceptual model of the disease triangle. The disease triangle was developed in the 1960s by George McNew to understand the pathology and epidemiology of plants and their diseases (Scholthof, 2007). This model states that for a disease to manifest, three fundamental elements are required: the environment; the infectious pathogen that carries the virus, bacteria, or other micro-organisms; and the host. In this study, FN is defined as the 'infectious pathogen', as it is an epidemic consisting of varieties of fake news (Pan et al., 2017). According to Scholthof (2007), the environment determines whether the infection can be controlled; here, as shown in Fig. 1, SM is conceptualized as the environment, and the hosts are the readers, individuals, and society.
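The triangle's core claim, that an FN outbreak requires all three elements at once, can be stated compactly in code. The sketch below is our illustration of the analogy, not a formalization from the paper; the class and field names are invented:

    from dataclasses import dataclass

    @dataclass
    class FakeNewsTriangle:
        pathogen_present: bool    # the FN content itself exists
        environment_open: bool    # an SM platform lets it cascade
        host_susceptible: bool    # readers accept and reshare it

        def can_manifest(self) -> bool:
            # As in the disease triangle, removing any one element
            # (e.g., fact-checking the environment) halts the outbreak.
            return (self.pathogen_present
                    and self.environment_open
                    and self.host_susceptible)

    print(FakeNewsTriangle(True, True, False).can_manifest())  # False: no receptive host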

Fig. 1 Fake news triangle

SM, as an environment for the cascading of FN, has a structure (Chen et al., 2015; Miller & Tucker, 2013; Scholthof, 2007). The aim of the SM structure is to generate content that attracts millions of views by re-sharing news or information targeted at a specific set of viewers. As content is shared and attains viral status in society, SM organizations leverage increased profits (Mettler & Winter, 2016). The SM structure is primarily designed around a content ranking system constructed with algorithmic ranking techniques, the method of data management, and significance leveling in data priority (Hamamreh & Awad, 2017). News and information are ranked in a methodological order that constructs a natural distribution by connecting the nodes of the SM network (Gerlach et al., 2015; Matook et al., 2015). In this ranking system, each node is assigned a unique code through an iterative process of weighting the network, and these weights are assigned according to the content structure of the SM node (Brennen, 2017; Burkhardt, 2017; Chen, 2018). According to Brennen (2017), Burkhardt (2017), Chang et al. (2014), Chen (2018), Maier et al. (2015), and Massari (2010), SM as the environment for infectious content such as FN comprises communication channels, including websites, mobile applications, and platforms, that facilitate relationship forming among users with similar content interests. Hence, the relevance of SM to various aspects of life is of high significance to users, government policies, and the economy.
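As an illustration of the iterative node-weighting idea described above, the sketch below redistributes content-derived weights along sharing links until heavily reshared nodes rank highest. This is a generic PageRank-style toy, not any platform's actual ranking algorithm; the function, the damping factor, and the example network are all assumptions:

    def rank_nodes(links, content_weight, damping=0.85, iterations=50):
        """links: dict mapping each node to the nodes it shares content to."""
        nodes = list(content_weight)
        weight = dict(content_weight)
        for _ in range(iterations):
            # Each node keeps a base weight derived from its own content...
            new_weight = {n: (1 - damping) * content_weight[n] for n in nodes}
            for node, targets in links.items():
                if not targets:
                    continue
                share = damping * weight[node] / len(targets)
                for t in targets:
                    new_weight[t] += share   # ...and reshared content accumulates weight
            weight = new_weight
        return sorted(weight, key=weight.get, reverse=True)

    # Hypothetical four-node sharing network
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    content_weight = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
    print(rank_nodes(links, content_weight))  # 'c' ranks highest: most shared-to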

This is somewhat consistent with the argument made by the Director-General of the World Health Organization (WHO), Tedros Ghebreyesus, at a foreign policy and security expert summit held in Germany in February 2020 (Union, 2020, May 19). Tedros argued that as the world continues to grapple with the COVID-19 contagion, an 'infodemic' is emerging, as FN continues to "spread faster and more" than COVID-19 (WHO Regional Office for Africa, 2020). Given the speed of the spread of FN, an infodemic can hinder the effectiveness of the public health response while propagating confusion and distrust in society.

As shown in Fig. 1, the hosts interact with those who have similar interests in their SM groups or forums and thus recruit new believers to the environment (Haigh et al., 2018; Humprecht, 2019a; Mettler & Winter, 2016; Roozenbeek & van der Linden, 2019; Rubin, 2019). These communities continue to grow as positive social networks expand. With the power of SM platforms, new groups are created that have a similar agenda, improving social learning and opportunities using SM platforms' tools (Kwon et al., 2017). One of the purposes of these strategies and networks is to clamp down as quickly as possible on people perceived as outsiders who may uncover or expose their content and philosophies.

Research Method

Research Design and Data Collection

This study carried out a cross-sectional online survey in 2019 to test the relationships and associations in the proposed meta-framework. The survey used stratified sampling, with participants divided into groups based on their demographics, proficiency in using SM platforms, and interest in news and current affairs online. Table 2 shows participants' profiles in terms of their gender, age, location, SM usage, and SM experience. The questionnaire was designed based on the identified research gap and the literature.
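For readers unfamiliar with this sampling step, the sketch below shows one generic way stratified sampling of this kind can be implemented. It is illustrative only; the strata, the sampling fraction, and the respondent pool are invented and do not reproduce the authors' procedure:

    import random

    def stratified_sample(population, key, fraction, seed=42):
        """population: list of dicts; key: stratum field; fraction: share drawn per stratum."""
        random.seed(seed)
        strata = {}
        for person in population:
            strata.setdefault(person[key], []).append(person)  # group by stratum
        sample = []
        for members in strata.values():
            k = max(1, round(len(members) * fraction))         # proportional allocation
            sample.extend(random.sample(members, k))
        return sample

    # Hypothetical respondent pool grouped by age band
    pool = [{"id": i, "age_band": band} for i, band in
            enumerate(["18-24"] * 40 + ["25-34"] * 60 + ["35-44"] * 50)]
    print(len(stratified_sample(pool, "age_band", 0.2)))  # 30: 8 + 12 + 10 per band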

Participants’ profiles

Sex: Male 137 (38.5%); Female 219 (61.5%)

Age: 18–24: 75 (21.2%); 25–34: 111 (31.1%); 35–44: 85 (23.7%); 45–54: 70 (19.6%); 55–64: 10 (2.9%); 65 or above: 5 (1.5%)

Location: Africa 45 (12.5%); Antarctica 21 (5.9%); Asia 41 (11.6%); Australia plus Oceania 45 (12.7%); Europe 92 (25.8%); North America 105 (29.4%); South America 7 (2.1%)

SM platform usage: Once a week 4 (1.0%); 2–4 times a week 7 (2.1%); 5–6 times a week 19 (5.2%); Once a day 56 (15.8%); 2–3 times a day 81 (22.9%); 4–5 times a day 88 (24.6%); More than 5 times a day 101 (28.4%)

SM platform experience: Less than a year 27 (7.6%); 1–2 year(s) 37 (10.5%); 3–4 years 65 (18.2%); 5–6 years 81 (22.7%); 7–8 years 79 (22.3%); 9–10 years 38 (10.6%); More than 10 years 29 (8.1%)

This study distributed the questionnaire to 2,234 actively engaged participants and received 546 responses, including both partial and completed questionnaires, a response rate of 24% that is consistent with previous studies (Arshad et al., 2014; Klashanov, 2018; Malik et al., 2020). After partial responses were excluded, 356 completed questionnaires were retained for analysis. The sample consists of participants from across the globe, with North America accounting for 29% of the total, the largest share. SM platform usage shows that 28% of the participants engage with the platforms more than five times daily, while 22.7% have 5 to 6 years of experience using SM platforms.

Analytical Technique

According to Ragin (2013) and Ragin and Pennings (2005), the fuzzy set theoretic approach can be used to evaluate theories, frameworks, and models with a deductive strategy driven by a positivist paradigm. Fuzzy set analysis is an emerging technique in management and the social sciences that has become more popular as its initial problems were overcome by the introduction of hybrid fuzzy set logic techniques. This study adopts the relationship and association testing suggested by Ragin (2009) to test the Boolean expressions in the fuzzy set theoretical approach for the four intersections in Fig. 2.
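As a concrete reference point, the two fsQCA measures used throughout the tables below, the consistency and coverage of a sufficiency relation, can be computed from fuzzy membership scores as in the following sketch. It is our illustration of Ragin's standard formulas; the variable names and example scores are invented:

    def consistency(x, y):
        """Consistency of the subset relation X <= Y: sum(min(x_i, y_i)) / sum(x_i)."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

    def coverage(x, y):
        """Coverage of Y by X: sum(min(x_i, y_i)) / sum(y_i)."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

    # Hypothetical membership scores for a condition (e.g., heavy SM use) and
    # an outcome (e.g., sharing unverified news); all values are made up.
    x = [0.8, 0.6, 0.9, 0.2, 0.7]
    y = [0.9, 0.7, 0.8, 0.4, 0.6]
    print(consistency(x, y))  # ~0.94: X is largely a subset of Y
    print(coverage(x, y))     # share of the outcome accounted for by X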

Fig. 2 Integrated meta-framework

This study proposes an eight-step process flowchart consisting of four loop relationships (represented by the double-line diamonds in Fig. 3) and three predictive relationships (represented by the single-line diamonds) that together show the relationships used to discuss the outcomes of the analysis. The flowchart is described as follows, and a sketch of its decision logic appears after the list:

Fig. 3 Flow chart for the consistency analysis

  • A loop relationship for the expression that a solution pathway is reliable shows whether the consistency of the sufficiency analysis is greater than the 0.7 consistency threshold defined in this paper for the solution pathways. Any relationship that falls below the set threshold is eliminated from further analysis, as it does not achieve acceptable reliability.
  • A loop relationship for an expression that a solution pathway is accepted shows whether the consistency of A1 is greater than 0.7. This statement suggests that any relationship that falls below the acceptable criteria in the solution pathway must be rejected.
  • A double line diamond relationship for a strongly supported expression shows whether the consistency of A2, A3, and A4 is less than or equal to 0.7. This statement suggests that any relationship that passes the acceptance criteria does not have significant contradictory proofs.
  • A single line diamond relationship for an expression not supported by itself (however, subsequent relationships can benefit) can be described by the consistency of A3, which is less than or equal to 0.7. Furthermore, A3 represents the type I consistency error, and it is usually below the acceptance threshold.
  • A loop relationship for the expression that a solution pathway is weakly supported shows whether, in the sufficiency analysis, the consistency of A1 is greater than that of A3 for the solution pathways, as defined for the consistency threshold analysis. Any relationship that falls below the set threshold is eliminated from further analysis, as it does not achieve acceptable reliability.
  • A double line diamond relationship for a supported expression shows whether the consistency of A4 is less than or equal to 0.7. This statement suggests that any relationship that passes the acceptance criteria does not have a significant error during analysis and this supports classification.
  • A loop relationship for an expression that a solution pathway is not weakly supported shows whether the consistency of A2 is greater than 0.7. This statement suggests that any relationship that falls below the acceptable criteria in the solution pathway can be improved and there is weak support for classification.
  • A double line diamond relationship for a supported expression shows whether the consistency of A2 is greater than or equal to A4. This statement suggests that any relationship that passes the acceptance criteria and partially supports the conditions for A2 and A4 represents the type II consistency error; this is usually equal to or greater than the acceptance threshold.
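The following minimal sketch condenses this decision logic into code. It is our illustrative reading of Fig. 3, not the authors' implementation: the function name and the ordering of the checks are assumptions, with only the A1–A4 consistencies and the 0.7 threshold taken from the paper:

    THRESHOLD = 0.7  # consistency threshold defined in the paper

    def classify_pathway(a1, a2, a3, a4):
        """Classify a solution pathway from the consistencies of intersections A1-A4.

        A simplified reading of the flowchart: reliability first, then
        progressively weaker forms of support.
        """
        if a1 <= THRESHOLD:
            return "rejected"                      # fails the reliability loop
        if a2 <= THRESHOLD and a3 <= THRESHOLD and a4 <= THRESHOLD:
            return "strongly supported"            # no significant contradictory proof
        if a4 <= THRESHOLD:
            return "supported"                     # no significant classification error
        if a1 > a3:
            return "weakly supported"              # outweighs the type I error term
        return "not supported"

    print(classify_pathway(0.72, 0.44, 0.65, 0.60))  # hypothetical scores -> "strongly supported"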

Data Analysis and Results

According to Deutsch and Malmborg (1985), complementarity and equifinality, the two underlying features of the fuzzy set theoretic approach, display patterns of attributes and different results depending on the structure of the constructs. In addition, the attributes in the constructs concern the presence or absence of conditions and the associations formed during conceptualization, rather than isolating the attributes from the constructs. Furthermore, complementarity exists if there is proof that causal factors display a match in their attributes and the analysis shows a higher level in the results, while equifinality exists if at least two non-identical pathways, known as causal factors, show the same results (Herrera-Restrepo et al., 2016).
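A toy illustration of equifinality in this sense: using the consistency measure sketched in the Analytical Technique section, two different condition combinations can both qualify as sufficient pathways to the same outcome. All membership scores below are invented:

    def fuzzy_and(a, b):
        return [min(x, y) for x, y in zip(a, b)]  # fuzzy intersection of conditions

    def consistency(x, y):
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

    outcome   = [0.9, 0.8, 0.7, 0.9, 0.6]   # e.g., acceptance of FN (hypothetical)
    heavy_use = [0.8, 0.7, 0.5, 0.9, 0.3]
    no_verify = [0.9, 0.6, 0.6, 0.8, 0.4]
    young     = [0.7, 0.9, 0.4, 0.8, 0.5]

    p1 = fuzzy_and(heavy_use, no_verify)    # pathway 1: heavy use AND no verification
    p2 = fuzzy_and(young, no_verify)        # pathway 2: young AND no verification
    print(consistency(p1, outcome), consistency(p2, outcome))  # both high -> equifinality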

In Table  3 , the attributes of the constructs indicate the relationships that provide empirical evidence to reject or support the model. The results demonstrate that the relationships are mostly rejected. We find that a higher consistency level directly results in a higher reliability of the relationship. The three combinations of attributes in the sufficiency analysis show that the input efficiency either fails or passes the set consistency threshold requirement (consistency and coverage are 0.72 and 0.44, respectively).

Results for A1: CN-VN-TN/US·VA/US·NW

Values are listed for A1: FN/US·VA (S1, S2, S3), then after the "|" for A1: FN/US·NW (S1); "–" marks a cell left empty in the original table.

Raw coverage: 0.229618, 0.209680, 0.183706 | 0.022014
Unique coverage: 0.137127, 0.107350, 0.069850 | 0.022014
Solution coverage: 0.437901 | 0.022014
C1: H•S⊂Y, consistency: 0.539667, 0.545450, 0.622072 | –
C1: H•S⊂Y, raw coverage: 0.043730, 0.043524, 0.036555 | 0.003689
C2: ~H•S⊂Y, raw coverage: 0.227479, 0.210136, 0.183932 | 0.022590
C3: H•~S⊂~Y, consistency: –, –, – | 0.651971
C3: H•~S⊂~Y, raw coverage: 0.112421, 0.112421, 0.112421 | 0.100733
C4: ~H•~S⊂Y, consistency: 0.463812, 0.478831, 0.485383 | 0.523584
C4: ~H•~S⊂Y, raw coverage: 0.837649, 0.873858, 0.891719 | 0.934861
Solution pathway result: Reject, Reject, Reject | Support
Combined solution pathway unique coverage of same result: 0.314327 | 0.022014

Note: the bold entries in the original table indicate the impact of the findings and are used to further the discussion section

In Table 4, the relationships indicate support for the empirical findings. The results show that the attributes of the constructs have higher combined solution pathways than the attributes in Table 3. The type II error (or false negative) is one form of contradiction ignored in Fig. 3. These findings show the least likely attributes of the constructs, indicating the continuation of existing relationships as well as supporting the higher consistency level of the associations and stronger support for further relationships. Hence, this analysis can introduce additional causal conditions of similar attributes not yet shown in the current relationships by tracking back to the relationship mapping data and finding common attributes in the existing constructs. This may explain the undefined variance in the existing relationships.

Results for A2: PM-DS-CO/US·VA/US·NW

Values are listed for A2: SM/US·VA (S1–S4), then after the "|" for A2: SM/US·NW (S1–S4); "–" marks a cell left empty in the original table.

Consistency: 0.625760, 0.693128, –, – | 0.663176, –, –, –
Raw coverage: 0.479140, 0.226493, 0.172121, 0.172026 | 0.098641, 0.159101, 0.110858, 0.055455
Unique coverage: 0.238754, 0.069801, 0.002659, 0.002450 | 0.040192, 0.074229, 0.019843, 0.002375
Solution consistency: 0.602613 | 0.688200
Solution coverage: 0.554164 | 0.242285
C1: H•S⊂Y, consistency: 0.674924, –, –, – | –, –, –, –
C1: H•S⊂Y, raw coverage: 0.056821, 0.052946, 0.054630, 0.057186 | 0.050152, 0.054088, 0.043457, 0.056607
C2: ~H•S⊂Y, consistency: 0.625714, 0.692681, –, – | 0.678735, –, –, –
C2: ~H•S⊂Y, raw coverage: 0.478587, 0.226085, 0.171354, 0.171172 | 0.100992, 0.158391, 0.109838, 0.056607
C3: H•~S⊂~Y, consistency: 0.666045, 0.666045, 0.666045, 0.636616 | 0.670967, 0.681394, 0.681394, 0.663628
C3: H•~S⊂~Y, raw coverage: 0.072447, 0.072447, 0.072447, 0.063638 | 0.071768, 0.075269, 0.075269, 0.069434
C4: ~H•~S⊂Y, consistency: 0.538359, 0.532113, 0.526908, 0.527574 | 0.536492, 0.536244, 0.537995, 0.530698
C4: ~H•~S⊂Y, raw coverage: 0.623064, 0.842742, 0.894709, 0.896900 | 0.936302, 0.897471, 0.934667, 0.967440
Solution pathways result: Ignore, Ignore, Support, Support | Ignore, Support, Support, Support
Combined solution pathway unique coverage of result: 0.005109 | 0.096447

Table 5 shows the combined solution pathways for consistency and coverage, indicating support for most of the attributes of the constructs. This indicates a type I error (or false positive) in the form of contradictions of the variances in the relationships, while the higher consistency level of the associations supports the higher values that delimit the relationships. Therefore, the unconfirmed attributes indicate a restriction of the current relationships.

Results for A3: A1-A2/US·VA/US·NW

Values are listed for A3: A1·A2/US·VA (S1–S5), then after the "|" for A3: A1·A2/US·NW (S1–S3); "–" marks a cell left empty in the original table.

Consistency: –, –, –, –, – | 0.673542, –, –
Raw coverage: 0.272201, 0.131173, 0.196403, 0.265147, 0.070395 | 0.259547, 0.284802, 0.266998
Unique coverage: 0.137118, 0.037563, 0.005708, 0.054258, 0.002810 | 0.051003, 0.076259, 0.060114
Solution consistency: 0.660851 | –
Solution coverage: 0.477160 | 0.395919
C1: H•S⊂Y, raw coverage: 0.063707, 0.072059, 0.067632, 0.084341, 0.071578 | 0.071564, 0.087208, 0.069849
C2: ~H•S⊂Y, consistency: –, –, –, –, – | 0.673175, –, –
C2: ~H•S⊂Y, raw coverage: 0.272237, 0.134220, 0.195520, 0.263983, 0.071578 | 0.256083, 0.270314, 0.266000
C3: H•~S⊂~Y, consistency: –, –, –, –, – | 0.529645, 0.595851, 0.520320
C3: H•~S⊂~Y, raw coverage: 0.086160, 0.083161, 0.086160, 0.086160, 0.081162 | 0.054214, 0.054214, 0.054214
C4: ~H•~S⊂Y, consistency: 0.474625, 0.472827, 0.471777, 0.481787, 0.458589 | 0.478524, 0.473277, 0.463005
C4: ~H•~S⊂Y, raw coverage: 0.876657, 0.976411, 0.934270, 0.900039, 0.989185 | 0.813244, 0.787465, 0.786341
Solution pathway result: Support, Support, Support, Ignore, Support | Support, Support, Support
Combined solution pathway unique coverage of result: 0.183199 | 0.187376

In Table  6 , this combined solution pathway indicates that neither the predicted relationships nor the coverage by attributes’ definitions of the constructs are strongly supported in terms of societal acceptance and the challenges posed by FN on SM on society. Therefore, alternative variances, as understood by the society, are better-supporting conditions for the relationship’s definitions in A4. Five of the six pathways are equal to or greater than the defined threshold, indicating that the relationships between the constructs can benefit from trade-offs. Furthermore, there are similar results for the unique coverage, signaling a significantly high-efficiency input directly linked to the variance from the causal conditions.

Results for A4: A1-A2/A3

Values are listed for the first A4: A1·A2/A3 block (S1–S4), then after the "|" for the second A4: A1·A2/A3 block (S1–S6); "–" marks a cell left empty in the original table.

Consistency: 0.648344, 0.663247, –, – | 0.697460, –, –, –, –, –
Raw coverage: 0.196212, 0.374276, 0.115329, 0.172121 | 0.102809, 0.159101, 0.110858, 0.250632, 0.153986, 0.033637
Unique coverage: 0.054184, 0.241412, 0.037515, 0.032313 | 0.032455, 0.058696, 0.016003, 0.120965, 0.028882, 0.010464
Solution consistency: 0.635798 | –
Solution coverage: 0.538797 | 0.454133
C1: H•S⊂Y, consistency: –, –, –, – | 0.672732, 0.688173, –, –, –, –
C1: H•S⊂Y, raw coverage: 0.054777, 0.042356, 0.059158, 0.046974 | 0.054794, 0.041098, 0.039283, 0.016219, 0.018201, 0.005915
C2: ~H•S⊂Y, consistency: 0.645642, 0.663392, –, – | 0.697353, –, –, –, –, –
C2: ~H•S⊂Y, raw coverage: 0.192817, 0.375529, 0.111991, 0.171354 | 0.102856, 0.158391, 0.109838, 0.250811, 0.154502, 0.033991
C3: H•~S⊂~Y, consistency: 0.615825, 0.600694, 0.643375, 0.600694 | 0.596100, 0.600781, 0.600781, 0.600781, 0.600781, 0.600781
C3: H•~S⊂~Y, raw coverage: 0.046819, 0.046819, 0.046819, 0.046819 | 0.044053, 0.047553, 0.047553, 0.047553, 0.047553, 0.047553
C4: ~H•~S⊂Y, consistency: 0.544902, 0.542449, 0.517564, 0.524309 | 0.525862, 0.532296, 0.532542, 0.526383, 0.539682, 0.528046
C4: ~H•~S⊂Y, raw coverage: 0.897811, 0.736226, 0.933547, 0.896900 | 0.934876, 0.897471, 0.937648, 0.798192, 0.905846, 0.958520
Solution pathway result: Ignore, Ignore, Support, Support | Support, Support, Support, Ignore, Reject, Support
Combined solution pathway unique coverage of result: 0.069828 | 0.117618, 0.028882

To fully understand the A4 outcomes, it is important to discuss the outcomes from A1, A2, and A3 simultaneously. A1 and A2 are insufficient to support a high input efficiency, indicating that SM would fade out without a correlation with FN. For a high input efficiency, the combination of the two constructs is highly significant to the relationships. However, A3, which considers all the attributes in the societal acceptance constructs, rejects the associated attributes from A1, whereas it shows weak support for A2, indicating that the conditions are peripheral or unconcerned with the variance. This explains the weak support in the attributes of their relationships. The A4 outcome shows that this study considers the attributes of the relations between A1 and A2, as A3 can explain the outcomes of redefining and reducing the impact of both associations.

Discussion

The aim of this research was to investigate the impact of FN on society and the use of SM as a platform for the cascading of information and news. This study therefore explores the conceptual model of the disease triangle (Piccialli et al., 2021), which identifies FN as the infectious pathogen in Fig. 1 (SM platforms host and spread FN); without societal acceptance, it is difficult for information and news to cascade. Furthermore, FN as defined in this study holds three main features that are significant for the perceptions of society: the content of the news, the intentions of the news, and the verification of the news. The comparative technique used to outline the findings (fsQCA) suggests that societal acceptance is important in understanding the impact of FN. To better understand FN, SM, and societal acceptance, this study developed a meta-framework and analyzed the relationships among the attributes of the three constructs within it. An online survey with 356 participants was carried out with a stratified sample to test the meta-framework, and the data collected were categorized according to the relationships designed in the constructs. This study considered SM platforms and the activities stimulating the cascading processes of FN, changing societal acceptance through the lens of content management.

Previous studies (Modgil et al., 2021; Parra et al., 2021; Piccialli et al., 2021) have shown that SM platforms are increasingly changing the business activities and strategies used to position new products and brands, while also contributing to misinformation in society. They have also analyzed SM platforms as environments for business and social transactions that focus on capturing the largest audiences for information cascading, which furthers the spread of FN through the cascading tools available on SM. According to Dwivedi et al. (2018), Kim and Dennis (2019), and Kim et al. (2019), the cascading of FN through SM platforms is growing faster than anticipated. The results of this study identify focus areas that can reduce the spread of FN on SM.

The results gathered during the data analysis of the validated questionnaire demonstrate this study's important contributions to minimizing the cascading of FN in society. The evaluation of the three perspectives (FN, SM, and societal acceptance) was further developed into relationship mapping by considering the entities from each perspective, as shown in Fig. 2. The results in Table 3 suggest that the relationship A1: FN/US·VA between the FN perspective and the entities users and values of the societal perspective is rejected, while the relationship A1: FN/US·NW between the FN perspective and the entities users and networks of societal acceptance is supported. Furthermore, the outcomes in Table 3 concur with the disease triangle theory, which discusses the pathology model for disease manifestation, stating that the three triangular elements of an infectious pathogen must be present for the disease to grow (Humprecht, 2019b; Rubin, 2019; Sommariva et al., 2018). Hence, the relationship A1: FN/US·VA lacks the environment (networks) for the cascading of FN content.

Table 4 shows support for the relationship mapping between the SM and societal acceptance perspectives, with the constructs' consistency and coverage meeting the requirement set in Fig. 3. However, conditions S1 and S2 for A2: SM/US·VA and S1 for A2: SM/US·NW were ignored in the results, suggesting that there are other sources of information, such as true news and entertainment content, that users engage with on SM platforms. According to Kwon et al. (2017), SM platforms provide positive opportunities, such as learning new skills, engaging with experienced individuals and mentors, and finding new friendships, that directly benefit society.

The increase in the level of FN cascading can be attributed to SM companies' drive to grow the size of big data, leading to the strategic multiplication of end-to-end nodes (Haigh et al., 2018). This study demonstrates that the enabling environment for the spread of FN is attributable to the structure and strategies of SM companies. As shown in Table 6, when SM companies implement effective fact-checking tools on SM platforms, the traffic of FN is minimized and its impact on society is reduced. The relevant role of SM companies is to ensure that verification and fact-checking are embedded into the process of retrieving news and information.

In summary, previous studies (Dwivedi et al., 2018; Kim et al., 2019; Malik et al., 2020; Modgil et al., 2021; Roozenbeek & van der Linden, 2019) demonstrated the gap for an investigation of the societal acceptance of content available on SM. Our findings show that the societal acceptance of information and news is highly dependent on the verification and fact-checking features available on SM platforms. Therefore, the research questions in this study outlined the need for the fact-checking and verification of information and news, most importantly FN, on SM. The results of the complementarity assessments show that SM and societal acceptance significantly influence the cascading of content toward users. Specifically, FN cascades faster than any other type of content on SM, as shown in Table 5. With regard to societal acceptance, users' distribution of FN content unconsciously aids cascading, often with the intention of spreading awareness about the situation surrounding FN events.

Theoretical Implications

This study builds on the theoretical knowledge in the literature by making a significant contribution to the understanding of the impact of FN and SM platforms on society. Drawing on studies (Abouzeid et al., 2021; Au et al., 2021; Dwivedi et al., 2018; Kim et al., 2019; Parra et al., 2021; Tran et al., 2021) with a combined body of knowledge on misinformation, FN, SM, SM platforms, the cascading of FN, and the risks of misinformation, this study identifies three main themes in our contribution: FN, SM, and societal acceptance. Previous studies (Orso et al., 2020; Pennycook et al., 2020) have presented the FN and SM concepts; however, this study's introduction of societal acceptance is a novel theoretical contribution. Furthermore, the lack of studies on the societal acceptance of the cascading of FN has generated a theoretical gap in understanding FN, misinformation, and SM. Therefore, the results in our paper fill this research gap by validating the proposed features of societal acceptance: users, networks, and values.

The findings of this study contribute to theory by using complementarity among FN, SM, and societal acceptance to explain their influence by evaluating all the attributes in the three constructs, building relationships, and presenting findings that identify the significance of each association to reduce the cascading of FN in society. Therefore, this research answers the call of studies (George et al., 2018 ; Miller & Tucker, 2013 ; Miranda et al., 2016 ) that have suggested further work on FN on SM. Further, this study explains the impact of FN on society by exploring the conditions in different scenarios and with different complementarity values. It also shows how SM (i.e., the environment) and users can strategically deploy all resources to tackle the cascading and spread of FN. Most importantly, fuzzy set theory provides a data analysis structure that shows complex causality, enabling this research to present empirical findings.

Theoretically, the outcomes show the importance of fact-checking and managing cascading in reducing the spread of FN content in society, as well as the role of SM companies' continued commitment to the cause of minimizing the impact of FN. To date, this is the first study to develop a meta-framework to examine the impact of FN distributed on SM on society. This study argued that exploring fact-checking and managing cascading will provide a platform for SM companies to contribute to tackling the impact of FN on society. This study finds that SM as a type of environment is equipped with the technological know-how to tackle the spread of FN. This is particularly so for large SM organizations such as Facebook, whose main business is SM content. Therefore, investment in technological research and service innovation is becoming a priority. However, more investment is required for fact-checking and analyzing cascading news, meaning that SM organizations with technical research facilities are more likely to initiate rigorous fact-checking campaigns. Hence, profitability and market growth may determine how far SM organizations implement fact-checking and news-cascading technologies that benefit society.

Practical Implications

Based on the outcomes obtained from the complementarity of the fuzzy set analysis, it is important for SM platform providers to continue to invest in fact-checking and in managing the FN content that influences users' perceptions. In addition, it is very important to manage the direct impact of FN content on society by increasing the number of fact-checking and verification tools available on SM, for instance, through vigorous campaigns on the important role of news and information verification across all SM platforms and by ensuring that there is educational information about the impact that spreading FN on SM has on society at large. SM organizations should also implement safe technology, such as the real-time deletion of FN content, to ensure a safer communication environment for users. Furthermore, distinguishing real news from fake news using assistive technology will boost confidence in society. The comprehensive theoretical review and in-depth empirical analysis of the complex causality of FN on SM and its societal effects in this study allow SM organizations to reconsider their organizational strategies to reduce FN cascading and implement sustainable solutions. SM organizations should prioritize the allocation of resources toward measures that tackle the challenges FN poses to society, as well as the cost, societal impact, and misinformation linked to regulations to halt the spread of FN.

Implications for Society

Through its in-depth empirical analysis of FN on SM and its societal impact, this study provides SM users with a basis for judging how far the facts published on SM can be trusted and how to filter FN from TN. SM organizations such as Facebook and Twitter have invested heavily in tackling the publishing of FN on social media, yet FN has still spread drastically on SM during certain urgent situations.

Following the countless challenges that have arisen around the world due to FN published on SM and its societal impact, SM organizations have taken larger steps toward minimizing FN before it is published and open to the public. The flowchart for the consistency analysis can be used by SM organizations to analyze published news on SM and distinguish FN from TN. Thus, the negative impact of FN on users and their lives can be minimized. Although steps have been taken by SM organizations, it is also users' responsibility to filter TN from FN, even when posted on verified accounts, by fact-checking or using appropriate verification (Nagi, 2020).

Conclusions

The results of this study demonstrate that it is important for SM platform providers to continue their efforts to understand the risks of the cascading of FN and its influence on society at large. The implementation of fact-checking tools is significant in reducing the spread of FN and building trust and confidence in society. SM platform providers should ensure continuous monitoring of online activities triggered by the spread of FN, as well as periodic upgrades of fact-checking technologies to tackle the new tricks and strategies used to cascade FN in society (Modgil et al., 2021; Parra et al., 2021). Furthermore, fact-checking information and public awareness of how to verify news can be added to campaigns to support affected societies in combating the impact of FN. The findings of our study demonstrate that societal acceptance is a powerful tool that can persuade society to focus on achieving a common goal. The role of society is to harness the strength of societal acceptance to drive positive cultural change that welcomes the fact-checking and verification of any form of news.

Limitations and Future Research Directions

This study, like other studies, has limitations that suggest future research directions. This study analyzed how three constructs, FN, SM, and societal acceptance, impact society. Other constructs, such as SM firms' power, political strategies, and societal perceptions, were not included. In addition, our data collection focused on people who engage most frequently with SM; experts and SM analysts may be relevant for future research to examine. Given that previous researchers have focused on cascading FN and fact-checking news content to distinguish TN from FN, the influence of fact-checking and analyzing FN cascading could be tested in future research with new datasets. In this vein, this study did not consider the financial impact of FN on SM on society, which is another interesting area for future research.

This cross-sectional research aimed to provide an in-depth understanding of the relationships of the three studied topics by analyzing data from many demographics rather than from one location. Therefore, the findings of this study support generalization to many locations. However, since some studies consider the results from a single location, future research could compare the complementarity, consistency, and coverage of a single location with many locations, which would enrich the findings of this study.

Biographies

Femi Olan is a Senior Lecturer in Business Information Management at Northumbria University, UK. He obtained his PhD degree from Plymouth University, UK. He has teaching and industry experience in the field of information systems. His research interests focus on knowledge sharing, organizational factors, and performance management in organizations. He has collaborated on numerous research projects for, among others, development agencies. He has authored numerous articles in peer-reviewed journals and books.

Uchitha Jayawickrama is a Lecturer in Information Systems (equivalent to Assistant Professor) at the Information Management Group, School of Business and Economics, Loughborough University, UK. He obtained his PhD degree from Plymouth University, UK. He has research, teaching, and industry experience in the field of information systems, particularly in the areas of enterprise systems, cloud ERP, business process automation, knowledge management, knowledge management systems, digitization (digital innovation & productivity), business intelligence, data analytics, and business process re-engineering. He has published research in various renowned conferences, books, and journals. He is involved in several research projects internally and externally. He is a reviewer for several journals and international conferences. He has editorial experience in various journals. He is a member of several scientific/technical/program committees.

Emmanuel Ogiemwonyi Arakpogun is a Senior Lecturer in International Business Management at Newcastle Business School. His research interests lie at the nexus of the liberalization of the telecommunications market and universal access policies as a combined strategy for closing the digital divides in emerging economies. He is a reviewer for Information Technology and People.

Jana Suklan is an Associate Researcher at the Translational and Clinical Research Institute at Newcastle University. She works across the University and the National Institute for Health Research Newcastle In Vitro Diagnostics Co-operative. She holds a PhD in Interdisciplinary Statistics from the University of Ljubljana, Slovenia. Her thesis covered the application of econometric models for the analysis of synergetic effects within channels of integrated marketing communications. Her current work focuses on evaluations of novel medical devices from very early stages to adoption. She is professionally active in several research areas including social research, business and management, innovation, and healthcare.

Shaofeng Liu is Professor of Operations Management and Decision-making. She obtained her PhD degree from Loughborough University, UK. Her main research interests and expertise are in knowledge-based techniques to support business decision-making, particularly in the areas of knowledge management, integrated decision support, digital business, and quantitative decision methods. She is a senior editor for Cogent Business and Management, an open access journal. She has undertaken several influential research projects funded by UK research councils and the European Commission with a total value of over €40 million. She is currently the PI and Co-I for four EU projects under the Horizon 2020 program. She has published over 150 peer-reviewed research papers.

Declarations

There is no conflict of interest, and no funding was received for conducting this study. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this study.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Femi Olan, Email: [email protected] .

Uchitha Jayawickrama, Email: [email protected] .

Emmanuel Ogiemwonyi Arakpogun, Email: [email protected] .

Jana Suklan, Email: [email protected] .

Shaofeng Liu, Email: [email protected] .



A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions

Bogoan Kim, Aiping Xiong, Dongwon Lee, Kyungsik Han

Affiliations: School of Intelligence Computing, Hanyang University, Seoul, Republic of Korea; College of Information Sciences and Technology, Pennsylvania State University, State College, PA, United States of America

* E-mail: [email protected]

  • Published: December 9, 2021
  • https://doi.org/10.1371/journal.pone.0260080

28 Dec 2023: The PLOS One Staff (2023) Correction: A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions. PLOS ONE 18(12): e0296554. https://doi.org/10.1371/journal.pone.0296554 View correction


Although fake news creation and consumption are mutually related and each can turn into the other, our review indicates that a significant amount of research has primarily focused on news creation. To mitigate this research gap, we present a comprehensive survey of fake news research, conducted in the fields of computer and social sciences, through the lens of news creation and consumption with internal and external factors.

We collect 2,277 fake news-related articles by searching six primary publishers (ACM, IEEE, arXiv, APA, ELSEVIER, and Wiley) from July to September 2020. These articles are screened according to specific inclusion criteria (see Fig 1). Eligible articles are categorized, and temporal trends of fake news research are examined.

As a way to acquire a more comprehensive understanding of fake news and identify effective countermeasures, our review suggests (1) developing a computational model that considers the characteristics of news consumption environments by leveraging insights from social science, (2) understanding the diversity of news consumers through mental models, and (3) increasing consumers’ awareness of the characteristics and impacts of fake news through the support of transparent information access and education.

We discuss the importance and direction of supporting one’s “digital media literacy” in various news generation and consumption environments through the convergence of computational and social science research.

Citation: Kim B, Xiong A, Lee D, Han K (2021) A systematic review on fake news research through the lens of news creation and consumption: Research efforts, challenges, and future directions. PLoS ONE 16(12): e0260080. https://doi.org/10.1371/journal.pone.0260080

Editor: Luigi Lavorgna, Universita degli Studi della Campania Luigi Vanvitelli, ITALY

Received: March 24, 2021; Accepted: November 2, 2021; Published: December 9, 2021

Copyright: © 2021 Kim et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript.

Funding: This research was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (2019-0-01584, 2020-0-01373).

Competing interests: The authors have declared that no competing interests exist.

1 Introduction

The spread of fake news not only deceives the public but also affects society, politics, the economy, and culture. For instance, Buzzfeed ( https://www.buzzfeed.com/ ) compared and analyzed engagement (e.g., likes, comments, share activities) with the 20 real and 20 fake news articles that spread the most on Facebook during the last three months of the 2016 US Presidential Election. According to the results, the engagement rate of fake news (8.7 million) was higher than that of mainstream news (7.3 million), and 17 of the 20 fake news stories worked to the advantage of the winning side [1]. Pakistan’s Ministry of Defense posted a tweet fiercely condemning Israel after coming to believe that Israel had threatened Pakistan with nuclear weapons, which was later found to be false [2]. Recently, the spread of the absurd rumor that COVID-19 propagates through 5G base stations in the UK caused many people to become upset and resulted in a base station being set on fire [3].

The fake news phenomenon has been rapidly evolving with the emergence of social media [4, 5]. Fake news can be quickly shared by friends, followers, or even strangers within only a few seconds. Repeating a series of these processes could lead the public to form a flawed collective intelligence [6]. This could further develop into diverse social problems (i.e., setting a base station on fire because of rumors). In addition, some people believe and propagate fake news due to their personal norms, regardless of the factuality of the content [7]. Research in social science has suggested that cognitive bias (e.g., confirmation bias, bandwagon effect, and choice-supportive bias) [8] is one of the most pivotal factors in making irrational decisions in terms of both the creation and consumption of fake news [9, 10]. Cognitive bias greatly contributes to the formation and reinforcement of the echo chamber [11], meaning that news consumers share and consume information only in the direction of strengthening their beliefs [12].

Research using computational techniques (e.g., machine or deep learning) has been actively conducted for the past decade to investigate the current state of fake news and detect it effectively [13]. In particular, research into text-based feature selection and the development of detection models has been very active and extensive [14–17]. Research has also been active in the collection of fake news datasets [18, 19] and fact-checking methodologies for model development [20–22]. Recently, Deepfake, which can manipulate images or videos through deep learning technology, has been used to create fake news images or videos, significantly increasing social concerns [23], and a growing body of research is being conducted to find ways of mitigating such concerns [24–26]. In addition, some research on system development (i.e., a game to increase awareness of the negative aspects of fake news) has been conducted to educate the public and keep them from falling into echo chambers, misunderstanding, poor decision-making, blind belief, and the propagation of fake news [27–29].

While the creation and consumption of fake news are clearly different behaviors, the characteristics of the online environment (e.g., information can be easily created, shared, and consumed by anyone at any time from anywhere) have blurred the boundaries between fake news creators and consumers. Depending on the situation, people can quickly change their roles from fake news consumers to creators, or vice versa (with or without intention). Furthermore, news creation and consumption are the most fundamental aspects that form the relationship between news and people. However, a significant amount of fake news research has been positioned in news creation, while considerably less focus has been placed on news consumption (see Figs 1 & 2). This suggests that we must consider fake news as a comprehensive phenomenon spanning both news consumption and creation.

[Fig 1. PRISMA flow chart of the articles identified, included, and excluded. https://doi.org/10.1371/journal.pone.0260080.g001]

[Fig 2. The papers were published in IEEE, ACM, ELSEVIER, arXiv, Wiley, and APA from 2010 to 2020, classified by publisher, main category, sub-category, and evaluation method (left to right). https://doi.org/10.1371/journal.pone.0260080.g002]

In this paper, we looked into fake news research through the lens of news creation and consumption (Fig 3). Our survey results offer different yet salient insights on fake news research compared with other survey papers (e.g., [13, 30, 31]), which primarily focus on fake news creation. The main contributions of our survey are as follows:

  • We investigate trends in fake news research from 2010 to 2020 and confirm a need for applying a comprehensive perspective to fake news phenomenon.
  • We present fake news research through the lens of news creation and consumption with external and internal factors.
  • We examine key findings with a mental model approach, which highlights individual differences in information understanding, expectations, and consumption.
  • We summarize our review and discuss complementary roles of computer and social sciences and potential future directions for fake news research.

[Fig 3. We investigate the fake news research trend (Section 2) and examine fake news creation and consumption through the lenses of external and internal factors: (a) fake news creation (Section 3) and (b) fake news consumption (Section 4). “Possible moves” indicates that news consumers “possibly” create or propagate fake news without being aware of any negative impact. https://doi.org/10.1371/journal.pone.0260080.g003]

2 Fake news definition and trends

There is still no socially agreed-upon definition of fake news that encompasses false news and the various types of disinformation (e.g., satire, fabricated content) [30]. The definition continues to change over time and may vary depending on the research focus. Some research has defined fake news as false news based on the intention and factuality of the information [4, 15, 32–36]. For example, Allcott and Gentzkow [4] defined fake news as “news articles that are intentionally and verifiably false and could mislead readers.” On the other hand, other studies have defined it as “a news article or message published and propagated through media, carrying false information regardless of the means and motives behind it” [13, 37–43]. Given this definition, fake news refers to false information that causes an individual to be deceived or to doubt the truth, and fake news is only effective if it actually deceives or confuses consumers. Zhou and Zafarani [31] proposed a broad definition (“Fake news is false news.”) that encompasses false online content and a narrow definition (“Fake news is intentionally and verifiably false news published by a news outlet.”). The narrow definition is valid from the fake news creation perspective. However, given that fake news creators and consumers are now interchangeable (e.g., news consumers also play the role of gatekeeper for fake news propagation), it has become important to understand and investigate fake news from the consumption perspective as well. Thus, in this paper, we use the broad definition of fake news.

Our motivation for considering both news creation and consumption in fake news research was based on a trend analysis. We collected 2,277 fake news-related articles using four keywords (i.e., fake news, false information, misinformation, rumor) to identify longitudinal trends in fake news research from 2010 to 2020. The data collection was conducted from July to September 2020. The criterion for inclusion was whether any of these keywords appeared in the title or abstract. To reflect diverse research backgrounds/domains, we considered six primary publishers (ACM, IEEE, arXiv, APA, ELSEVIER, and Wiley). The number of papers collected for each publisher is as follows: 852 IEEE (37%), 639 ACM (28%), 463 ELSEVIER (20%), 142 arXiv (7%), 141 Wiley (6%), 40 APA (2%). We excluded 59 papers that did not have an abstract and used 2,218 papers for the analysis. We then randomly chose 200 papers, and two coders conducted manual inspection and categorization. Inter-coder reliability was verified with Cohen’s Kappa. The scores for each main/sub-category were higher than 0.72 (min: 0.72, max: 0.95, avg: 0.85), indicating that the inter-coder reliability lies between “substantial” and “perfect” [44]. Through the coding procedure, we excluded non-English studies (n = 12) and reports on study protocols only (n = 6), leaving 182 papers included in the synthesis. The PRISMA flow chart depicts the number of articles identified, included, and excluded (see Fig 1).
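As a concrete aside, the agreement statistic used above can be reproduced in a few lines. The sketch below is our own illustration, not the authors' analysis code; it applies scikit-learn's cohen_kappa_score to two hypothetical coders' labels for ten papers:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two independent coders categorizing the
# same ten papers into the study's two main categories.
coder_a = ["creation", "creation", "consumption", "creation", "consumption",
           "creation", "creation", "consumption", "creation", "creation"]
coder_b = ["creation", "creation", "consumption", "creation", "creation",
           "creation", "creation", "consumption", "creation", "creation"]

# Cohen's kappa corrects raw agreement for chance agreement; on the
# conventional Landis & Koch scale, 0.61-0.80 reads as "substantial".
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.74 for this toy example
```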

The papers were categorized into two main categories: (1) creation (studies with efforts to detect fake news or mitigate its spread) and (2) consumption (studies that reported the social impacts of fake news on individuals or societies and how to appropriately handle fake news). Each main category was then classified into sub-categories. Fig 4 shows the number of papers by year and the overall trend of fake news research. It appears that the consumption perspective of fake news still has not received sufficient attention compared with the creation perspective (Fig 4(a)). Fake news studies have exploded since the 2016 US Presidential Election, and the increase in fake news research continues. In the creation category, the majority of papers (135 out of 158; 85%) related to false information (e.g., fake news, rumor, clickbait, spam) detection models (Fig 4(b)). In the consumption category, much research pertains to data-driven fake news trend analysis (18 out of 42; 43%) or fake content consumption behavior (16 out of 42; 38%), including studies on media literacy education or echo chamber awareness (Fig 4(c)).

[Fig 4. We collected 2,277 fake news-related papers and randomly chose and categorized 200 papers. Each marker indicates the number of fake news studies per type published in a given year. Fig 4(a) shows the research trend of news creation and consumption (main categories). Fig 4(b) and 4(c) show trends for the sub-categories of news creation and consumption. In Fig 4(b), “Miscellaneous” includes studies on stance/propaganda detection and a survey paper. In Fig 4(c), “Data-driven fake news trend analysis” mainly covers studies reporting the influence of fake news that spread around specific political/social events (e.g., fake news in the 2016 Presidential Election, rumors on Weibo after the 2015 Tianjin explosions). “Conspiracy theory” refers to an unverified rumor that was passed on to the public. https://doi.org/10.1371/journal.pone.0260080.g004]

3 Fake news creation

Fake news is no longer merely propaganda spread by inflammatory politicians; it is also made for financial benefit or personal enjoyment [45]. With the development of social media platforms, people often create completely false information for reasons beyond satire. Further, there is a vicious cycle in which this false information is abused by politicians and agitators.

Fake news creators indiscriminately produce fake news while considering the behavioral and psychological characteristics of today’s news consumers [46]. For instance, the sleeper effect [47] refers to a phenomenon in which the persuasive effect of a message increases over time, even though its source shows low credibility. In other words, after a long period of time, memories of the source fade and only the content tends to be remembered, regardless of the source’s reliability. Through this process, less reliable information becomes more persuasive over time. Fake news creators have effectively created and propagated fake news by targeting the public’s preference for consuming news through peripheral processing routes [35, 48].

Peripheral routes are described by the elaboration likelihood model (ELM) [49], one of the representative psychological theories of persuasive message processing. According to the ELM, the processing of a persuasive message can follow either the central or the peripheral route, depending on the level of involvement. If the message recipient puts a great deal of cognitive effort into processing, the central route is taken. If processing of the message is limited due to personal characteristics or distractions, the peripheral route is taken. Through the peripheral route, a decision is made based on secondary cues (e.g., speakers, comments) rather than the logic or strength of the argument.

Wang et al. [50] demonstrated that most of the links shared or mentioned on social media have never even been clicked. This implies that many people perceive and process information in only a fragmentary way, such as via news headlines and the people sharing the news, rather than considering the logical flow of the news content.

In this section, we closely examine the external and internal factors affecting fake news creation, as well as the research efforts carried out to mitigate their negative consequences from the fake news creation perspective.

3.1 External factors: Fake news creation facilitators

We identified two external factors that facilitate fake news creation and propagation: (1) the unification of news creation, consumption, and distribution, and (2) the misuse of AI technology (see Fig 5).

[Fig 5. Two external factors that facilitate fake news creation: the unification of news and the misuse of AI technology. https://doi.org/10.1371/journal.pone.0260080.g005]

3.1.1 The unification of news creation, consumption, and distribution.

The public’s perception of news and the major media of news consumption have gradually changed. The public no longer passively consumes news exclusively through traditional news organizations with specific formats (e.g., the inverted pyramid style, verified sources), nor views news simply as a medium for information acquisition. The public’s active news consumption behaviors began in earnest with the advent of citizen journalism, which implemented journalistic behavior based on citizen participation [51], and became commonplace with the emergence of social media. As a result, the public began to prefer interactive media, in which they can acquire new information, offer their opinions, and discuss the news with other news consumers. This environment has motivated the public to create content about their beliefs and deliver that content to many people as “news.” For example, a video of a police crackdown posted on social media recently spread around the world, influencing protesters and civic movements, and was only later reported by the mainstream media [52].

The boundaries between professional journalists and amateurs, as well as between news consumers and creators, are disappearing. This has led to a potential increase in deceptive communications, leaving news consumers suspicious and prone to misinterpreting reality. Online platforms (e.g., YouTube, Facebook) that allow users to freely produce and distribute content have been growing significantly. As a result, fake news content can be used to attract secondary income (e.g., multinational enterprises’ advertising fees), which accelerates fake news creation and propagation. An environment in which the public can consume only news that suits their preferences and personal cognitive biases has made it much easier for fake news creators to achieve their specific purposes (e.g., supporting a certain political party or a candidate they favor).

3.1.2 The misuse of AI technology.

The development of AI technology has made it easier to build and use tools for creating fake news, and many studies have confirmed the impact of three such technologies on social networks and democracy over the past decade: (1) social bots, (2) trolls, and (3) fake media.

3.1.2.1 Social bots. Shao et al. [53] analyzed the pattern of fake news spread and confirmed that social bots play a significant role in fake news propagation, with automated social bot accounts heavily involved in the initial stage of spreading fake news. In general, it is not easy for the public to determine whether such accounts are people or bots. In addition, social bots are not illegal tools, and many companies legally purchase them as part of their marketing; thus, it is not easy to curb the use of social bots systematically.

3.1.2.2 Trolls. The term “trolls” refers to people who deliberately cause conflict or division by uploading inflammatory, provocative content or unrelated posts to online communities. They aim to stimulate people’s emotions or beliefs and hinder mature discussion. For example, the Russian troll army has been active on social media to advance its political agenda and cause social turmoil in the US [54]. Zannettou et al. [55] confirmed how effectively the Russian troll army has spread fake news URLs on Twitter and its significant impact on making other Twitter users believe misleading information.

3.1.2.3 Fake media. It is now possible to manipulate or reproduce content in 2D or even 3D through AI technology. In particular, the advent of fake news using Deepfake technology (combining various images with an original video to generate a different video) has raised a major social concern that had not been imagined before. Due to the popularity of image and video sharing on social media, such media types have become a dominant form of news consumption, and Deepfake technology itself is becoming more advanced and is being applied to images and videos in a variety of domains. We witnessed a video clip of former US President Barack Obama criticizing Donald Trump, which was manipulated by the US online media company BuzzFeed to highlight the influence and danger of Deepfake, causing substantial social confusion [56].

3.2 Internal factors: Fake news creation purposes

We identified three main purposes for fake news creation: (1) ideological purposes, (2) monetary purposes, and (3) fear/panic reduction.

3.2.1 Ideological purpose.

Fake news has been created and propagated for political purposes by individuals or groups seeking to benefit the parties or candidates they support or to undermine those on the opposing side. Fake news with this political purpose has been shown to negatively influence people and society. For instance, Russia created a fake Facebook account that caused many political disputes and enhanced polarization, affecting the 2016 US Presidential Election [57]. As polarization has intensified, there has also been a trend in the US of “unfriending” people who have different political tendencies [58]. This has led the public to decide whether to trust news regardless of its factuality and has worsened in-group biases. During the Brexit campaign in the UK, many selective news articles were exposed on Facebook, and social bots and trolls were also confirmed to have been involved in shaping public opinion [59, 60].

3.2.2 Monetary purpose.

Financial benefit is another strong motivation for many fake news creators [34, 61]. Fake news websites usually reach the public through social media and make profits through posted advertisements. The majority of fake websites focus on earning advertising revenue by spreading fake news that attracts readers’ attention, rather than on political goals. For example, during the 2016 US Presidential Election, young people in Macedonia in their teens and twenties used content from some extremely right-leaning blogs in the US to mass-produce fake news, earning huge advertising revenues [62]. This is also why fake news creators use provocative titles, such as clickbait headlines, to induce clicks and attempt to produce as many fake news articles as possible.

3.2.3 Fear and panic reduction.

In general, when epidemics spread around the world, rumors of absurd and false medical tips spread rapidly on social media. When there is a lack of verified information, people feel great anxiety and fear and easily believe such tips, regardless of whether they are true [63, 64]. The term infodemic, which first appeared during the 2003 SARS pandemic, describes this phenomenon [65]. Regarding COVID-19, health authorities have announced that preventing the creation and propagation of fake news about the virus is as important as alleviating the contagious power of COVID-19 itself [66, 67]. The spread of fake news in the absence of verified information has become more common around health-related social issues (e.g., infectious diseases), natural disasters, etc. For example, people with disorders affecting cognition (e.g., neurodegenerative disorders) tend to easily believe unverified medical news [68–70]. Robledo and Jankovic [68] confirmed that many fake or exaggerated medical articles mislead people with Parkinson’s disease by giving false hope through unfounded claims. Another example: when a wildfire broke out in Australia in 2019, a rumor that climate activists had set the fire to raise awareness of climate change quickly spread as fake news [71]. As a result, people became suspicious and tended to believe that the causes of climate change (e.g., global warming) may not be related to humans, despite scientific evidence and research data.

3.3 Fake news detection and prevention

The main purpose of fake news creation is to confuse or deceive people regardless of topic, social atmosphere, or timing. Because of this purpose, fake news tends to have similar frames and structural patterns. Many studies have attempted to mitigate the spread of fake news based on these identifiable patterns. In particular, research on developing computational models that detect fake information (text/images/videos) based on machine or deep learning techniques has been actively conducted, as summarized in Table 1. Other modeling studies address the credibility of weblogs [84, 85], communication quality [88], susceptibility level [90], and political stance [86, 87]. The table is intended to characterize the scope and direction of research on detecting fake information (e.g., the features employed in each model), not to present an exhaustive list.

[Table 1. Summary of studies on computational models for detecting fake information. https://doi.org/10.1371/journal.pone.0260080.t001]

3.3.1 Fake text information detection.

Research has considered many text-based features, such as structural information (e.g., website URLs and headlines in all capital letters or with exclamation marks) and linguistic information (e.g., grammar, spelling, and punctuation errors) about the news. Research has also used the sentiment of news articles, the frequency of the words used, information about the users who left comments on the articles, and the social network information among users (who were connected through activities of commenting, replying, liking, or following) as key features for model development. These text-based models have been developed not only for fake news articles but also for other types of fake information, such as clickbait, fake reviews, spam, and spammers. Many of the models developed in this context performed a binary classification that distinguished between fake and non-fake articles, with accuracy ranging from 86% to 93%. Mainstream news articles were used to build most models, and some studies used articles on social media, such as Twitter [15, 17]. Some studies developed fake news detection models by extracting features from images, as well as text, in news articles [16, 17, 75].
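To make the text-based setup concrete, the following minimal Python sketch trains a binary fake/real classifier from TF-IDF word features with logistic regression. It illustrates the general approach described above, not any specific model from Table 1; the four inline headlines and their labels are hypothetical, and a real system would train on a large labeled corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled headlines (hypothetical); 1 = fake, 0 = real.
headlines = [
    "SHOCKING!!! Celebrity cures disease with one weird trick",
    "You won't BELIEVE what this politician just did",
    "Central bank raises interest rates by 0.25 percentage points",
    "City council approves budget for new public library",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each headline into a weighted bag-of-words vector;
# logistic regression then learns a linear decision boundary over it.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Predicted labels for unseen headlines (toy model; outputs indicative only).
print(model.predict(["Miracle food SHOCKS doctors"]))
print(model.predict(["Parliament passes infrastructure bill"]))
```

Published models in this line of work add the richer structural, sentiment, user, and network features listed above, but the fit/predict skeleton is the same.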

3.3.2 Fake visual media detection.

The generative adversarial network (GAN) is an unsupervised learning method that estimates the probability distribution of original data and allows an artificial neural network to produce samples from a similar distribution [109]. With the advancement of GANs, it has become possible to transform faces in images into those of others. However, photos of famous celebrities have been misused (e.g., distorted into pornographic videos), increasing concerns about the possible misuse of such technology [110] (e.g., creating rumors about a certain political candidate). To mitigate this, research has been conducted to develop detection models for fake images. Most studies developed binary classification models (fake image or not), and the accuracy of fake image detection models was high, ranging from 81% to 97%. However, challenges still exist. Unlike fake news detection models that employ fact-checking websites or mainstream news for data verification or ground truth, fake image detection models have been developed using the same or slightly modified image datasets (e.g., CelebA [97], FFHQ [99]), calling for the collection and preparation of a large amount of highly diverse data.
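For the visual side, the sketch below shows the skeleton of a binary fake-image classifier in PyTorch. The architecture and all names are our own minimal stand-ins; published detectors are much deeper and are trained on large collections of real and GAN-generated images (e.g., derived from CelebA or FFHQ).

```python
import torch
import torch.nn as nn

class FakeImageDetector(nn.Module):
    """Tiny CNN that maps an RGB image to a single fake/real logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling over the feature map
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logit; sigmoid gives P(fake)

# One training step with binary cross-entropy on a dummy batch.
model = FakeImageDetector()
images = torch.randn(8, 3, 128, 128)           # stand-in for real/GAN images
targets = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
loss = nn.BCEWithLogitsLoss()(model(images), targets)
loss.backward()
print(f"loss: {loss.item():.3f}")
```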

4 Fake news consumption

4.1 External factors: Fake news consumption circumstances

The implicit social contract between civil society and the media has gradually disintegrated in modern society, and accordingly, citizens’ trust in the media has begun to decline [111]. In addition, the growing number of digital media platforms has changed people’s news consumption environment. This change has increased the diversity of news content and the autonomy of information creation and sharing. At the same time, however, it has blurred the line between traditional mainstream media news and fake news in the Internet environment, contributing to polarization.

Here, we identified three external factors that have exposed the public to fake news: (1) the decline of trust in the mainstream media, (2) a high-choice media environment, and (3) the use of social media as a news platform.

4.1.1 Fall of mainstream media trust.

Misinformation and unverified or biased reports have gradually undermined the credibility of the mainstream media. According to the 2019 American mass media trust survey conducted by Gallup, only 13% of Americans said they trusted traditional mainstream media: newspapers or TV news [ 112 ]. The decline in traditional media trust is not only a problem for the US, but also a common concern in Europe and Asia [ 113 – 115 ].

4.1.2 High-choice media environment.

Over the past decade, news consumption channels have been radically diversified, and the mainstream has shifted from broadcasting and print media to mobile and social media environments. Despite the diversity of news consumption channels, personalized preferences and repetitive patterns have led people to be exposed to limited information and to consume that information ever more heavily [116]. This selective news consumption attitude has enhanced the polarization of the public across many multi-media environments [117]. In addition, the commercialization of digital platforms has created an environment in which cognitive bias can easily be strengthened. In other words, a digital platform based on recommendation algorithms conveniently continues to serve similar content once a given type of content has been consumed, as sketched below. As a result, users may easily fall into an echo chamber because they only access recommended content. A survey of 1,000 YouTube videos found that more than two-thirds of the videos contained content in favor of a particular candidate [118].
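The narrowing effect of such recommendation loops can be illustrated in miniature. The toy content-based recommender below (our own construction, not any platform's actual algorithm) ranks unseen items by cosine similarity to the user's consumption history, so each consumed item pulls future recommendations further toward the same topic:

```python
import numpy as np

# Toy item vectors over two topics: [politics, sports] (hypothetical).
items = {
    "partisan_clip_1":  np.array([0.9, 0.1]),
    "partisan_clip_2":  np.array([0.8, 0.2]),
    "match_highlights": np.array([0.1, 0.9]),
    "league_analysis":  np.array([0.2, 0.8]),
}

def recommend(history):
    """Rank unconsumed items by cosine similarity to the mean of consumed items."""
    profile = np.mean([items[h] for h in history], axis=0)
    def cos(v):
        return v @ profile / (np.linalg.norm(v) * np.linalg.norm(profile))
    return sorted((k for k in items if k not in history),
                  key=lambda k: -cos(items[k]))

# After consuming one partisan clip, similar content tops the ranking.
print(recommend(["partisan_clip_1"]))  # partisan_clip_2 ranks first
```

Each round of consumption updates the profile, so the ranking drifts further toward already-consumed topics, a minimal model of the filter bubble described above.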

News consumption on social media does not simply mean the delivery of messages from creators to consumers. The multi-directionality of social media has blurred the boundaries between information creators and consumers. In other words, users already interact with one another in various ways, and when a new interaction type emerges and is supported by the platform, users will display further new types of interaction, which will also influence the ways news information is consumed.

4.1.3 Use of social media as news platform.

Here we focus on the most widely used social media platforms (YouTube, Facebook, and Twitter), each of which has characteristics that encourage limited news consumption.

First, YouTube is the most unidirectional of the social media platforms. Many YouTube creators tend to convey arguments in a strong, definitive tone through their videos, and these content characteristics lead viewers to judge the objectivity of the information via non-verbal elements (e.g., the speaker, thumbnail, title, comments) rather than facts. Furthermore, many comments often support the content of the video, which may increase the chances of viewers accepting somewhat biased information. In addition, YouTube’s video recommendation algorithm continuously exposes users who watch certain news to other news containing the same or similar information. This push toward isolated content consumption could undermine viewers’ media literacy and is likely to create a screening effect that blocks users’ eyes and ears.

Second, Facebook obscures the details of news articles because the platform ostensibly shows only the title, the number of likes, and the comments on a post. Often, users have to click on the article and follow the URL to read it. This structure and content orientation on Facebook’s part creates obstacles that prevent users from checking the details of posts. As a result, users have become likely to make limited and biased judgments and to perceive content through provocative headlines and comments.

Third, the most salient feature of Twitter is anonymity, as Twitter asks users to create their own pseudonyms [119]. Posts are limited in length, and compared to other platforms, users can produce and spread information indiscriminately and anonymously, without others knowing who is behind an account [120, 121]. On the other hand, many accounts on Facebook operate under real names and generally share information with others who are friends or followers. On Twitter, information creators are not held accountable for anonymous information.

4.2 Internal factors: Cognitive mechanism

Due to the characteristics of the Internet and social media, people are accustomed to consuming information quickly, for example by reading only news headlines and checking the photos in news articles. This type of news consumption practice could lead people to judge news information mostly on the basis of their beliefs or values. This practice can make it easier for people to fall into an echo chamber and can deepen social confusion. We identified two internal factors affecting fake news consumption: (1) cognitive biases and (2) personal traits (see Fig 6).

[Fig 6. Internal factors affecting fake news consumption: cognitive biases and personal traits. https://doi.org/10.1371/journal.pone.0260080.g006]

4.2.1 Cognitive biases.

Cognitive bias is an observer effect that is broadly recognized in cognitive science and includes basic statistical and memory errors [8]. However, this bias may vary depending on which factors most strongly affect individual judgments and choices. We identified five cognitive biases that affect fake news consumption: confirmation bias, in-group bias, choice-supportive bias, cognitive dissonance, and the primacy effect.

Confirmation bias relates to a human tendency to seek out information in line with personal thoughts or beliefs, as well as to ignore information that goes against such beliefs. This stems from the human desire to be reaffirmed, rather than accept denials of one’s opinion or hypothesis. If the process of confirmation bias is repeated, a more solid belief is gradually formed, and the belief remains unchanged even after encountering logical and objective counterexamples. Evaluating information with an objective attitude is essential to properly investigating any social phenomenon. However, confirmation bias significantly hinders this. Kunda [ 122 ] discussed experiments that investigated the cognitive processes as a function of accuracy goals and directional goals. Her analysis demonstrated that people use different cognitive processes to achieve the two different goals. For those who pursue accuracy goals (reaching a “right conclusion”), information is used as a tool to determine whether they are right or not [ 123 ], and for those with directional goals (reaching a desirable conclusion), information is used as a tool to justify their claims. Thus, biased information processing is more frequently observed by people with directional goals [ 124 ].

People with directional goals have a desire to reach the conclusion they want. The more we emphasize the seriousness and omnipresence of fake news, the less people with directional goals can identify fake news. Moreover, their confirmation bias on social media could result in an echo chamber, triggering a fragmentation of public opinion in the media. The algorithms of media platforms further strengthen the tendency toward biased information consumption (e.g., the filter bubble).

In-group bias is a phenomenon in which an individual favors a group that he or she belongs to. In-group bias has two causes [125]. One is the categorization process, which exaggerates the similarities between members within one category (the internal group) and the differences from others (the external groups). Consequently, positive reactions towards the internal group and negative reactions (e.g., hostility) towards the external group are both increased. The other is self-respect based on social identity theory: to positively evaluate the internal group, a member tends to perceive other group members as similar to himself or herself.

In-group bias has a significant impact on fake news consumption because of radical changes in the media environment [126]. The public recognizes and forms groups around issues through social media. The emotions and intentions of such online groups can easily be transferred to, or develop into, offline activities, such as demonstrations and rallies. Information exchange within such internal groups proceeds similarly to the situation with confirmation bias. If confirmation bias is holding to one’s own beliefs, in-group bias is equating the beliefs of one’s group with one’s own.

Choice-supportive bias refers to an individual’s tendency to justify his or her decision by highlighting evidence that he or she did not actually consider in making the decision [127]. For instance, people sometimes have no particular reason when they purchase a certain brand of product or service, or support a particular politician or political party, yet they emphasize that their choices at the time were right and inevitable. They also tend to focus more on positive aspects than on negative effects or consequences to justify their choices. However, these positive aspects can be distorted because they are mainly based on memory. Thus, choice-supportive bias can be regarded as a cognitive error caused by memory distortion.

Choice-supportive behavior serves self-justification, which usually occurs in the context of external factors (e.g., maintaining social status or relationships) [7]. For example, if people express a certain political opinion within a social group, they may seek information with which to justify that opinion and minimize its flaws. In this process, people may accept fake news as a supporting source for their opinions.

Cognitive dissonance is based on the notion that psychological tension occurs when an individual holds two inconsistent perceptions [128]. Humans have a desire to identify and resolve this psychological tension. Regarding fake news consumption, people easily accept fake news if it is aligned with their beliefs or faith. However, if such news works against their beliefs or faith, people define even real news as fake and consume biased information in order to avoid cognitive dissonance. This is quite similar to confirmation bias. Selective exposure to biased information intensifies its extent and impact on social media. In these circumstances, an individual’s cognitive state is likely to be formed by information from unclear sources, which can be seen as a negative state of perception. In that case, information consumers selectively consume only information that harmonizes with those negative perceptions.

The primacy effect means that information presented earlier has a stronger effect on memory and decision-making than information presented later [ 129 ]. Interference theory [ 130 ] is often cited as a theoretical basis for the primacy effect: the impression formed by information presented earlier influences subsequent judgments and the process of forming the next impression.

The significance of the primacy effect for fake news consumption is that it can be the starting point for biased cognitive processes. If an individual first encounters an issue through fake news and does not subject that information to critical thinking, he or she may form false attitudes about the issue [ 131 , 132 ]. Fake news is a complex combination of fact and fiction, making it difficult for information consumers to judge correctly whether the news is right or wrong. These cognitive biases induce the selective collection of information that feels valid to news consumers, rather than information that is actually valid.

4.2.2 Personal traits.

Two aspects of personal characteristics or traits can influence one's news consumption behaviors: susceptibility and personality.

4.2.2.1 Susceptibility . The most prominent feature of social media is that consumers can also be creators, so the boundaries between the creators and consumers of information become unclear. New media literacy (i.e., the ability to critically and appropriately consume messages across a variety of digital media channels, such as social media) can have a significant impact on the degree to which fake news is consumed and disseminated [ 133 , 134 ]. In other words, the higher one's new media literacy, the higher the probability that one takes a critical standpoint toward fake news. The susceptibility level is also related to one's selective news consumption behaviors. Bessi et al. [ 35 ] studied misinformation on Facebook and found that users who frequently interact with alternative media tend to interact with intentionally false claims more often.

Personality refers to an individual's traits or behavioral style. Many scholars agree that personality can be largely divided into five categories (the Big Five)—extraversion, agreeableness, neuroticism, openness, and conscientiousness [ 135 , 136 ]—and have used them to understand the relationship between personality and news consumption.

Extraversion is related to active information use. Previous studies have confirmed that extraverts tend to use social media mainly to acquire information [ 137 ] and are better at determining the factuality of news on social media [ 138 ]. Furthermore, people with high agreeableness, which refers to how friendly, warm, and tactful a person is, tend to trust real news more than fake news [ 138 ]. Neuroticism is a broad personality dimension representing the degree to which a person experiences the world as distressing, threatening, and unsafe. People with high neuroticism usually display negative emotions or negative information-sharing behavior [ 139 ], and neuroticism is positively related to fake news consumption [ 138 ]. Openness refers to the degree to which one enjoys new experiences. High openness is associated with high curiosity and engagement in learning [ 140 ], which enhances critical thinking ability and decreases the negative effects of fake news consumption [ 138 , 141 ]. Conscientiousness refers to a person's work ethic, orderliness, and thoroughness [ 142 ]. People with high conscientiousness tend to regard social media use as a distraction from their tasks [ 143 – 145 ].

4.3 Fake news awareness and prevention

4.3.1 Decision-making support tools.

News on social media does not go through a verification process because of the high degree of freedom to create, share, and access information. One study predicted that by 2022 most citizens in advanced countries would consume more fake information than real information [ 146 ]. This indicates that the potential personal and social damage from fake news may increase. Paradoxically, many countries that suffer from fake news problems strongly guarantee freedom of expression under their constitutions; thus, it would be very difficult to block all production and distribution of fake news through laws and regulations. In this respect, it is necessary to pursue not only technical efforts to detect and prevent the production and dissemination of fake news but also social efforts to make news consumers aware of the characteristics of online fake information.

Inoculation theory holds that human attitudes and beliefs can form psychological resistance when people are properly exposed to counterarguments in advance. To build the ability to strongly resist an argument, it is necessary to first be exposed to, and refute, a weakened version of the same sort of content. Doris-Down et al. [ 147 ] asked people from different political backgrounds to communicate directly through a mobile app and investigated whether this alleviated their echo-chamber tendencies. As a result, participants changed: they realized that they had much in common with people of conflicting political backgrounds and that what they had thought of as differences were actually trivial. Karduni et al. [ 148 ] provided comprehensive information (e.g., connections among news accounts and a summary of location entities) to study participants through a visual analytic system they developed and examined how the participants accepted fake news. Another study examined how people determine the veracity of news by building a system similar to social media and analyzing participants' eye movements while they read fake news articles [ 28 ].

Some research has applied inoculation theory to gamification. The "Bad News" game was designed to proactively warn people and expose them to a certain amount of false information through interactions with the gamified system [ 29 , 149 ]. The results confirmed the high effectiveness of inoculation through the game and highlighted the need to educate people, through computer systems and games, about how to respond appropriately to misinformation [ 29 ].

4.3.2 Fake information propagation analysis.

Fake information tends to show certain patterns in terms of consumption and propagation, and many studies have attempted to identify these propagation patterns (e.g., the count of unique users, the depth of a network) [ 150 – 153 ], as sketched below.
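
To make these propagation features concrete, the following minimal Python sketch computes the unique-user count, depth, and breadth of a toy reshare cascade. The edge list and the use of the networkx library are illustrative assumptions, not details taken from the cited studies.

```python
import networkx as nx

# Hypothetical reshare cascade: an edge (u, v) means user v reshared from user u.
edges = [("root", "a"), ("root", "b"), ("a", "c"), ("a", "d"), ("c", "e")]
cascade = nx.DiGraph(edges)

unique_users = cascade.number_of_nodes()
levels = nx.single_source_shortest_path_length(cascade, "root")
depth = max(levels.values())  # length of the longest reshare chain
breadth = max(list(levels.values()).count(k) for k in set(levels.values()))  # widest level

print(unique_users, depth, breadth)  # 6 3 2
```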

4.3.2.1 Psychological characteristics . The theoretical foundation of research examining the diffusion patterns of fake news lies in psychology [ 154 , 155 ] because psychological theories explain why and how people react to fake news. For instance, a news consumer who comes across fake news will first have doubts, judge the news against his or her background knowledge, and want to clarify the sources cited in the news. This series of processes ends when sufficient evidence has been collected, at which point the consumer accepts, ignores, or remains suspicious of the news. The psychological elements at work in this process are doubt, negation, conjecture, and skepticism [ 156 ].

4.3.2.2 Temporal characteristics . Fake news exhibits different propagation patterns from real news. The propagation of real news tends to decrease slowly over time after a single peak in public interest, whereas fake news has no fixed timing for peak consumption, and multiple peaks appear in many cases [ 157 ]. Tambuscio et al. [ 151 ] showed that the pattern of rumor spread resembles an existing epidemic model [ 158 ]. Their empirical observations confirmed that the same fake news reappears periodically and infects news consumers. For example, rumors carrying the malicious political message that "Obama is a Muslim" were still being spread a decade later [ 159 ]. This pattern of proliferation and consumption suggests that fake news may be consumed for a certain purpose.
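
As a worked illustration of the epidemic analogy, the sketch below runs a simple SIRS-style simulation with assumed parameters; it does not reproduce the exact model of Tambuscio et al. [ 151 ]. Because fact-checked users can forget the correction and become susceptible again, the share of believers settles at a persistent endemic level rather than dying out after a single peak, consistent with the periodic reappearance of the same fake news.

```python
# SIRS-style hoax dynamics with assumed rates (not the authors' parameters).
beta, mu, delta = 0.3, 0.1, 0.05  # spreading, fact-checking, forgetting rates
s, i, r = 0.99, 0.01, 0.0         # susceptible, believing, fact-checked fractions

for t in range(300):
    new_i = beta * s * i   # susceptible users who adopt the hoax
    new_r = mu * i         # believers corrected by fact-checking
    new_s = delta * r      # corrected users who forget and become susceptible again
    s, i, r = s - new_i + new_s, i + new_i - new_r, r + new_r - new_s

print(f"believing fraction after 300 steps: {i:.3f}")  # stays well above zero
```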

5 A mental-model approach

We have examined news consumers' susceptibility to fake news due to internal and external factors, including personal traits, cognitive biases, and contexts. Beyond investigating individual factors, we seek to understand people's susceptibility to misinformation by considering their internal representations and external environments holistically [ 5 ]. Specifically, we propose to comprehend people's mental models of fake news. In this section, we first briefly introduce mental models and discuss their connection to misinformation. Then, we discuss the potential contribution of a mental-model approach to the field of misinformation.

5.1 Mental models

A mental model is an internal representation or simulation that people carry in their minds of how the world works [ 160 , 161 ]. Typically, mental models are constructed in people's working memory, where information from long-term memory and from the environment is combined [ 162 ]. Individuals represent complex phenomena with some abstraction, based on their own experiences and understanding of the context. People rely on mental models to understand and predict their interactions with environments, artifacts, computing systems, and other individuals [ 163 , 164 ]. Generally, an individual's ability to represent continually changing environments is limited and unique. Thus, mental models tend to be functional and dynamic but not necessarily accurate or complete [ 163 , 165 ]. Mental models also differ between groups, in particular between experts and novices [ 164 , 166 ].

5.2 Mental models and misinformation

Mental models have been used to understand human behavior in spatial navigation [ 167 ], learning [ 168 , 169 ], deductive reasoning [ 170 ], mental representations of real or imagined situations [ 171 ], risk communication [ 172 ], and usable cybersecurity and privacy [ 166 , 173 , 174 ]. People use mental models to facilitate their comprehension, judgments, and actions, and these models can be the basis of individual behaviors. In particular, the connection between a mental-model approach and misinformation has been revealed in risk communication regarding vaccines [ 175 , 176 ]. For example, Downs et al. [ 176 ] interviewed 30 parents from three US cities to understand their mental models of vaccination for their children aged 18 to 23 months. The results revealed two mental models of vaccination: (1) health oriented : parents who focused on health-oriented topics trusted anecdotal communication more than statistical arguments; and (2) risk oriented : parents with some knowledge of vaccine mechanisms trusted communication with statistical arguments more than anecdotal information. The authors also found that many parents, even those favorable to vaccination, can be confused by the ongoing debate, suggesting that their mental models are somewhat incomplete.

5.3 Potential contributions of a mental-model approach

Recognizing and dealing with the plurality of news consumers' perceptions, cognition, and actions is currently considered a key aspect of misinformation research. Thus, a mental-model approach could significantly improve our understanding of people's susceptibility to misinformation, as well as inform the development of mechanisms to mitigate misinformation.

One possible direction is to investigate demographic differences in the context of mental models. As more Americans have adopted social media, social media users have become more representative of the population. Usage by older adults has increased in recent years, from about 12% in 2012 to about 35% in 2016 ( https://www.pewresearch.org/internet/fact-sheet/social-media/ ). Guess et al. (2019) analyzed participants' profiles and their sharing activity on Facebook during the 2016 US presidential campaign and revealed a strong age effect. Controlling for the effects of ideology and education, their results showed that Facebook users over 65 years old shared nearly seven times as many articles from fake news domains as those aged 18 to 29, and about 2.3 times as many as those aged 45 to 65.

Besides older adults, college students have been shown to be more susceptible to misinformation [ 177 ]. We can identify which mental models a particular age group ascribes to and compare the incompleteness or incorrectness of mental models by age. Such comparisons might also inform the design of general mechanisms to mitigate misinformation, independent of the different concrete mental models possessed by different types of users.

Users' actions and decisions are directed by their mental models. We can also explore news consumers' mental models to discover unanticipated and potentially risky human-system interactions, which would inform the development and design of user interactions and educational endeavors to mitigate misinformation.

A mental-model approach supplies an important, and as yet unconsidered, dimension to fake news research. To date, research on people's susceptibility to fake news on social media has lagged behind research on the computational aspects of fake news. Scholars have not considered news consumers' susceptibility across the spectrum of their internal representations and external environments. An investigation from the mental-model perspective is a step toward addressing this need.

6 Discussion and future work

In this section, we highlight the importance of balancing research efforts on fake news creation and consumption and discuss potential future directions of fake news research.

6.1 Leveraging insights of social science to model development

Fake news detection models have achieved strong performance. The feature groups used in such models are diverse, including linguistic, visual, sentiment, topic, user, and network features, and many models combine multiple groups to increase performance. Using datasets of different sizes and characteristics, research has demonstrated the effectiveness of these models through comparative analysis. However, much of this research has relied on features that are easily quantifiable, and many of them have an unclear justification or rationale for being used in modeling. For example, what is the relationship between the use of question marks (?), exclamation marks (!), or quotation marks ("…") and fake news? What does it mean that a longer description relates to news trustworthiness? There are also many important aspects that could serve as additional modeling features but have not yet been quantified. For example, journalistic style is an important characteristic that determines a level of information credibility [ 156 ], but it is challenging to quantify accurately and reliably. There are many intentions (e.g., ideological standpoint, financial gain, panic creation) that authors may implicitly or explicitly display in a post, but measuring them is not straightforward. Social science research can play a role here by providing valid research methodologies to measure such subjective perceptions or notions, considering their various types and characteristics depending on the context or environment. Some efforts in this direction include quantifying salient factors of people's decision-making identified in social science research and demonstrating the effectiveness of these factors in improving model performance and interpreting model results [ 70 ]. Yet more research that applies socio-technical perspectives to model development and application is needed to better study the complex characteristics of fake news.
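
As a small illustration of how such easily quantifiable cues become model inputs, the sketch below turns the punctuation marks questioned above into numeric features. The example texts, the feature choices, and the normalization are assumptions for demonstration, not features validated by the cited work.

```python
import numpy as np

def punctuation_features(text: str) -> list[float]:
    n = max(len(text), 1)
    return [
        text.count("?") / n,  # question-mark density
        text.count("!") / n,  # exclamation-mark density
        (text.count('"') + text.count("\u201c") + text.count("\u201d")) / n,  # quotation marks
        float(len(text.split())),  # length as a crude proxy for level of detail
    ]

texts = ["BREAKING!!! You won't believe this?!",
         "The ministry published its quarterly report."]
X = np.array([punctuation_features(t) for t in texts])  # feature matrix for any classifier
print(X.shape)  # (2, 4)
```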

6.1.1 Future direction.

Insights from social science may help develop transparent and applicable fake news detection models. Such socio-technical models may allow news consumers to better understand fake news detection results and their application, as well as to take more appropriate actions to control the fake news phenomenon.

6.2 Lack of research on fake news consumption

Regarding fake news consumption, we confirmed that only a few studies involve the development of web- or mobile-based systems to help consumers become aware of the possible dangers of fake news. These studies [ 28 , 29 , 147 , 148 ] tried to demonstrate the feasibility of the developed self-awareness systems through user studies. However, due to the limited number of study participants (min: 11, max: 60) and their lack of demographic diversity (e.g., only college students from one school or from the psychology research pool at the authors' institution were recruited), the generalizability and applicability of these systems remain questionable. On the other hand, research developing fake news detection models or network analyses to identify patterns of fake news propagation has been relatively active. These results can be used to identify people (or entities) who intentionally create malicious fake content; however, it is still challenging to restrict people who had not previously shown any indication of sharing or creating fake information but later manipulated real news into fake news or disseminated fake news out of malicious intention or cognitive bias.

In other words, although fake news detection models have shown promising performance, their influence may be exerted only in limited cases. This is because fake news detection models rely heavily on data labeled as fake by fact-checking institutions or sites. If someone manipulates news that has not been covered by fact-checking, the format or characteristics of the manipulated news may differ from the conventional features identified and managed by the detection model, and such differences may not be captured. Therefore, to prevent the fake news phenomenon more effectively, research needs to consider changes in news consumption.

6.2.1 Future direction.

It may be desirable to support people in recognizing that their news consumption behaviors (e.g., liking, commenting, sharing) can have a significant ripple effect. It would be helpful to develop a system that tracks people's news consumption and creation activities, measures the similarities and differences between those activities, and presents the resulting behaviors or patterns to people, as sketched below.
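
A minimal sketch of that idea, under the assumption that each user's activity can be summarized as counts of likes, comments, and shares per topic, is to compare users with cosine similarity; the vectors below are made up for illustration.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    # Cosine similarity between two activity-count vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical per-user counts: [like, comment, share] on topic 1, then topic 2.
user_a = np.array([12, 3, 40, 1, 0, 2])
user_b = np.array([10, 4, 35, 0, 1, 3])

print(f"consumption-pattern similarity: {cosine(user_a, user_b):.2f}")
```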

6.3 Limited coverage of fact-checking websites and regulatory approach

Some well-known fact-checking websites (e.g., snopes.com, politifact.com) cover news shared mostly on the Internet and label the authenticity or deficiencies of the content (e.g., miscaptioned, legend, misattributed). However, these websites have limited coverage in that they serve only those who are willing to check the veracity of certain news articles. Social media platforms have been making continuous efforts to mitigate the spread of fake news. For example, Facebook gives content that fact-checkers have assessed as false relatively less exposure in news feeds or displays warning indicators on it [ 178 ]. Instagram has also changed the way warning labels are displayed when users attempt to view content that has been assessed as false [ 179 ]. However, this type of interface could lead news consumers to rely on algorithmic decision-making rather than self-judgment, because these ostensible regulations (e.g., warning labels) tend to lack transparency about the decision. As explained previously, this is related to filter bubbles. Therefore, it is important to provide a clearer and more transparent communicative interface for news consumers to access and understand the information underlying the algorithmic results.

6.3.1 Future direction.

It is necessary to create a news consumption environment that provides wider coverage of fake news and more transparent information about algorithmic decisions on news credibility. This will help news consumers preemptively avoid consuming fake news and contribute more to preventing its propagation. Consumers can also make more appropriate and accurate decisions based on their understanding of the news.

6.4 New media literacy

With the diversification of news channels, we can easily consume news. However, we are also in a media environment that asks us to self-critically verify news content (e.g., whether the news title reads like clickbait, whether the title and content are related), which in reality is hard to do. Moreover, on social media, news consumers can be news creators or reproducers, and during this process news information can be changed according to a consumer's beliefs or interests. A problem here is that people may not know how to verify news content or may not be aware that the information could be distorted or biased. As the news consumption environment changes rapidly and faces the modern media deluge, media literacy education becomes highly important. Media literacy refers to the ability to decipher media content and, in a broad sense, to understand the principles of media operation and media content sensibly and critically, and in turn to utilize and creatively reproduce content. Being a "lazy thinker" makes one more susceptible to fake news than having a "partisan bias" does [ 32 ]. As "screen time" (i.e., time spent looking at smartphone, computer, or television screens) has become more common, people are consuming only stimulating (e.g., sensually pleasurable and exciting) information [ 180 ]. This could gradually lower one's capacity for critical, reasonable thinking, leading to wrong judgments and actions. In France, when the fake news problem became more serious, great efforts were made to establish the "European Media Literacy Week" in schools [ 181 ]. The US is also making legislative efforts to add media literacy to the general education curriculum [ 182 ]. However, the acquisition of new media literacy through education may be limited to people in school (e.g., young students) and would be challenging to expand to wider populations. Thus, there is also a need for supplementary tools and research efforts to support more people in critically interpreting and appropriately consuming news.

In addition, more critical social attention is needed because visual content (e.g., images, videos), which was once naturally accepted as fact, can be easily manipulated in a malicious fashion and still look very natural. People increasingly prefer watching YouTube videos for news consumption over reading news articles. Visual content makes it relatively easy for news consumers to trust the content compared with text-based information, and information can be obtained simply by playing a video. Since visual content will become an even more dominant medium in future news consumption, educating and inoculating news consumers against the potential threats of fake information in such media is important. More attention and research are needed on technology supporting awareness of fake visual content.

6.4.1 Future direction.

Research in both computer science and social science should find ways (e.g., developing a game-based education system or curriculum) to help news consumers become aware of their news consumption practices and maintain sound news consumption behaviors.

7 Conclusion

We presented a comprehensive summary of fake news research through the lenses of news creation and consumption. The trend analysis indicated growing interest in fake news research and a heavy research focus on news creation compared with news consumption. By looking into internal and external factors, we unpacked the characteristics of fake news creation and consumption and presented the use of people's mental models to better understand their susceptibility to misinformation. Based on the reviews, we suggested four future directions for fake news research—(1) socio-technical model development using insights from social science, (2) in-depth understanding of news consumption behaviors, (3) preemptive decision-making and action support, and (4) educational, new media literacy support—as ways to reduce the gaps between news creation and consumption and between computer science and social science research, and to support healthy news environments.

Supporting information

S1 Checklist.

https://doi.org/10.1371/journal.pone.0260080.s001

  • 2. Goldman R. Reading fake news, Pakistani minister directs nuclear threat at Israel. The New York Times . 2016;24.
  • 6. Lévy P, Bononno R. Collective intelligence: Mankind’s emerging world in cyberspace. Perseus Books; 1997.
  • 11. Jamieson KH, Cappella JN. Echo chamber: Rush Limbaugh and the conservative media establishment. Oxford University Press; 2008.
  • 14. Shu K, Cui L, Wang S, Lee D, Liu H. dEFEND: Explainable fake news detection. In: In Proc. of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD); 2019. p. 395–405.
  • 15. Ruchansky N, Seo S, Liu Y. Csi: A hybrid deep model for fake news detection. In: In Proc. of the 2017 ACM on Conference on Information and Knowledge Management (CIKM); 2017. p. 797–806.
  • 16. Cui L, Wang S, Lee D. Same: sentiment-aware multi-modal embedding for detecting fake news. In: In Proc. of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM); 2019. p. 41–48.
  • 17. Wang Y, Ma F, Jin Z, Yuan Y, Xun G, Jha K, et al. Eann: Event adversarial neural networks for multi-modal fake news detection. In: In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data mining (KDD); 2018. p. 849–857.
  • 18. Nørregaard J, Horne BD, Adalı S. Nela-gt-2018: A large multi-labelled news for the study of misinformation in news articles. In: In Proc. of the International AAAI Conference on Web and Social Media (ICWSM). vol. 13; 2019. p. 630–638.
  • 20. Nguyen AT, Kharosekar A, Krishnan S, Krishnan S, Tate E, Wallace BC, et al. Believe it or not: Designing a human-ai partnership for mixed-initiative fact-checking. In: In Proc. of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST); 2018. p. 189–199.
  • 23. Brandon J. Terrifying high-tech porn: creepy 'deepfake' videos are on the rise. Fox News . 2018;20.
  • 24. Nguyen TT, Nguyen CM, Nguyen DT, Nguyen DT, Nahavandi S. Deep Learning for Deepfakes Creation and Detection. arXiv . 2019;1.
  • 25. Rossler A, Cozzolino D, Verdoliva L, Riess C, Thies J, Nießner M. Faceforensics++: Learning to detect manipulated facial images. In: IEEE International Conference on Computer Vision (ICCV); 2019. p. 1–11.
  • 26. Nirkin Y, Keller Y, Hassner T. Fsgan: Subject agnostic face swapping and reenactment. In: In Proc. of the IEEE International Conference on Computer Vision (ICCV); 2019. p. 7184–7193.
  • 28. Simko J, Hanakova M, Racsko P, Tomlein M, Moro R, Bielikova M. Fake news reading on social media: an eye-tracking study. In: In Proc. of the 30th ACM Conference on Hypertext and Social Media (HT); 2019. p. 221–230.
  • 35. Horne B, Adali S. This just in: Fake news packs a lot in title, uses simpler, repetitive content in text body, more similar to satire than real news. In: In Proc. of the 11th International AAAI Conference on Web and Social Media (ICWSM); 2017. p. 759–766.
  • 36. Golbeck J, Mauriello M, Auxier B, Bhanushali KH, Bonk C, Bouzaghrane MA, et al. Fake news vs satire: A dataset and analysis. In: In Proc. of the 10th ACM Conference on Web Science (WebSci); 2018. p. 17–21.
  • 37. Mustafaraj E, Metaxas PT. The fake news spreading plague: was it preventable? In: In Proc. of the 9th ACM Conference on Web Science (WebSci); 2017. p. 235–239.
  • 40. Jin Z, Cao J, Zhang Y, Luo J. News verification by exploiting conflicting social viewpoints in microblogs. In: In Proc. of the 13th AAAI Conference on Artificial Intelligence (AAAI); 2016. p. 2972–2978.
  • 41. Rubin VL, Conroy N, Chen Y, Cornwell S. Fake news or truth? using satirical cues to detect potentially misleading news. In: In Proc. of the Second Workshop on Computational Approaches to Deception Detection ; 2016. p. 7–17.
  • 45. Kahneman D, Tversky A. Prospect theory: An analysis of decision under risk. In: Handbook of the fundamentals of financial decision making: Part I. World Scientific; 2013. p. 99–127.
  • 46. Hanitzsch T, Wahl-Jorgensen K. Journalism studies: Developments, challenges, and future directions. The Handbook of Journalism Studies . 2020; p. 3–20.
  • 48. Osatuyi B, Hughes J. A tale of two internet news platforms-real vs. fake: An elaboration likelihood model perspective. In: In Proc. of the 51st Hawaii International Conference on System Sciences (HICSS); 2018. p. 3986–3994.
  • 49. Cacioppo JT, Petty RE. The elaboration likelihood model of persuasion. ACR North American Advances. 1984; p. 673–675.
  • 50. Wang LX, Ramachandran A, Chaintreau A. Measuring click and share dynamics on social media: a reproducible and validated approach. In Proc of the 10th International AAAI Conference on Web and Social Media (ICWSM). 2016; p. 108–113.
  • 51. Bowman S, Willis C. How audiences are shaping the future of news and information. We Media . 2003; p. 1–66.
  • 52. Hill E, Tiefenthäler A, Triebert C, Jordan D, Willis H, Stein R. 8 Minutes and 46 Seconds: How George Floyd Was Killed in Police Custody; 2020. Available from: https://www.nytimes.com/2020/06/18/us/george-floyd-timing.html .
  • 54. Carroll O. St Petersburg ‘troll farm’ had 90 dedicated staff working to influence US election campaign; 2017.
  • 55. Zannettou S, Caulfield T, Setzer W, Sirivianos M, Stringhini G, Blackburn J. Who let the trolls out? towards understanding state-sponsored trolls. In: Proc. of the 10th ACM Conference on Web Science (WebSci); 2019. p. 353–362.
  • 56. Vincent J. Watch Jordan Peele use AI to make Barack Obama deliver a PSA about fake news. The Verge . 2018;17.
  • 58. Linder M. Block. Mute. Unfriend. Tensions rise on Facebook after election results. Chicago Tribune . 2016;9.
  • 60. Howard PN, Kollanyi B. Bots, #StrongerIn, and #Brexit: computational propaganda during the UK-EU referendum. arXiv . 2016; p. arXiv–1606.
  • 61. Kasra M, Shen C, O’Brien JF. Seeing is believing: how people fail to identify fake images on the Web. In Proc of the 2018 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI). 2018; p. 1–6.
  • 62. Kirby EJ. The city getting rich from fake news. BBC News . 2016;5.
  • 63. Hu Z, Yang Z, Li Q, Zhang A, Huang Y. Infodemiological study on COVID-19 epidemic and COVID-19 infodemic. Preprints . 2020; p. 2020020380.
  • 71. Knaus C. Disinformation and lies are spreading faster than Australia’s bushfires. The Guardian . 2020;11.
  • 72. Karimi H, Roy P, Saba-Sadiya S, Tang J. Multi-source multi-class fake news detection. In: In Proc. of the 27th International Conference on Computational Linguistics ; 2018. p. 1546–1557.
  • 73. Wang WY. “Liar, liar pants on fire”: A new benchmark dataset for fake news detection. arXiv . 2017; p. arXiv–1705.
  • 74. Pérez-Rosas V, Kleinberg B, Lefevre A, Mihalcea R. Automatic Detection of Fake News. arXiv . 2017; p. arXiv–1708.
  • 75. Yang Y, Zheng L, Zhang J, Cui Q, Li Z, Yu PS. TI-CNN: Convolutional Neural Networks for Fake News Detection. arXiv . 2018; p. arXiv–1806.
  • 76. Kumar V, Khattar D, Gairola S, Kumar Lal Y, Varma V. Identifying clickbait: A multi-strategy approach using neural networks. In: In Proc. of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval (SIGIR); 2018. p. 1225–1228.
  • 77. Yoon S, Park K, Shin J, Lim H, Won S, Cha M, et al. Detecting incongruity between news headline and body text via a deep hierarchical encoder. In: Proc. of the AAAI Conference on Artificial Intelligence. vol. 33; 2019. p. 791–800.
  • 78. Lu Y, Zhang L, Xiao Y, Li Y. Simultaneously detecting fake reviews and review spammers using factor graph model. In: In Proc. of the 5th Annual ACM Web Science Conference (WebSci); 2013. p. 225–233.
  • 79. Mukherjee A, Venkataraman V, Liu B, Glance N. What yelp fake review filter might be doing? In: In Proc. of The International AAAI Conference on Weblogs and Social Media (ICWSM); 2013. p. 409–418.
  • 80. Benevenuto F, Magno G, Rodrigues T, Almeida V. Detecting spammers on twitter. In: In Proc. of the 8th Annual Collaboration , Electronic messaging , Anti-Abuse and Spam Conference (CEAS). vol. 6; 2010. p. 12.
  • 81. Lee K, Caverlee J, Webb S. Uncovering social spammers: social honeypots+ machine learning. In: In Proc. of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR); 2010. p. 435–442.
  • 82. Li FH, Huang M, Yang Y, Zhu X. Learning to identify review spam. In: In Proc. of the 22nd International Joint Conference on Artificial Intelligence (IJCAI); 2011. p. 2488–2493.
  • 83. Wang J, Wen R, Wu C, Huang Y, Xion J. Fdgars: Fraudster detection via graph convolutional networks in online app review system. In: In Proc. of The 2019 World Wide Web Conference (WWW); 2019. p. 310–316.
  • 84. Castillo C, Mendoza M, Poblete B. Information credibility on twitter. In: In Proc. of the 20th International Conference on World Wide Web (WWW); 2011. p. 675–684.
  • 85. Jo Y, Kim M, Han K. How Do Humans Assess the Credibility on Web Blogs: Qualifying and Verifying Human Factors with Machine Learning. In: In Proc. of the 2019 CHI Conference on Human Factors in Computing Systems (CHI); 2019. p. 1–12.
  • 86. Che X, Metaxa-Kakavouli D, Hancock JT. Fake News in the News: An Analysis of Partisan Coverage of the Fake News Phenomenon. In: In Proc. of the 21st ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW); 2018. p. 289–292.
  • 87. Potthast M, Kiesel J, Reinartz K, Bevendorff J, Stein B. A Stylometric Inquiry into Hyperpartisan and Fake News. arXiv . 2017; p. arXiv–1702.
  • 89. Popat K, Mukherjee S, Strötgen J, Weikum G. Credibility assessment of textual claims on the web. In: In Proc. of the 25th ACM International on Conference on Information and Knowledge Management (CIKM); 2016. p. 2173–2178.
  • 90. Shen TJ, Cowell R, Gupta A, Le T, Yadav A, Lee D. How gullible are you? Predicting susceptibility to fake news. In: In Proc. of the 10th ACM Conference on Web Science (WebSci); 2019. p. 287–288.
  • 91. Gupta A, Lamba H, Kumaraguru P, Joshi A. Faking sandy: characterizing and identifying fake images on twitter during hurricane sandy. In: In Proc. of the 22nd International Conference on World Wide Web ; 2013. p. 729–736.
  • 92. He P, Li H, Wang H. Detection of fake images via the ensemble of deep representations from multi color spaces. In: In Proc. of the 26th IEEE International Conference on Image Processing (ICIP). IEEE; 2019. p. 2299–2303.
  • 93. Sun Y, Chen Y, Wang X, Tang X. Deep learning face representation by joint identification-verification. Advances in Neural Information Processing Systems . 2014; p. 1–9.
  • 94. Huh M, Liu A, Owens A, Efros AA. Fighting fake news: Image splice detection via learned self-consistency. In: In Proc. of the European Conference on Computer Vision (ECCV); 2018. p. 101–117.
  • 95. Dang H, Liu F, Stehouwer J, Liu X, Jain AK. On the detection of digital face manipulation. In: In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020. p. 5781–5790.
  • 96. Tariq S, Lee S, Kim H, Shin Y, Woo SS. Detecting both machine and human created fake face images in the wild. In Proc of the 2nd International Workshop on Multimedia Privacy and Security (MPS). 2018; p. 81–87.
  • 97. Liu Z, Luo P, Wang X, Tang X. Deep learning face attributes in the wild. In: In Proc. of the IEEE International Conference on Computer Vision (ICCV); 2015. p. 3730–3738.
  • 98. Wang R, Ma L, Juefei-Xu F, Xie X, Wang J, Liu Y. Fakespotter: A simple baseline for spotting ai-synthesized fake faces. arXiv . 2019; p. arXiv–1909.
  • 99. Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks. In: In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2019. p. 4401–4410.
  • 100. Yang X, Li Y, Qi H, Lyu S. Exposing GAN-synthesized faces using landmark locations. In Proc of the ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec). 2019; p. 113–118.
  • 101. Zhang X, Karaman S, Chang SF. Detecting and simulating artifacts in gan fake images. In Proc of the 2019 IEEE International Workshop on Information Forensics and Security (WIFS). 2019; p. 1–6.
  • 102. Amerini I, Galteri L, Caldelli R, Del Bimbo A. Deepfake video detection through optical flow based cnn. In Proc of the IEEE International Conference on Computer Vision Workshops (ICCV). 2019; p. 1205–1207.
  • 103. Li Y, Lyu S. Exposing deepfake videos by detecting face warping artifacts. arXiv . 2018; p. 46–52.
  • 104. Korshunov P, Marcel S. Deepfakes: a new threat to face recognition? assessment and detection. arXiv . 2018; p. arXiv–1812.
  • 105. Jeon H, Bang Y, Woo SS. Faketalkerdetect: Effective and practical realistic neural talking head detection with a highly unbalanced dataset. In Proc of the IEEE International Conference on Computer Vision Workshops (ICCV). 2019; p. 1285–1287.
  • 106. Chung JS, Nagrani A, Zisserman A. Voxceleb2: Deep speaker recognition. arXiv . 2018; p. arXiv–1806.
  • 107. Songsri-in K, Zafeiriou S. Complement face forensic detection and localization with faciallandmarks. arXiv . 2019; p. arXiv–1910.
  • 108. Ma S, Cui L, Dai D, Wei F, Sun X. Livebot: Generating live video comments based on visual and textual contexts. In Proc of the AAAI Conference on Artificial Intelligence (AAAI). 2019; p. 6810–6817.
  • 109. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial nets. Advances in Neural Information Processing Systems . 2014; p. arXiv–1406.
  • 110. Metz R. The number of deepfake videos online is spiking. Most are porn; 2019. Available from: https://cnn.it/3xPJRT2 .
  • 111. Strömbäck J. In search of a standard: Four models of democracy and their normative implications for journalism. Journalism Studies . 2005; p. 331–345.
  • 112. Brenan M. Americans’ Trust in Mass Media Edges Down to 41%; 2019. Available from: https://bit.ly/3ejl6ql .
  • 114. Ladd JM. Why Americans hate the news media and how it matters. Princeton University Press; 2012.
  • 116. Weisberg J. Bubble trouble: Is web personalization turning us into solipsistic twits; 2011. Available from: https://bit.ly/3xOGFqD .
  • 117. Pariser E. The filter bubble: How the new personalized web is changing what we read and how we think. Penguin; 2011.
  • 118. Lewis P, McCormick E. How an ex-YouTube insider investigated its secret algorithm. The Guardian . 2018;2.
  • 120. Kavanaugh AL, Yang S, Li LT, Sheetz SD, Fox EA, et al. Microblogging in crisis situations: Mass protests in Iran, Tunisia, Egypt; 2011.
  • 121. Mustafaraj E, Metaxas PT, Finn S, Monroy-Hernández A. Hiding in Plain Sight: A Tale of Trust and Mistrust inside a Community of Citizen Reporters. In Proc of the 6th International AAAI Conference on Weblogs and Social Media (ICWSM) . 2012; p. 250–257.
  • 125. Tajfel H. Human groups and social categories: Studies in social psychology. Cup Archive ; 1981.
  • 127. Correia V, Festinger L. Biased argumentation and critical thinking. Rhetoric and Cognition: Theoretical Perspectives and Persuasive Strategies . 2014; p. 89–110.
  • 128. Festinger L. A theory of cognitive dissonance. Stanford University Press; 1957.
  • 136. John OP, Srivastava S, et al. The Big Five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of Personality: theory and research . 1999; p. 102–138.
  • 138. Shu K, Wang S, Liu H. Understanding user profiles on social media for fake news detection. In: 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). IEEE; 2018. p. 430–435.
  • 142. Costa PT, McCrae RR. The NEO personality inventory. Psychological Assessment Resources; 1985.
  • 146. Panetta K. Gartner top strategic predictions for 2018 and beyond; 2017. Available from: https://gtnr.it/33kuljQ .
  • 147. Doris-Down A, Versee H, Gilbert E. Political blend: an application designed to bring people together based on political differences. In Proc of the 6th International Conference on Communities and Technologies (C&T). 2013; p. 120–130.
  • 148. Karduni A, Wesslen R, Santhanam S, Cho I, Volkova S, Arendt D, et al. Can You Verifi This? Studying Uncertainty and Decision-Making About Misinformation Using Visual Analytics. In Proc of the 12th International AAAI Conference on Web and Social Media (ICWSM). 2018;12(1).
  • 149. Basol M, Roozenbeek J, van der Linden S. Good news about bad news: gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition . 2020;3(1).
  • 151. Tambuscio M, Ruffo G, Flammini A, Menczer F. Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks. In Proc of the 24th International Conference on World Wide Web (WWW). 2015; p. 977–982.
  • 152. Friggeri A, Adamic L, Eckles D, Cheng J. Rumor cascades. In Proc of the 8th International AAAI Conference on Weblogs and Social Media (ICWSM) . 2014;8.
  • 153. Lerman K, Ghosh R. Information contagion: An empirical study of the spread of news on digg and twitter social networks. arXiv . 2010; p. arXiv–1003.
  • 155. Cantril H. The invasion from Mars: A study in the psychology of panic. Transaction Publishers; 1952.
  • 158. Bailey NT, et al. The mathematical theory of infectious diseases and its applications. Charles Griffin & Company Ltd; 1975.
  • 159. Pew Forum on Religion & Public Life. Growing Number of Americans Say Obama Is a Muslim; 2010.
  • 160. Craik KJW. The nature of explanation. Cambridge University Press; 1943.
  • 161. Johnson-Laird PN. Mental models: Towards a cognitive science of language, inference, and consciousness. 6. Harvard University Press; 1983.
  • 162. Johnson-Laird PN, Girotto V, Legrenzi P. Mental models: a gentle guide for outsiders. Sistemi Intelligenti . 1998;9(68).
  • 164. Rouse WB, Morris NM. On looking into the black box: Prospects and limits in the search for mental models. Psychological Bulletin . 1986;100(3).
  • 166. Wash R, Rader E. Influencing mental models of security: a research agenda. In Proc of the 2011 New Security Paradigms Workshop (NSPW). 2011; p. 57–66.
  • 167. Tversky B. Cognitive maps, cognitive collages, and spatial mental models. In Proc of European conference on spatial information theory (COSIT). 1993; p. 14–24.
  • 169. Mayer RE, Mathias A, Wetzell K. Fostering understanding of multimedia messages through pre-training: Evidence for a two-stage theory of mental model construction. Journal of Experimental Psychology: Applied . 2002;8(3).
  • 172. Morgan MG, Fischhoff B, Bostrom A, Atman CJ, et al. Risk communication: A mental models approach. Cambridge University Press; 2002.
  • 174. Kang R, Dabbish L, Fruchter N, Kiesler S. “My Data Just Goes Everywhere:” User mental models of the internet and implications for privacy and security. In Proc of 11th Symposium On Usable Privacy and Security . 2015; p. 39–52.
  • 178. Facebook Journalism Project. Facebook’s Approach to Fact-Checking: How It Works; 2020. https://bit.ly/34QgOlj .
  • 179. Sardarizadeh S. Instagram fact-check: Can a new flagging tool stop fake news?; 2019. Available from: https://bbc.in/33fg5ZR .
  • 180. Greenfield S. Mind change: How digital technologies are leaving their mark on our brains. Random House Incorporated ; 2015.
  • 181. European Commission. European Media Literacy Week; 2020. https://bit.ly/36H9MR3 .
  • 182. Media Literacy Now. U.S. media literacy policy report 2020; 2020. https://bit.ly/33LkLqQ .


  • Open access
  • Published: 18 May 2024

Emotions unveiled: detecting COVID-19 fake news on social media

  • Bahareh Farhoudinia   ORCID: orcid.org/0000-0002-2294-8885 1 ,
  • Selcen Ozturkcan   ORCID: orcid.org/0000-0003-2248-0802 1 , 2 &
  • Nihat Kasap   ORCID: orcid.org/0000-0001-5435-6633 1  

Humanities and Social Sciences Communications , volume 11, Article number: 640 (2024)


  • Business and management
  • Science, technology and society

The COVID-19 pandemic has highlighted the pernicious effects of fake news, underscoring the critical need for researchers and practitioners to detect and mitigate its spread. In this paper, we examined the importance of detecting fake news and incorporated sentiment and emotional features to detect this type of news. Specifically, we compared the sentiments and emotions associated with fake and real news using a COVID-19 Twitter dataset with labeled categories. By utilizing different sentiment and emotion lexicons, we extracted sentiments categorized as positive, negative, and neutral, as well as eight basic emotions: anticipation, anger, joy, sadness, surprise, fear, trust, and disgust. Our analysis revealed that fake news tends to elicit more negative emotions than real news. Therefore, we propose that negative emotions could serve as vital features in developing fake news detection models. To test this hypothesis, we compared the performance metrics of three machine learning models: random forest, support vector machine (SVM), and naïve Bayes. We evaluated the models' effectiveness with and without emotional features. Our results demonstrated that integrating emotional features into these models substantially improved detection performance, resulting in a more robust and reliable ability to detect fake news on social media. In this paper, we propose the use of novel features and methods that enhance the field of fake news detection. Our findings underscore the crucial role of emotions in detecting fake news and provide valuable insights into how machine-learning models can be trained to recognize these features.


Introduction

Social media has changed human life in multiple ways. People from all around the world are connected via social media. Seeking information, entertainment, communicatory utility, convenience utility, expressing opinions, and sharing information are some of the gratifications of social media (Whiting and Williams, 2013 ). Social media also benefits political parties and companies, as it lets them connect better with their audiences (Kumar et al., 2016 ). Despite all the benefits that social media adds to our lives, its use also has disadvantages. The emergence of fake news is one of the most important and dangerous consequences of social media (Baccarella et al., 2018 , 2020 ). Zhou et al. ( 2019 ) suggested that fake news threatens public trust, democracy, justice, freedom of expression, and the economy. In the 2016 United States (US) presidential election, fake news engagement outperformed mainstream news engagement and significantly impacted the election results (Silverman, 2016 ). In addition to political issues, fake news can cause irrecoverable damage to companies. For instance, Pepsi stock fell by 4% in 2016 when a fake story about the company's CEO spread on social media (Berthon and Pitt, 2018 ). During the COVID-19 pandemic, fake news caused serious problems; for example, people in Europe burned 5G towers because of a rumor claiming that these towers damaged the human immune system (Mourad et al., 2020 ). The World Health Organization (WHO) asserted that misinformation and propaganda propagated more rapidly than the COVID-19 pandemic itself, leading to psychological panic, the circulation of misleading medical advice, and economic crisis.

This study, which is part of a completed PhD thesis (Farhoudinia, 2023 ), focuses on analyzing the emotions and sentiments elicited by fake news in the context of COVID-19. The purpose of this paper is to investigate how emotions can help detect fake news. This study aims to address the following research questions: 1. How do the sentiments associated with real news and fake news differ? 2. How do the emotions elicited by fake news differ from those elicited by real news? 3. What particular emotions are most prevalent in fake news? 4. How can these emotions be used to recognize fake news on social media?

This paper is arranged into six sections: Section “Related studies” reviews the related studies; Section “Methods” explains the proposed methodology; and Section “Results and analysis” presents the implemented models, analysis, and related results in detail. Section “Discussion and limitations” discusses the research limitations, and the conclusion of the study is presented in Section “Conclusion”.

Related studies

Research in the field of fake news began following the 2016 US election (Carlson, 2020 ; Wang et al., 2019 ). Fake news has been a popular topic in multiple disciplines, such as journalism, psychology, marketing, management, health care, political science, information science, and computer science (Farhoudinia et al., 2023 ). Therefore, fake news has not been defined in a single way. According to Berthon and Pitt ( 2018 ), misinformation is the term used to describe the unintentional spread of fake news, whereas disinformation describes the intentional spread of fake news to mislead people or attack an idea, a person, or a company (Allcott and Gentzkow, 2017 ). Digital assets such as images and videos can be used to spread fake news (Rajamma et al., 2019 ). Advancements in computer graphics, computer vision, and machine learning have made it feasible to create fake images or videos by merging existing ones (Agarwal et al., 2020 ). Additionally, deepfake videos pose a risk to public figures, businesses, and individuals in the media, and detecting deepfakes is challenging, if not impossible, for humans.

The reasons for believing and sharing fake news have attracted the attention of several researchers (e.g., Al-Rawi et al., 2019 ; Apuke and Omar, 2020 ; Talwar, Dhir et al., 2019 ). Studies have shown that people have a tendency to favor news that reinforces their existing beliefs, a cognitive phenomenon known as confirmation bias. This inclination can lead individuals to embrace misinformation that aligns with their preconceived notions (Kim and Dennis, 2019 ; Meel and Vishwakarma, 2020 ). Although earlier research focused significantly on the factors that lead people to believe and spread fake news, it is equally important to understand the cognitive mechanisms involved in this process. These cognitive mechanisms, as proposed by Kahneman ( 2011 ), center on two distinct systems of thinking. In system-one cognition, conclusions are made without deep or conscious thoughts; however, in system-two cognition, there is a deeper analysis before decisions are made. Based on Moravec et al. ( 2020 ), social media users evaluate news using ‘system-one’ cognition; therefore, they believe and share fake news without deep thinking. It is essential to delve deeper into the structural aspects of social media platforms that enable the rapid spread of fake news. Social media platforms are structured to show that posts and news are aligned with users’ ideas and beliefs, which is known as the root cause of the echo chamber effect (Cinelli et al., 2021 ). The echo chamber effect has been introduced as an aspect that causes people to believe and share fake news on social media (e.g., Allcott and Gentzkow, 2017 ; Berthon and Pitt, 2018 ; Chua and Banerjee, 2018 ; Peterson, 2019 ).

In the context of our study, we emphasize the existing body of research that specifically addresses the detection of fake news (Al-Rawi et al., 2019 ; Faustini and Covões, 2020 ; Ozbay and Alatas, 2020 ; Raza and Ding, 2022 ). Numerous studies that are closely aligned with the themes of our present investigation have delved into methodological approaches for identifying fake news (Er and Yılmaz, 2023 ; Hamed et al., 2023 ; Iwendi et al., 2022 ). Fake news detection methods are classified into three categories: (i) content-based, (ii) social context, and (iii) propagation-based methods. (i) Content-based fake news detection models are based on the content and linguistic features of the news rather than user and propagation characteristics (Zhou and Zafarani, 2019 , p. 49). (ii) Fake news detection based on social context employs user demographics such as age, gender, education, and follower–followee relationships of the fake news publishers as features to recognize fake news (Jarrahi and Safari, 2023 ). (iii) Propagation-based approaches are based on the spread of news on social media. The input of the propagation-based fake news detection model is a cascade of news, not text or user profiles. Cascade size, cascade depth, cascade breadth, and node degree are common features of detection models (Giglietto et al., 2019 ; de Regt et al., 2020 ; Vosoughi et al., 2018 ).

Machine learning methods are widely used in the literature because they enable researchers to handle and process large datasets (Ongsulee, 2017 ). The use of machine learning in fake news research has been extremely beneficial, especially in the domains of content-based, social context-based, and propagation-based fake news identification. These methods leverage the advantages of a range of characteristics, including sentiment-related, propagation, temporal, visual, linguistic, and user/account aspects. Fake news detection frequently makes use of machine learning techniques such as logistic regressions, decision trees, random forests, naïve Bayes, and support vector machine (SVM). Studies on the identification of fake news also include deep learning models, such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks, which can provide better accuracy in certain situations. Even with a small amount of training data, pretrained language models such as bidirectional encoder representations from transformers (BERT) show potential for identifying fake news (Kaliyar et al., 2021 ). Amer et al. ( 2022 ) investigated the usefulness of these models in benchmark studies covering different topics.
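
For concreteness, a minimal content-based baseline in the spirit of the methods listed above might combine TF-IDF text features with a linear SVM; the two training tweets and their labels below are placeholders, not a benchmark dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["5G towers spread the virus, share before it's deleted!",
         "Health authorities published updated vaccination guidance today."]
labels = [1, 0]  # 1 = fake, 0 = real

# TF-IDF unigrams/bigrams feeding a linear SVM classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["Secret cure deleted by the media, share now!"]))
```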

The role of emotions in identifying fake news within academic communities remains an area with considerable potential for additional research. Despite many theoretical and empirical studies, this topic remains inadequately investigated. Ainapure et al. ( 2023 ) analyzed the sentiments elicited by tweets in India during the COVID-19 pandemic with deep learning and lexicon-based techniques using the valence-aware dictionary and sentiment reasoner (Vader) and National Research Council (NRC) lexicons to understand the public’s concerns. Dey et al. ( 2018 ) applied several natural language processing (NLP) methods, such as sentiment analysis, to a dataset of tweets about the 2016 U.S. presidential election. They found that fake news had a strong tendency toward negative sentiment; however, their dataset was too limited (200 tweets) to provide a general understanding. Cui et al. ( 2019 ) found that sentiment analysis was the best-performing component in their fake news detection framework. Ajao et al. ( 2019 ) studied the hypothesis that a relationship exists between fake news and the sentiments elicited by such news. The authors tested hypotheses with different machine learning classifiers. The best results were obtained by sentiment-aware classifiers. Pennycook and Rand ( 2020 ) argued that reasoning and analytical thinking help uncover news credibility; therefore, individuals who engage in reasoning are less likely to believe fake news. Prior psychology research suggests that an increase in the use of reason implies a decrease in the use of emotions (Mercer, 2010 ).

In this study, we apply sentiment analysis to the more general topic of fake news detection, focusing on tweets shared during the COVID-19 pandemic. Many scholars have focused on the effects of media reports that provided comprehensive information and explanations about the virus. However, there is still a gap in the literature on the characteristics and spread of fake news during the COVID-19 pandemic, and a comprehensive study can enhance preparedness for any similar future crisis. The aim of this study is to answer the question of how emotions aid in fake news detection during the COVID-19 pandemic. Our hypothesis is that fake news carries negative emotions and is written with different emotions and sentiments than real news; we therefore expect to extract more negative sentiments and emotions from fake news than from real news. Existing work on fake news detection has focused mainly on news content and social context, while emotional information has been underutilized (Ajao et al., 2019 ). We extract sentiments and eight basic emotions from every tweet in the COVID-19 Twitter dataset and use these features to classify fake and real news. The results indicate how emotions can be used to differentiate and detect fake and real news.

We employed a multifaceted approach to analyze tweet text and discern sentiment and emotion. The steps were as follows: (a) lexicons such as Vader, TextBlob, and SentiWordNet were used to identify the sentiments embedded in the tweet content; (b) the NRC emotion lexicon was used to recognize the range of emotions expressed in the tweets; and (c) machine learning models (random forest, naïve Bayes, and SVM classifiers) and a deep learning model (BERT) were applied to the data for fake news detection, both with and without emotion features. This approach allowed us to capture nuanced patterns and dependencies within the tweet data, contributing to a more effective analysis of fake news content on social media.

An open-science, publicly available dataset was used. The dataset comprises 10,700 English tweets with hashtags relevant to COVID-19, each labeled as real or fake. Previously used by Vasist and Sebastian (2022) and Suter et al. (2022), the manually annotated dataset was compiled by Patwa et al. (2021) in September 2020 and includes tweets posted in August and September 2020. According to their classification, the dataset is balanced, with 5600 real news stories and 5100 fake news stories. Fake news items were sourced from public fact-checking websites and social media outlets, with manual verification against the original documents. Web-based resources, including social media posts and fact-checking websites such as PolitiFact and Snopes, played a key role in collecting and adjudicating details on the veracity of claims related to COVID-19. For real news, tweets from official and verified sources were gathered, and each tweet was assessed by human reviewers based on its contribution of relevant information about COVID-19 (Patwa et al., 2021; Table 2 on p. 4 of Suter et al., 2022, which is excerpted from Patwa et al. (2021), also provides an illustrative overview).
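For orientation, a minimal sketch of loading the dataset with pandas is shown below; the file name and column names ("Constraint_Train.csv", "tweet", "label") are assumptions based on the public release and may need adjusting to the actual files.

```python
# Minimal loading sketch; file and column names are assumptions.
import pandas as pd

df = pd.read_csv("Constraint_Train.csv")   # hypothetical file name
print(df["label"].value_counts())          # expected values: "real" / "fake"
print(df.sample(3)[["tweet", "label"]])    # column names assumed
```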

Preprocessing is an essential step in any data analysis, especially with textual data, and appropriate preprocessing can significantly enhance model performance. The following preprocessing steps were applied to the dataset: removing all non-alphabetic characters, converting letters to lowercase, deleting stop words such as “a,” “the,” “is,” and “are,” which carry very little useful information, and performing lemmatization. The text data were then transformed into quantitative data with the scikit-learn ordinal encoder class.
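The following is a minimal Python sketch of such a preprocessing pipeline, assuming NLTK’s stop word list and WordNet lemmatizer; it illustrates the steps rather than reproducing the authors’ exact code.

```python
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)   # one-time corpus downloads
nltk.download("wordnet", quiet=True)

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(text: str) -> str:
    text = re.sub(r"[^A-Za-z\s]", " ", text)             # keep alphabetic characters only
    tokens = text.lower().split()                        # lowercase and tokenize
    tokens = [t for t in tokens if t not in STOP_WORDS]  # drop stop words
    return " ".join(LEMMATIZER.lemmatize(t) for t in tokens)

print(preprocess("COVID-19 vaccines ARE being tested in 3 hospitals!"))
# -> "covid vaccine tested hospital"
```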

The stages involved in this research are depicted in a high-level schematic that is shown in Fig. 1 . First, the sentiments and emotions elicited by the tweets were extracted, and then, after studying the differences between fake and real news in terms of sentiments and emotions, these characteristics were utilized to construct fake news detection models.

Figure 1. High-level schematic of the stages involved in this research.

Sentiment analysis

Sentiment analysis is the process of deriving the sentiment of a piece of text from its content (Vinodhini and Chandrasekaran, 2012). As a subfield of natural language processing, it is widely used to analyze reviews of products or services and social media posts related to different topics, events, products, or companies (Wankhade et al., 2022). One major application of sentiment analysis is strategic marketing: Păvăloaia et al. (2019), in a comprehensive study of Coca-Cola and PepsiCo, confirmed that the social media activity of these two brands has an emotional impact on existing and future customers and that customers’ emotional reactions on social media can influence purchasing decisions. There are two families of sentiment analysis methods: lexicon-based and machine learning approaches. Lexicon-based sentiment analysis uses a collection of known sentiments, divided into dictionary-based or corpus-based lexicons (Pawar et al., 2015). These lexicons help researchers derive the sentiments expressed in a text document. Numerous dictionaries, such as Vader (Hutto and Gilbert, 2014), SentiWordNet (Esuli and Sebastiani, 2006), and TextBlob (Loria, 2018), are available for scholarly research.

In this research, Vader, TextBlob, and SentiWordNet are the three lexicons used to extract the sentiments generated from tweets. The Vader lexicon is an open-source lexicon attuned specifically to social media (Hutto and Gilbert, 2014 ). TextBlob is a Python library that processes text specifically designed for natural language analysis (Loria, 2018 ), and SentiWordNet is an opinion lexicon adapted from the WordNet database (Esuli and Sebastiani, 2006 ). Figure 2 shows the steps for the sentiment analysis of tweets.

Figure 2. Steps for the sentiment analysis of tweets.
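As an illustration of the scoring step, the sketch below derives three-class sentiment labels with Vader and TextBlob; the ±0.05 threshold on the polarity score is a common convention, not a value reported by the authors, and SentiWordNet scores would be aggregated token by token via nltk.corpus.sentiwordnet in a similar fashion.

```python
from textblob import TextBlob
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

vader = SentimentIntensityAnalyzer()

def to_label(score: float, threshold: float = 0.05) -> str:
    # Map a polarity score in [-1, 1] to a three-class label.
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"

tweet = "The new treatment gives patients real hope"
vader_label = to_label(vader.polarity_scores(tweet)["compound"])
textblob_label = to_label(TextBlob(tweet).sentiment.polarity)
print(vader_label, textblob_label)
```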

Different methods and steps were used to choose the best lexicon. First, a random partition of the dataset was manually labeled as positive, negative, or neutral. The output of each lexicon was compared with the manual labels, and the performance metrics for each lexicon are reported in Table 1. Second, assuming that misclassifying positive or negative tweets as neutral is less critical than misclassifying negative tweets as positive (or vice versa), the neutral tweets were ignored, and a comparison was made on only the positive and negative tweets. The three-class and two-class classification metrics are compared in Table 1.
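A comparison of this kind can be sketched as follows, assuming a small manually labeled sample; the label arrays are placeholders for illustration.

```python
from sklearn.metrics import accuracy_score, classification_report

# Placeholder manual ("gold") labels and one lexicon's predictions.
gold = ["positive", "negative", "neutral", "negative", "positive", "neutral"]
pred = ["positive", "negative", "negative", "negative", "neutral", "neutral"]

# Three-class comparison.
print(classification_report(gold, pred, zero_division=0))

# Two-class comparison: drop tweets whose gold label is neutral.
kept = [(g, p) for g, p in zip(gold, pred) if g != "neutral"]
gold2, pred2 = zip(*kept)
print(accuracy_score(gold2, pred2))
```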

Third, this study’s primary goal was to identify the precise distinctions between fake and real tweets to improve the detection algorithm. We therefore examined how well fake news was detected using each of the three sentiment lexicons, since they produced different results: a fake news detection model was trained on the dataset using the outputs of Vader, TextBlob, and SentiWordNet in turn. As previously indicated, the dataset includes labels for fake and real news, which allows supervised machine learning detection models to be applied and their performance to be evaluated. The random forest algorithm, a supervised machine learning method, has achieved good performance in text classification. The dataset contains many tweets consisting of numerical data reporting the numbers of hospitalized, deceased, and recovered individuals; such tweets do not carry any sentiment. During this phase, tweets containing numerical data, which constituted 20% of the total, were excluded. Table 2 reports the classification power of the three lexicons on the nonnumerical data. The models were more accurate when using sentiments drawn from Vader, suggesting that the Vader lexicon separates fake and real news more effectively. Vader was therefore selected as the best sentiment lexicon after evaluating all three processes. The steps for choosing the best lexicon are presented in Fig. 3 (also see Appendix A in Supplementary Information for further details on the procedure). Based on the results obtained with Vader, tweets labeled as fake include more negative sentiments than real tweets, whereas real tweets include more positive sentiments.

Figure 3. Steps for choosing the best lexicon.

Emotion extraction

Emotions elicited in tweets were extracted using the NRC emotion lexicon. This lexicon measures emotional affect in a body of text, contains ~27,000 words, and is based on the National Research Council Canada’s affect lexicon and the natural language toolkit (NLTK) library’s WordNet synonym sets (Mohammad and Turney, 2013). The lexicon provides scores for eight emotions based on Plutchik’s model of emotion (Plutchik, 1980): joy, trust, fear, surprise, sadness, anticipation, anger, and disgust. These emotions can be grouped into four opposing pairs: joy–sadness, anger–fear, trust–disgust, and anticipation–surprise. The NRC lexicon assigns each text the emotion with the highest score. Emotion scores were extracted for every tweet in the dataset and used as features for the fake news detection model. The model features comprise the text of the tweet, its sentiment, and the eight emotions. The model was trained with 80% of the data and tested with 20%. Fake news had a greater prevalence of negative emotions, such as fear, disgust, and anger, than real news, while real news had a greater prevalence of positive emotions, such as anticipation, joy, and surprise.
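A possible implementation of this extraction step uses the NRCLex Python package, which wraps the NRC lexicon; the package choice is our assumption (the authors do not name their tooling), and its output keys can vary slightly across versions.

```python
from nrclex import NRCLex   # pip install NRCLex

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def emotion_scores(text: str) -> dict:
    counts = NRCLex(text).raw_emotion_scores          # lexicon word counts per affect
    return {e: counts.get(e, 0) for e in EMOTIONS}

scores = emotion_scores("Officials warn of a dangerous new wave of infections")
dominant = max(scores, key=scores.get)                # emotion with the highest score
print(scores, dominant)
```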

Fake news detection

In the present study, the dataset was divided into a training set (80%) and a test set (20%) and analyzed using three machine learning models: random forest, SVM, and naïve Bayes. Appendices A and B provide information on how the results were obtained and how they relate to the research corpus.

Random forest: An ensemble learning approach that fits several decision trees to random subsets of the data. This classifier is popular for text classification, high-dimensional data, and feature importance analysis because it overfits less than individual decision trees. The random forest classifier in scikit-learn was used in this study (Breiman, 2001).

Naïve Bayes: This model applies Bayes’ theorem to classification problems such as sorting documents into groups and blocking spam. It works well with text data and is simple, robust, and suitable for multi-class problems. The naïve Bayes classifier from scikit-learn was used in this study (Zhang, 2004).

Support vector machines (SVMs): Supervised learning methods used for outlier detection, classification, and regression. They work well with high-dimensional data and find the maximum-margin hyperplanes that separate classes. In this study, the SVM model from scikit-learn was used (Cortes and Vapnik, 1995).
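The following sketch shows how the three scikit-learn classifiers with default hyperparameters could be trained and scored on an 80/20 split; X and y stand for the encoded feature matrix and the fake/real labels, and GaussianNB is one of several naïve Bayes variants the authors might have used.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def evaluate_models(X, y):
    # 80/20 split, matching the setup described in the text.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
    models = {
        "random forest": RandomForestClassifier(),   # default hyperparameters
        "naive Bayes": GaussianNB(),                 # assumed NB variant
        "SVM": SVC(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, model.predict(X_te)))
```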

Deep learning models can automatically learn hierarchical representations of data, making them useful for tasks such as identifying fake news (Salakhutdinov et al., 2012). A language model named bidirectional encoder representations from transformers (BERT) was also used in this study for fake news detection.

BERT: A state-of-the-art NLP model that uses deep neural networks and bidirectional learning; it draws on context from both sides of a word in a sentence, which helps it capture the meaning of text. BERT is pretrained on large datasets and can be fine-tuned for specific applications to capture unique data patterns and contexts (Devlin et al., 2018).

In summary, we applied machine learning models (random forest, naïve Bayes, and SVM) and a deep learning model (BERT) to analyze text data for fake news detection. The impact of emotion features on detecting fake news was compared between models that include these features and models that do not include these features. We found that adding emotion scores as features to machine learning and deep learning models for fake news detection can improve the model’s accuracy. A more detailed analysis of the results is given in the section “Results and analysis”.

Results and analysis

In the sentiment analysis using tweets from the dataset, positive and negative sentiment tweets were categorized into two classes: fake and real. Figure 4 shows a visual representation of the differences, while the percentages of the included categories are presented in Table 3 . In fake news, the number of negative sentiments is greater than the number of positive sentiments (39.31% vs. 31.15%), confirming our initial hypothesis that fake news disseminators use extreme negative emotions to attract readers’ attention.

Figure 4. Distribution of sentiments in the fake and real news classes.

Fake news disseminators aim to attack or satirize an idea, a person, or a brand using negative words and emotions. Baumeister et al. (2001) suggested that negative events are stronger than positive events and have a more significant impact on individuals. Accordingly, individuals sharing fake news tend to express more negativity for greater impressiveness. Specific topics of the COVID-19 pandemic, such as the source of the virus, cures for the illness, government strategies against the spread of the virus, and the rollout of vaccines, are controversial, and these controversial topics have become targets of fake news featuring negative sentiments (Frenkel et al., 2020; Pennycook et al., 2020). In real news, the pattern is reversed, and positive sentiments are much more frequent than negative sentiments (46.45% vs. 35.20%). Considering that real news is spread by reliable news channels, we can conclude that such channels frame news with positive sentiments so as not to harm their audience psychologically.

The eight scores for the eight emotions of anger, anticipation, disgust, fear, joy, sadness, surprise, and trust were extracted from the NRC emotion lexicon for every tweet. Each text was assigned the emotion with the highest score. Table 4 and Fig. 5 include more detailed information about the emotion distribution.

Figure 5. Detailed distribution of emotions in the fake and real news classes.

The NRC lexicon provides scores for each emotion. Therefore, the intensities of emotions can also be compared. Table 5 shows the average score of each emotion for the two classes, fake and real news.

A two-sample t -test was performed using the pingouin (PyPI) statistical package in Python (Vallat, 2018 ) to determine whether the difference between the two groups was significant (Tables 6 and 7 ).
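A minimal sketch of such a test with pingouin, using synthetic placeholder scores in place of the real per-tweet emotion values, might look as follows.

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
fear_fake = rng.normal(0.30, 0.10, 500)   # placeholder per-tweet fear scores (fake class)
fear_real = rng.normal(0.25, 0.10, 500)   # placeholder per-tweet fear scores (real class)

result = pg.ttest(fear_fake, fear_real)   # two-sample (unpaired) t-test
print(result[["T", "dof", "p-val", "cohen-d"]])
```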

As shown in Table 6 , the P values indicate that the differences in fear, anger, trust, surprise, disgust, and anticipation were significant; however, for sadness and joy, the difference between the two groups of fake and real news was not significant. Considering the statistics provided in Tables 4 , 5 , and Fig. 5 , the following conclusions can be drawn:

  • Anger, disgust, and fear are more commonly elicited in fake news than in real news.
  • Anticipation and surprise are more commonly elicited in real news than in fake news.
  • Fear is the most commonly elicited emotion in both fake and real news.
  • Trust is the second most commonly elicited emotion in both fake and real news.
  • The most significant differences were observed for trust, fear, and anticipation (5.92%, 5.33%, and 3.05%, respectively); the differences between fake and real news in terms of joy and sadness were not significant.

In terms of intensity, based on Table 5:

  • Fear is the most strongly elicited emotion in both fake and real news; however, fake news has a higher fear intensity score than real news.
  • Trust is the second most strongly elicited emotion in both categories (real and fake) but is more powerful in real news.
  • Positive emotions, such as anticipation, surprise, and trust, are more strongly elicited in real news than in fake news.
  • Anger, disgust, and fear are among the stronger emotions elicited by fake news; joy and sadness are elicited almost equally in both classes.

During the COVID-19 pandemic, fake news disseminators seized the opportunity to create fearful messages aligned with their objectives. The presence of fear in real news is also unsurprising given the extraordinary circumstances of the pandemic. The most crucial point of the analysis is the significant presence of negative emotions in fake news, which confirms our hypothesis that fake news elicits extremely negative emotions. Positive emotions such as anticipation, joy, and surprise are elicited more often in real news than in fake news, which also aligns with our hypothesis. The largest differences in elicited emotions were observed for trust, fear, and anticipation.

We used nine features for every tweet in the dataset: one sentiment score and eight emotion scores. These features were used in supervised machine learning fake news detection models. A schematic explanation of the models is given in Fig. 6. The dataset was divided into training and test sets with an 80%–20% split. The scikit-learn random forest, SVM, and naïve Bayes machine learning models with default hyperparameters were implemented using the emotion features to detect fake news in the nonnumerical data. We then compared the predictive power of these models with that of models trained without the emotion features. The performance metrics of the models, such as accuracy, precision, recall, and F1-score, are given in Table 7.

Figure 6. Schematic explanation of the fake news detection models.
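The with/without-emotions comparison can be sketched as below; the column naming (a "sentiment" column, eight emotion columns, encoded-text columns prefixed "text_") is our assumption, not the authors’ schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

EMOTION_COLS = ["sentiment", "anger", "anticipation", "disgust",
                "fear", "joy", "sadness", "surprise", "trust"]

def compare_feature_sets(df: pd.DataFrame):
    text_cols = [c for c in df.columns if c.startswith("text_")]  # encoded text features
    for name, cols in [("text only", text_cols),
                       ("text + emotions", text_cols + EMOTION_COLS)]:
        X_tr, X_te, y_tr, y_te = train_test_split(
            df[cols], df["label"], test_size=0.2, random_state=42)
        clf = RandomForestClassifier().fit(X_tr, y_tr)
        print(name, accuracy_score(y_te, clf.predict(X_te)))
```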

When joy and sadness were removed from the models, the accuracy decreased. Thus, the models performed better when all the features were included (see Table C.1. Feature correlation scores in Supplementary Information). The results confirmed that elicited emotions can help identify fake and real news. Adding emotion features to the detection models significantly increased the performance metrics. Figure 7 presents the importance of the emotion features used in the random forest model.

Figure 7. Importance of the emotion features used in the random forest model.
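Reading such importances from a fitted scikit-learn random forest is straightforward; the helper below is illustrative and assumes a fitted classifier and a matching list of feature names.

```python
import numpy as np

def show_importances(clf, feature_names, top_k=10):
    # clf is a fitted RandomForestClassifier; feature_names matches its input columns.
    order = np.argsort(clf.feature_importances_)[::-1][:top_k]
    for i in order:
        print(f"{feature_names[i]:>15s}  {clf.feature_importances_[i]:.3f}")
```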

In the random forest classifier, the predominant attributes were anticipation, trust, and fear. The difference in emotion distribution between the fake and real news classes was also larger for anticipation, trust, and fear. This suggests that fear, trust, and anticipation have good power to differentiate fake from real news.

BERT was the other model employed for fake news detection using emotion features. The BERT pipeline includes a number of preprocessing stages. The text input is segmented using the BERT tokenizer, with sequence truncation and padding ensuring that the length does not exceed 128 tokens, a reduction from the usual 512 tokens due to computing resource constraints. The optimization process used the AdamW optimizer with a learning rate of 0.00001. To determine the best number of training cycles, 5-fold cross-validation was applied, which established that three epochs were optimal. The model was run on Google Colab in Python and evaluated on the test set after training. Table 8 shows the performance of the BERT model with and without emotions as features.
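A condensed sketch of this fine-tuning setup with the Hugging Face transformers library follows; the checkpoint name, label coding, and single-example batch are assumptions for illustration, and the 5-fold cross-validation and emotion-feature wiring are omitted.

```python
import torch
from torch.optim import AdamW
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = AdamW(model.parameters(), lr=1e-5)        # learning rate from the text

texts = ["example tweet about covid"]                 # placeholder batch
labels = torch.tensor([1])                            # assumed coding: 1 = fake, 0 = real
enc = tokenizer(texts, truncation=True, padding="max_length",
                max_length=128, return_tensors="pt")  # 128-token limit, as described

model.train()
for epoch in range(3):                                # three epochs, per the 5-fold CV result
    optimizer.zero_grad()
    out = model(**enc, labels=labels)
    out.loss.backward()
    optimizer.step()
    print(epoch, out.loss.item())
```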

The results indicate that adding emotion features had a positive impact on the performance of the random forest, SVM, and BERT models; however, the naïve Bayes model achieved better performance without adding emotion features.

Discussion and limitations

This research makes a substantial contribution to the domain of fake news detection. The goal was to explore the range of sentiments and emotional responses linked to both real and fake news in order to fulfill the research aims and address the posed questions. By identifying elicited emotions as key indicators of fake news, this study adds valuable insights to the existing body of related scholarly work.

Our research revealed that fake news triggers a higher incidence of negative emotions than real news. Sentiment analysis indicated that creators of fake news on social media platforms tend to invoke more negative sentiments than positive ones, whereas real news generally elicits more positive sentiments than negative ones. We extracted eight emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) from each tweet analyzed. Negative and potent emotions such as fear, disgust, and anger were more frequently elicited by fake news, whereas real news was more likely to arouse lighter, positive emotions such as anticipation, joy, and surprise. The difference in emotional response extended beyond the range of emotions to their intensity, with negative feelings like fear, anger, and disgust being more pronounced in fake news. We suggest that including emotional analysis in the development of automated fake news detection algorithms could improve the effectiveness of machine learning and deep learning models such as those designed in this study.

Due to negativity bias (Baumeister et al., 2001), bad news, emotions, and feedback tend to have an outsized influence relative to positive experiences, suggesting that humans assign greater weight to negative events than to positive ones (Lewicka et al., 1992). Our findings indicate that similar effects appear in social media user behavior, such as sharing and retweeting. Furthermore, adding emotional features to the fake news detection models was found to improve their performance, providing an opportunity to investigate their moderating effects on fake news dissemination in future research.

The majority of the current research on identifying fake news involves analyzing the social environment and news content (Amer et al., 2022 ; Jarrahi and Safari, 2023 ; Raza and Ding, 2022 ). Despite its possible importance, the investigation of emotional data has not received sufficient attention in the past (Ajao et al., 2019 ). Although sentiment in fake news has been studied in the literature, earlier studies mostly neglected a detailed examination of certain emotions. Dey et al. ( 2018 ) contributed to this field by revealing a general tendency toward negativity in fake news. Their results support our research and offer evidence for the persistent predominance of negative emotions elicited by fake news. Dey et al. ( 2018 ) also found that trustworthy tweets, on the other hand, tended to be neutral or positive in sentiment, highlighting the significance of sentiment polarity in identifying trustworthy information.

Expanding upon this sentiment-focused perspective, Cui et al. ( 2019 ) observed a significant disparity in the sentiment polarity of comments on fake news as opposed to real news. Their research emphasized the clear emotional undertones in user reactions to false material, highlighting the importance of elicited emotions in the context of fake news. Similarly, Dai et al. ( 2020 ) analyzed false health news and revealed a tendency for social media replies to real news to be marked by a more upbeat tone. These comparative findings highlight how elicited emotions play a complex role in influencing how people engage with real and fake news.

Our analysis revealed that the emotions conveyed in fake tweets during the COVID-19 pandemic are in line with the more general trends found in other studies on fake news. However, our research extends beyond that of current studies by offering detailed insights into the precise distribution and strength of emotions elicited by fake tweets. This detailed research closes a significant gap in the body of literature by adding a fresh perspective on our knowledge of emotional dynamics in the context of disseminating false information. Our research contributes significantly to the current discussion on fake news identification by highlighting these comparative aspects and illuminating both recurring themes and previously undiscovered aspects of emotional data in the age of misleading information.

The present analysis was performed with a COVID-19 Twitter dataset, which does not cover the whole period of the pandemic. A complementary study on a dataset that covers a wider time interval might yield more generalizable findings, while our study represents a new effort in the field. In this research, the elicited emotions of fake and real news were compared, and the emotion with the highest score was assigned to each tweet, while an alternative method could be to compare the emotion score intervals for fake and real news. The performance of detection models could be further improved by using pretrained emotion models and adding additional emotion features to the models. In a future study, our hypothesis that “fake news and real news are different in terms of elicited emotions, and fake news elicits more negative emotions” could be examined in an experimental field study. Additionally, the premises and suppositions underlying this study could be tested in emergency scenarios beyond the COVID-19 context to enhance the breadth of crisis readiness.

The field of fake news research is interdisciplinary, drawing on the expertise of scholars from various domains who can contribute significantly by formulating pertinent research questions. Psychologists and social scientists have the opportunity to delve into the motivations and objectives behind the creators of fake news. Scholars in management can offer strategic insights for organizations to deploy in countering the spread of fake news. Legislators are in a position to draft laws that effectively stem the flow of fake news across social media channels. In addition, the combined efforts of researchers from other academic backgrounds can make substantial additions to the existing literature on fake news.

The aim of this research was to propose novel attributes for current fake news identification techniques and to explore the emotional and sentiment distinctions between fake news and real news. This study was designed to tackle the following research questions:

1. How do the sentiments associated with real news and fake news differ?
2. How do the emotions elicited by fake news differ from those elicited by real news?
3. What particular elicited emotions are most prevalent in fake news?
4. How could these elicited emotions be used to recognize fake news on social media?

To answer these research questions, we thoroughly examined tweets related to COVID-19. We employed a comprehensive strategy, integrating lexicons such as Vader, TextBlob, and SentiWordNet together with machine learning models, including random forest, naïve Bayes, and SVM, as well as a deep learning model, BERT. We first performed sentiment analysis using the lexicons. Fake news elicited more negative sentiments, supporting the idea that disseminators use extreme negativity to attract attention; real news elicited more positive sentiments, as expected from trustworthy news channels. Fake news showed a greater prevalence of negative emotions, including fear, disgust, and anger, while real news showed a greater frequency of positive emotions, such as anticipation, joy, and surprise. The intensity of these emotions further differentiated fake and real news, with fear being the most dominant emotion in both categories. We applied machine learning models (random forest, naïve Bayes, SVM) and a deep learning model (BERT) to detect fake news using sentiment and emotion features. The models demonstrated improved accuracy when incorporating emotion features. Anticipation, trust, and fear emerged as significant differentiators between fake and real news, according to the random forest feature importance analysis.

The findings of this research could lead to reliable resources for communicators, managers, marketers, psychologists, sociologists, and crisis and social media researchers to further explain social media behavior and contribute to the existing fake news detection approaches. The main contribution of this study is the introduction of emotions as a role-playing feature in fake news detection and the explanation of how specific elicited emotions differ between fake and real news. The elicited emotions extracted from social media during a crisis such as the COVID-19 pandemic could not only be an important variable for detecting fake news but also provide a general overview of the dominant emotions among individuals and the mental health of society during such a crisis. Investigating and extracting further features of fake news has the potential to improve the identification of fake news and may allow for the implementation of preventive measures. Furthermore, the suggested methodology could be applied to detecting fake news in fields such as politics, sports, and advertising. We expect to observe a similar impact of emotions on other topics as well.

Data availability

The datasets analyzed during the current study are available in the Zenodo repository: https://doi.org/10.5281/zenodo.10951346 .

Agarwal S, Farid H, El-Gaaly T, Lim S-N (2020) Detecting Deep-Fake Videos from Appearance and Behavior. 2020 IEEE International Workshop on Information Forensics and Security (WIFS), 1–6. https://doi.org/10.1109/WIFS49906.2020.9360904

Ainapure BS, Pise RN, Reddy P, Appasani B, Srinivasulu A, Khan MS, Bizon N (2023) Sentiment analysis of COVID-19 tweets using deep learning and lexicon-based approaches. Sustainability 15(3):2573. https://doi.org/10.3390/su15032573


Ajao O, Bhowmik D, Zargari S (2019) Sentiment Aware Fake News Detection on Online Social Networks. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2507–2511. https://doi.org/10.1109/ICASSP.2019.8683170

Al-Rawi A, Groshek J, Zhang L (2019) What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter. Online Inf Rev 43(1):53–71. https://doi.org/10.1108/OIR-02-2018-0065

Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect 31(2):211–236. https://doi.org/10.1257/jep.31.2.211

Amer E, Kwak K-S, El-Sappagh S (2022) Context-based fake news detection model relying on deep learning models. Electronics (Basel) 11(8):1255. https://doi.org/10.3390/electronics11081255

Apuke OD, Omar B (2020) User motivation in fake news sharing during the COVID-19 pandemic: an application of the uses and gratification theory. Online Inf Rev 45(1):220–239. https://doi.org/10.1108/OIR-03-2020-0116

Baccarella CV, Wagner TF, Kietzmann JH, McCarthy IP (2018) Social media? It’s serious! Understanding the dark side of social media. Eur Manag J 36(4):431–438. https://doi.org/10.1016/j.emj.2018.07.002

Baccarella CV, Wagner TF, Kietzmann JH, McCarthy IP (2020) Averting the rise of the dark side of social media: the role of sensitization and regulation. Eur Manag J 38(1):3–6. https://doi.org/10.1016/j.emj.2019.12.011

Baumeister RF, Bratslavsky E, Finkenauer C, Vohs KD (2001) Bad is stronger than good. Rev Gen Psychol 5(4):323–370. https://doi.org/10.1037/1089-2680.5.4.323

Berthon PR, Pitt LF (2018) Brands, truthiness and post-fact: managing brands in a post-rational world. J Macromark 38(2):218–227. https://doi.org/10.1177/0276146718755869

Breiman L (2001) Random forests. Mach Learn 45:5–32. https://doi.org/10.1023/A:1010933404324

Carlson M (2020) Fake news as an informational moral panic: the symbolic deviancy of social media during the 2016 US presidential election. Inf Commun Soc 23(3):374–388. https://doi.org/10.1080/1369118X.2018.1505934

Chua AYK, Banerjee S (2018) Intentions to trust and share online health rumors: an experiment with medical professionals. Comput Hum Behav 87:1–9. https://doi.org/10.1016/j.chb.2018.05.021

Cinelli M, De Francisci Morales G, Galeazzi A, Quattrociocchi W, Starnini M (2021) The echo chamber effect on social media. Proc Natl Acad Sci USA 118(9). https://doi.org/10.1073/pnas.2023301118

Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297. https://doi.org/10.1007/BF00994018

Cui L, Wang S, Lee D (2019) SAME: sentiment-aware multi-modal embedding for detecting fake news. 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 41–48. https://doi.org/10.1145/3341161.3342894

Dai E, Sun Y, Wang S (2020) Ginger cannot cure cancer: battling fake health news with a comprehensive data repository. In: Proceedings of the 14th International AAAI Conference on Web and Social Media (ICWSM 2020), pp 853–862. AAAI Press

de Regt A, Montecchi M, Lord Ferguson S (2020) A false image of health: how fake news and pseudo-facts spread in the health and beauty industry. J Product Brand Manag 29(2):168–179. https://doi.org/10.1108/JPBM-12-2018-2180

Devlin J, Chang M-W, Lee K, Toutanova K (2018) Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint. https://doi.org/10.48550/arXiv.1810.04805

Dey A, Rafi RZ, Parash SH, Arko SK, Chakrabarty A (2018) Fake news pattern recognition using linguistic analysis. Paper presented at the 2018 joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan. pp. 305–309

Er MF, Yılmaz YB (2023) Which emotions of social media users lead to dissemination of fake news: sentiment analysis towards Covid-19 vaccine. J Adv Res Nat Appl Sci 9(1):107–126. https://doi.org/10.28979/jarnas.1087772

Esuli A, Sebastiani F (2006) Sentiwordnet: A publicly available lexical resource for opinion mining. Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Farhoudinia B (2023) Analyzing effects of emotions on fake news detection: a COVID-19 case study. PhD Thesis, Sabanci Graduate Business School, Sabanci University

Farhoudinia B, Ozturkcan S, Kasap N (2023) Fake news in business and management literature: a systematic review of definitions, theories, methods and implications. Aslib J Inf Manag https://doi.org/10.1108/AJIM-09-2022-0418

Faustini PHA, Covões TF (2020) Fake news detection in multiple platforms and languages. Expert Syst Appl 158:113503. https://doi.org/10.1016/j.eswa.2020.113503

Frenkel S, Davey A, Zhong R (2020) Surge of virus misinformation stumps Facebook and Twitter. N Y Times (Online) https://www.nytimes.com/2020/03/08/technology/coronavirus-misinformation-social-media.html

Giglietto F, Iannelli L, Valeriani A, Rossi L (2019) ‘Fake news’ is the invention of a liar: how false information circulates within the hybrid news system. Curr Sociol 67(4):625–642. https://doi.org/10.1177/0011392119837536

Hamed SK, Ab Aziz MJ, Yaakub MR (2023) Fake news detection model on social media by leveraging sentiment analysis of news content and emotion analysis of users’ comments. Sensors (Basel, Switzerland) 23(4):1748. https://doi.org/10.3390/s23041748


Hutto C, Gilbert E (2014) VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media, 8(1), 216–225. https://doi.org/10.1609/icwsm.v8i1.14550

Iwendi C, Mohan S, Khan S, Ibeke E, Ahmadian A, Ciano T (2022) Covid-19 fake news sentiment analysis. Comput Electr Eng 101:107967. https://doi.org/10.1016/j.compeleceng.2022.107967


Jarrahi A, Safari L (2023) Evaluating the effectiveness of publishers’ features in fake news detection on social media. Multimed Tools Appl 82(2):2913–2939. https://doi.org/10.1007/s11042-022-12668-8


Kahneman D (2011) Thinking, fast and slow, 1st edn. Farrar, Straus and Giroux

Kaliyar RK, Goswami A, Narang P (2021) FakeBERT: fake news detection in social media with a BERT-based deep learning approach. Multimed Tools Appl 80(8):11765–11788. https://doi.org/10.1007/s11042-020-10183-2

Kim A, Dennis AR (2019) Says who? The effects of presentation format and source rating on fake news in social media. MIS Q 43(3):1025–1039. https://doi.org/10.25300/MISQ/2019/15188

Kumar A, Bezawada R, Rishika R, Janakiraman R, Kannan PK (2016) From social to sale: the effects of firm-generated content in social media on customer behavior. J Mark 80(1):7–25. https://doi.org/10.1509/jm.14.0249

Lewicka M, Czapinski J, Peeters G (1992) Positive-negative asymmetry or when the heart needs a reason. Eur J Soc Psychol 22(5):425–434. https://doi.org/10.1002/ejsp.2420220502

Loria S (2018) TextBlob documentation. Release 0.15.2. https://readthedocs.org/projects/textblob/downloads/pdf/latest/

Meel P, Vishwakarma DK (2020) Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl 153:112986. https://doi.org/10.1016/j.eswa.2019.112986

Mercer J (2010) Emotional beliefs. Int Organ 64(1):1–31. https://www.jstor.org/stable/40607979

Mohammad SM, Turney PD (2013) Crowdsourcing a word–emotion association lexicon. Comput Intell 29(3):436–465. https://doi.org/10.1111/j.1467-8640.2012.00460.x


Moravec PL, Kim A, Dennis AR (2020) Appealing to sense and sensibility: system 1 and system 2 interventions for fake news on social media. Inf Syst Res 31(3):987–1006. https://doi.org/10.1287/isre.2020.0927

Mourad A, Srour A, Harmanai H, Jenainati C, Arafeh M (2020) Critical impact of social networks infodemic on defeating coronavirus COVID-19 pandemic: Twitter-based study and research directions. IEEE Trans Netw Serv Manag 17(4):2145–2155. https://doi.org/10.1109/TNSM.2020.3031034

Ongsulee P (2017) Artificial intelligence, machine learning and deep learning. Paper presented at the 2017 15th International Conference on ICT and Knowledge Engineering (ICT&KE)

Ozbay FA, Alatas B (2020) Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A 540:123174. https://doi.org/10.1016/j.physa.2019.123174

Patwa P, Sharma S, Pykl S, Guptha V, Kumari G, Akhtar MS, Ekbal A, Das A, Chakraborty T (2021) Fighting an Infodemic: COVID-19 fake news dataset. In: Combating online hostile posts in regional languages during emergency situation. Cham, Springer International Publishing

Păvăloaia V-D, Teodor E-M, Fotache D, Danileţ M (2019) Opinion mining on social media data: sentiment analysis of user preferences. Sustainability 11(16):4459. https://doi.org/10.3390/su11164459

Pawar KK, Shrishrimal PP, Deshmukh RR (2015) Twitter sentiment analysis: a review. Int J Sci Eng Res 6(4):957–964


Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG (2020) Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci 31(7):770–780. https://doi.org/10.1177/0956797620939054

Pennycook G, Rand DG (2020) Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Personal 88(2):185–200. https://doi.org/10.1111/jopy.12476

Peterson M (2019) A high-speed world with fake news: brand managers take warning. J Product Brand Manag 29(2):234–245. https://doi.org/10.1108/JPBM-12-2018-2163

Plutchik R (1980) A general psychoevolutionary theory of emotion. In: Plutchik R, Kellerman H (eds) Theories of emotion (3–33): Elsevier. https://doi.org/10.1016/B978-0-12-558701-3.50007-7

Rajamma RK, Paswan A, Spears N (2019) User-generated content (UGC) misclassification and its effects. J Consum Mark 37(2):125–138. https://doi.org/10.1108/JCM-08-2018-2819

Raza S, Ding C (2022) Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal 13(4):335–362. https://doi.org/10.1007/s41060-021-00302-z

Salakhutdinov R, Tenenbaum JB, Torralba A (2012) Learning with hierarchical-deep models. IEEE Trans Pattern Anal Mach Intell 35(8):1958–1971. https://doi.org/10.1109/TPAMI.2012.269

Silverman C (2016) This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook. BuzzFeed News 16. https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

Suter V, Shahrezaye M, Meckel M (2022) COVID-19 Induced misinformation on YouTube: an analysis of user commentary. Front Political Sci 4:849763. https://doi.org/10.3389/fpos.2022.849763

Talwar S, Dhir A, Kaur P, Zafar N, Alrasheedy M (2019) Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. J Retail Consum Serv 51:72–82. https://doi.org/10.1016/j.jretconser.2019.05.026

Vallat R (2018) Pingouin: statistics in Python. J Open Source Softw 3(31):1026. https://doi.org/10.21105/joss.01026


Vasist PN, Sebastian M (2022) Tackling the infodemic during a pandemic: A comparative study on algorithms to deal with thematically heterogeneous fake news. Int J Inf Manag Data Insights 2(2):100133. https://doi.org/10.1016/j.jjimei.2022.100133

Vinodhini G, Chandrasekaran R (2012) Sentiment analysis and opinion mining: a survey. Int J Adv Res Comput Sci Softw Eng 2(6):282–292

Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151. https://doi.org/10.1126/science.aap9559


Wang Y, McKee M, Torbica A, Stuckler D (2019) Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med 240:112552. https://doi.org/10.1016/j.socscimed.2019.112552

Wankhade M, Rao ACS, Kulkarni C (2022) A survey on sentiment analysis methods, applications, and challenges. Artif Intell Rev 55(7):5731–5780. https://doi.org/10.1007/s10462-022-10144-1

Whiting A, Williams D (2013) Why people use social media: a uses and gratifications approach. Qual Mark Res 16(4):362–369. https://doi.org/10.1108/QMR-06-2013-0041

Zhang H (2004) The optimality of naive Bayes. In: Proceedings of the 17th International Florida Artificial Intelligence Research Society Conference (FLAIRS 2004). AAAI Press

Zhou X, Zafarani R (2019) Network-based fake news detection: A pattern-driven approach. ACM SIGKDD Explor Newsl 21(2):48–60. https://doi.org/10.1145/3373464.3373473

Zhou X, Zafarani R, Shu K, Liu H (2019) Fake news: Fundamental theories, detection strategies and challenges. Paper presented at the Proceedings of the twelfth ACM international conference on web search and data mining. https://doi.org/10.1145/3289600.3291382


Open access funding provided by Linnaeus University.

Author information

Authors and affiliations.

Sabancı Business School, Sabancı University, Istanbul, Turkey

Bahareh Farhoudinia, Selcen Ozturkcan & Nihat Kasap

School of Business and Economics, Linnaeus University, Växjö, Sweden

Selcen Ozturkcan


Contributions

Bahareh Farhoudinia (first author) conducted the research, retrieved the open access data collected by other researchers, conducted the analysis, and drafted the manuscript as part of her PhD thesis successfully completed at Sabancı University in the year 2023. Selcen Ozturkcan (second author and PhD co-advisor) provided extensive guidance throughout the research process, co-wrote sections of the manuscript, and offered critical feedback on the manuscript. Nihat Kasap (third author and PhD main advisor) oversaw the overall project and provided valuable feedback on the manuscript.

Corresponding author

Correspondence to Selcen Ozturkcan .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Informed consent was not required as the study did not involve a design that requires consent.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Farhoudinia, B., Ozturkcan, S. & Kasap, N. Emotions unveiled: detecting COVID-19 fake news on social media. Humanit Soc Sci Commun 11 , 640 (2024). https://doi.org/10.1057/s41599-024-03083-5

Download citation

Received : 02 June 2023

Accepted : 22 April 2024

Published : 18 May 2024

DOI : https://doi.org/10.1057/s41599-024-03083-5




The Psychology of Fake News on Social Media: Who falls for it, who shares it, why, and can we help users detect it?


About this Research Topic

The proliferation of fake news on social media has become a major societal concern which has been shown to impact U.S. elections, referenda, and most recently effective public health messaging for the COVID-19 pandemic. While some advances on the use of automated systems to detect and highlight fake news have ...

Keywords : Fake news, misinformation, social media, election, referendum, politics, democracy, communication, Facebook, Twitter, psychology, individual differences, intervention



What can be done to reduce the spread of fake news? MIT Sloan research finds that shifting people’s attention toward accuracy can decrease online misinformation sharing

MIT Sloan Office of Communications

Mar 17, 2021

Findings have implications for how social media companies stem the flow of false news 

Cambridge, Mass., March 17, 2021—Simple interventions to reduce the spread of misinformation can shift people’s attention toward accuracy and help them become more discerning about the veracity of the information they share on social media, according to new research led by David Rand, Erwin H. Schell Professor and an Associate Professor of Management Science and Brain and Cognitive Sciences at the MIT Sloan School of Management.

Rand conducted the research with his colleagues, Gordon Pennycook of the Hill/Levene Schools of Business at the University of Regina, Ziv Epstein, a doctoral student at the MIT Media Lab, Mohsen Mosleh of the University of Exeter Business School, Antonio Arechar, a research associate at MIT Sloan, and Dean Eckles, the Mitsubishi Career Development Professor and an Associate Professor of Marketing at MIT Sloan. The team’s findings are published in a forthcoming issue of the journal Nature. 

The study arrives at a time when the sharing of misinformation on social media—including both patently false political “fake news” and misleading hyperpartisan content—has become a key focus of public debate around the world. The topic gained prominence in 2016 in the aftermath of the U.S. presidential election and the referendum on Britain’s exit from the European Union, known as Brexit, during which fabricated stories, presented as legitimate news, received wide distribution on social media. The proliferation of false news during the COVID-19 pandemic, and this January’s violent insurrection at the nation’s Capitol, illustrate that disinformation on platforms including Facebook and Twitter remains a pervasive problem.

The study comprises a series of surveys and field experiments. In the first survey, which involved roughly 850 social media users, the researchers found a disconnect between how people judge a news article’s accuracy and their decision of whether or not to share it. Even though people rated true headlines as much more accurate than false headlines, headline veracity had little impact on sharing. Although this may seem to indicate that people share inaccurate content because, for example, they care more about furthering their political agenda than they care about truth, Prof. Rand and his team propose an alternative explanation: Most people do not want to spread misinformation, but the social media context focuses their attention on factors other than truth and accuracy. Indeed, when directly asked, most participants said it was important to only share news that is accurate – even when they had just indicated they would share numerous false headlines only minutes before.

“The problem is not so much that people don’t care about the truth or want to purposely spread fake news; it’s that social media makes us share things that we would think better of if we stopped to think,” says Prof. Rand. “It’s understandable: scrolling through Twitter and Facebook is distracting. You’re moving at top speed, and reading the news while also being bombarded with pictures of cute babies and funny cat videos. You forget to think about what’s true or not. When it comes to retweeting a headline—even one you would realize was inaccurate if you thought about it—you fail to carefully consider its truthfulness because your attention is elsewhere.”

Subsequent survey experiments with thousands of Americans found that subtly prompting people to think about accuracy increases the quality of the news they share. In fact, when participants had to consider accuracy before making their decisions, the sharing of misinformation was cut in half.

Finally, the team conducted a digital field experiment involving over 5,000 Twitter users who had previously shared news from websites known for publishing misleading content. The researchers used bot accounts to send the users a message asking them to evaluate the accuracy of a random non-political headline – and found that this simple accuracy prompt significantly improved the quality of the news the users subsequently retweeted. “Our message made the idea of accuracy more top-of-mind,” says Prof. Pennycook, who was the co-lead author on the paper with Mosleh and Epstein. “So, when they went back to their newsfeeds, they were more likely to ask themselves if posts they saw were accurate before deciding whether to share them.”

The research team’s findings have implications for how social media companies can stem the flow of misinformation. Platforms could, for instance, implement simple accuracy prompts to shift users’ attention towards the reliability of the content they read before they share it online. “By leveraging people’s existing but latent capabilities for discerning what is true, this approach has the advantage of preserving user autonomy. Therefore, it doesn’t require social media platforms to be the arbiters of truth, but instead enables the users of those platforms,” says Epstein. The team has been working with researchers at Google to develop applications based on this idea, and hope that social media companies like Facebook and Twitter will follow suit.

“Our research shows that people are actually often fairly good at discerning falsehoods from facts, but in the social media context they’re distracted and lack the time and inclination to consider it,” says Prof. Mosleh. “But if the social media platforms reminded users to think about accuracy—maybe when they log on or as they’re scrolling through their feeds—it could be just the subtle prod people need to get in a mindset where they think twice before they retweet,” concludes Prof. Rand.

The MIT Sloan School of Management

The MIT Sloan School of Management is where smart, independent leaders come together to solve problems, create new organizations, and improve the world. Learn more at mitsloan.mit.edu .


USC study reveals the key reason why fake news spreads on social media

The USC-led study of more than 2,400 Facebook users suggests that platforms — more than individual users — have a larger role to play in stopping the spread of misinformation online.

USC researchers may have found the biggest influencer in the spread of fake news: social platforms’ structure of rewarding users for habitually sharing information.

The team’s findings, published Monday by Proceedings of the National Academy of Sciences , upend popular misconceptions that misinformation spreads because users lack the critical thinking skills necessary for discerning truth from falsehood or because their strong political beliefs skew their judgment.

Just 15% of the most habitual news sharers in the research were responsible for spreading about 30% to 40% of the fake news.

The research team from the USC Marshall School of Business and the USC Dornsife College of Letters, Arts and Sciences wondered: What motivates these users? As it turns out, much like any video game, social media has a rewards system that encourages users to stay on their accounts and keep posting and sharing. Users who post and share frequently, especially sensational, eye-catching information, are likely to attract attention.

“Due to the reward-based learning systems on social media, users form habits of sharing information that gets recognition from others,” the researchers wrote. “Once habits form, information sharing is automatically activated by cues on the platform without users considering critical response outcomes, such as spreading misinformation.”

Posting, sharing and engaging with others on social media can, therefore, become a habit.


“Our findings show that misinformation isn’t spread through a deficit of users. It’s really a function of the structure of the social media sites themselves,” said Wendy Wood, an expert on habits and USC emerita Provost Professor of psychology and business.

“The habits of social media users are a bigger driver of misinformation spread than individual attributes. We know from prior research that some people don’t process information critically, and others form opinions based on political biases, which also affects their ability to recognize false stories online,” said Gizem Ceylan, who led the study during her doctorate at USC Marshall and is now a postdoctoral researcher at the Yale School of Management. “However, we show that the reward structure of social media platforms plays a bigger role when it comes to misinformation spread.”

In a novel approach, Ceylan and her co-authors sought to understand how the reward structure of social media sites drives users to develop habits of posting misinformation on social media.

Why fake news spreads: behind the social network

Overall, the study involved 2,476 active Facebook users ranging in age from 18 to 89 who volunteered in response to online advertising to participate. They were compensated to complete a “decision-making” survey approximately seven minutes long.

Surprisingly, the researchers found that users’ social media habits doubled and, in some cases, tripled the amount of fake news they shared. Their habits were more influential in sharing fake news than other factors, including political beliefs and lack of critical reasoning.

Frequent, habitual users forwarded six times more fake news than occasional or new users.

“This type of behavior has been rewarded in the past by algorithms that prioritize engagement when selecting which posts users see in their news feed, and by the structure and design of the sites themselves,” said second author Ian A. Anderson, a behavioral scientist and doctoral candidate at USC Dornsife. “Understanding the dynamics behind misinformation spread is important given its political, health and social consequences.”

Experimenting with different scenarios to see why fake news spreads

In the first experiment, the researchers found that habitual users of social media share both true and fake news.

In another experiment, the researchers found that habitual sharing of misinformation is part of a broader pattern of insensitivity to the information being shared. In fact, habitual users shared politically discordant news — news that challenged their political beliefs — as much as concordant news that they endorsed.

Lastly, the team tested whether social media reward structures could be devised to promote sharing of true over false information. They showed that incentives for accuracy rather than popularity (as is currently the case on social media sites) doubled the amount of accurate news that users share on social platforms.

The study’s conclusions:

  • Habitual sharing of misinformation is not inevitable.
  • Users could be incentivized to build sharing habits that make them more sensitive to sharing truthful content.
  • Effectively reducing misinformation would require restructuring the online environments that promote and support its sharing.

These findings suggest that social media platforms can take a more active step than moderating what information is posted and instead pursue structural changes in their reward structure to limit the spread of misinformation.

About the study:  The research was supported and funded by the USC Dornsife College of Letters, Arts and Sciences Department of Psychology, the USC Marshall School of Business and the Yale University School of Management.


Are We Really All Suckers for Fake News?

A new study sheds light on the need for greater media literacy.


The spread of fake news online has been called one of the greatest threats to democracy, with fabricated stories regularly racking up hundreds of thousands of views on social media. Some say that the onslaught of misinformation is fueling political polarization and dysfunction, since these materials are often designed to inflame partisan passions. 

But new research by economists Andrea Prat of Columbia and Charles Angelucci of MIT shows that the vast majority of Americans can reliably discern real news stories from fake ones and that a bigger threat to democracy may be lack of access to reliable news sources. In a series of laboratory experiments involving nearly fifteen thousand participants, Prat and Angelucci found that only a small percentage of adults are routinely fooled into believing that fabricated articles are true. Moreover, they discovered that Republicans and Democrats are only slightly more likely to believe fake news articles that smear the other side — a finding that the authors say belies the notion, widely held among pundits, that Americans no longer share a common view of the facts.

“The debate on political news has centered around ‘the death of truth’ and the existence of ‘parallel narratives,’ sometimes leading to urgent calls for drastic reforms,” such as new limitations on online speech, the authors write in American Economic Review. “Our work casts doubt on this narrative.”

But the researchers do conclude that a person’s ability to spot fake news is significantly influenced by their demographic characteristics. For example, they find that older, college-educated, and high-earning Americans are up to 18 percent more likely to detect fake news stories than younger, less-educated, and poorer individuals.

And this finding, Prat says, indicates that new efforts to improve media literacy and political engagement across all segments of the US population are necessary. “The key message is that there is brutal information inequality in US society,” he says. “Some people are informed; others are not. And this doesn’t correspond to the country’s ideological divide. Rather, it runs along socioeconomic lines.” Prat says that subsidizing access to serious journalism, such as by providing people vouchers, could help. “As a society, we’re devoting enormous resources to fighting misinformation and fake news. We should also be devoting resources to making sure that everybody gets access to real news.”

This article appears in the Fall 2024 print edition of Columbia Magazine with the title "How susceptible are Americans to fake news?"


June 21, 2018

Biases Make People Vulnerable to Misinformation Spread by Social Media

Researchers have developed tools to study the cognitive, societal and algorithmic biases that help fake news spread

By Giovanni Luca Ciampaglia, Filippo Menczer & The Conversation US


The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

Social media are among the primary sources of news in the U.S. and across the world. Yet users are exposed to content of questionable accuracy, including conspiracy theories, clickbait, hyperpartisan content, pseudoscience and even fabricated “fake news” reports.

It’s not surprising that there’s so much disinformation published: Spam and online fraud are lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that people and the algorithms behind social media platforms are vulnerable to manipulation.


Our research has identified three types of bias that make the social media ecosystem vulnerable to both intentional and accidental misinformation. That is why our Observatory on Social Media at Indiana University is building tools to help people become aware of these biases and protect themselves from outside influences designed to exploit them.

Bias in the brain

Cognitive biases originate in the way the brain processes the information that every person encounters every day. The brain can deal with only a finite amount of information, and too many incoming stimuli can cause information overload. That in itself has serious implications for the quality of information on social media. We have found that steep competition for users’ limited attention means that some ideas go viral despite their low quality—even when people prefer to share high-quality content.*

To avoid getting overwhelmed, the brain uses a number of tricks. These methods are usually effective, but may also become biases when applied in the wrong contexts.

One cognitive shortcut happens when a person is deciding whether to share a story that appears on their social media feed. People are very affected by the emotional connotations of a headline, even though that’s not a good indicator of an article’s accuracy. Much more important is who wrote the piece.

To counter this bias, and help people pay more attention to the source of a claim before sharing it, we developed Fakey, a mobile news literacy game (free on Android and iOS) simulating a typical social media news feed, with a mix of news articles from mainstream and low-credibility sources. Players get more points for sharing news from reliable sources and flagging suspicious content for fact-checking. In the process, they learn to recognize signals of source credibility, such as hyperpartisan claims and emotionally charged headlines.

Bias in society

Another source of bias comes from society. When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.

In fact, in our research we have found that it is possible to determine the political leanings of a Twitter user by simply looking at the partisan preferences of their friends. Our analysis of the structure of these partisan communication networks found social networks are particularly efficient at disseminating information – accurate or not – when they are closely tied together and disconnected from other parts of society.

The tendency to evaluate information more favorably if it comes from within their own social circles creates “echo chambers” that are ripe for manipulation, either consciously or unintentionally. This helps explain why so many online conversations devolve into “us versus them” confrontations.

To study how the structure of online social networks makes users vulnerable to disinformation, we built Hoaxy, a system that tracks and visualizes the spread of content from low-credibility sources, and how it competes with fact-checking content. Our analysis of the data collected by Hoaxy during the 2016 U.S. presidential elections shows that Twitter accounts that shared misinformation were almost completely cut off from the corrections made by the fact-checkers.

When we drilled down on the misinformation-spreading accounts, we found a very dense core group of accounts retweeting each other almost exclusively – including several bots. The only times that fact-checking organizations were ever quoted or mentioned by the users in the misinformed group were when questioning their legitimacy or claiming the opposite of what they wrote.
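The segregation pattern described here can be quantified with standard network tooling. Below is a minimal sketch, not code from Hoaxy itself, that builds a retweet graph with the networkx library and measures how rarely retweets cross between misinformation sharers and fact-check sharers; the edge list and group labels are hypothetical stand-ins for the kind of data a system like Hoaxy collects.

```python
import networkx as nx

# Hypothetical retweet edges: (retweeter, original poster)
retweets = [
    ("a", "b"), ("b", "c"), ("c", "a"), ("a", "c"),  # dense misinformation core
    ("d", "e"), ("e", "f"),                          # fact-checking cluster
    ("a", "e"),                                      # a rare cross-group retweet
]
# Hypothetical labels derived from the sources each account shares
group = {"a": "misinfo", "b": "misinfo", "c": "misinfo",
         "d": "factcheck", "e": "factcheck", "f": "factcheck"}

G = nx.DiGraph()
G.add_edges_from(retweets)

# Share of retweet edges that cross between groups: a value near zero
# reproduces the "almost completely cut off" pattern described above.
cross = sum(1 for u, v in G.edges() if group[u] != group[v])
print(f"cross-group share of retweets: {cross / G.number_of_edges():.2f}")

# Density of the misinformation core (accounts retweeting each other almost
# exclusively), where 1.0 would mean every possible directed edge exists.
core = G.subgraph(n for n, g in group.items() if g == "misinfo")
print(f"misinformation core density: {nx.density(core):.2f}")
```

On real data the same two quantities, computed over millions of edges, would distinguish an insulated echo chamber from a community that engages with corrections.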

Bias in the machine

The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them. These personalization technologies are designed to select only the most engaging and relevant content for each individual user. But in doing so, they may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.

For instance, the detailed advertising tools built into many social media platforms let disinformation campaigners exploit confirmation bias by tailoring messages to people who are already inclined to believe them.

Also, if a user often clicks on Facebook links from a particular news source, Facebook will tend to show that person more of that site’s content. This so-called “filter bubble” effect may isolate people from diverse perspectives, strengthening confirmation bias.

Our own research shows that social media platforms expose users to a less diverse set of sources than do non-social media sites like Wikipedia. Because this is at the level of a whole platform, not of a single user, we call this the homogeneity bias.

Another important ingredient of social media is information that is trending on the platform, according to what is getting the most clicks. We call this popularity bias, because we have found that an algorithm designed to promote popular content may negatively affect the overall quality of information on the platform. This also feeds into existing cognitive bias, reinforcing what appears to be popular irrespective of its quality.
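As a rough illustration of why a popularity-ranked feed can depress quality, consider the following toy simulation (a constructed example, not the Observatory's model): items have fixed intrinsic quality, the feed shows items in proportion to either current popularity or quality, and users reshare with only a weak preference for quality.

```python
import random

random.seed(0)
N_ITEMS, STEPS = 50, 5000
quality = [random.random() for _ in range(N_ITEMS)]  # intrinsic quality in [0, 1]

def mean_shared_quality(rank_by_popularity):
    shares = [1] * N_ITEMS
    for _ in range(STEPS):
        # Exposure follows popularity (rich-get-richer) or intrinsic quality.
        weights = shares if rank_by_popularity else quality
        item = random.choices(range(N_ITEMS), weights=weights)[0]
        # Users prefer quality, but only weakly: attention is limited.
        if random.random() < 0.2 + 0.3 * quality[item]:
            shares[item] += 1
    return sum(q * s for q, s in zip(quality, shares)) / sum(shares)

def average_over_runs(rank_by_popularity, runs=20):
    return sum(mean_shared_quality(rank_by_popularity) for _ in range(runs)) / runs

print(f"quality-ranked feed:    {average_over_runs(False):.2f}")
print(f"popularity-ranked feed: {average_over_runs(True):.2f}")
```

In these runs the popularity-ranked feed tends to circulate lower-quality content than the quality-ranked baseline, because items that get lucky early crowd out better ones, mirroring the finding described above.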

All these algorithmic biases can be manipulated by social bots, computer programs that interact with humans through social media accounts. Most social bots, like Twitter’s Big Ben, are harmless. However, some conceal their real nature and are used for malicious intents, such as boosting disinformation or falsely creating the appearance of a grassroots movement, also called “astroturfing.” We found evidence of this type of manipulation in the run-up to the 2010 U.S. midterm election.

To study these manipulation strategies, we developed a tool to detect social bots called Botometer. Botometer uses machine learning to detect bot accounts, by inspecting thousands of different features of Twitter accounts, like the times of an account’s posts, how often it tweets, and the accounts it follows and retweets. It is not perfect, but it has revealed that as many as 15 percent of Twitter accounts show signs of being bots.
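Botometer’s actual models and feature set are not reproduced here, but the general approach it describes, supervised classification over account-level features, can be sketched with scikit-learn. Everything below (the features, training rows and labels) is a hypothetical stand-in:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [tweets per day, follower/following ratio,
#  median seconds between posts, fraction of tweets that are retweets]
X_train = np.array([
    [300.0, 0.01,   12.0, 0.95],   # bot-like: high volume, metronomic timing
    [250.0, 0.05,   20.0, 0.90],
    [  4.0, 1.20, 9000.0, 0.30],   # human-like: sporadic, mixed activity
    [  7.0, 0.80, 4000.0, 0.25],
])
y_train = np.array([1, 1, 0, 0])   # 1 = labeled bot, 0 = labeled human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score an unseen account; predict_proba returns [P(human), P(bot)].
account = np.array([[180.0, 0.02, 30.0, 0.85]])
print(f"bot score: {clf.predict_proba(account)[0, 1]:.2f}")
```

A production system such as Botometer trains on many thousands of labeled accounts and features; the point of the sketch is only that the output is a continuous bot score rather than a hard yes-or-no label.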

Using Botometer in conjunction with Hoaxy, we analyzed the core of the misinformation network during the 2016 U.S. presidential campaign. We found many bots exploiting the cognitive, confirmation and popularity biases of their victims, as well as Twitter’s algorithmic biases.

These bots are able to construct filter bubbles around vulnerable users, feeding them false claims and misinformation. First, they can attract the attention of human users who support a particular candidate by tweeting that candidate’s hashtags or by mentioning and retweeting the person. Then the bots can amplify false claims smearing opponents by retweeting articles from low-credibility sources that match certain keywords. This activity also makes the algorithm highlight for other users false stories that are being shared widely.

Understanding complex vulnerabilities

Even as our research, and others’, shows how individuals, institutions and even entire societies can be manipulated on social media, there are many questions left to answer. It’s especially important to discover how these different biases interact with each other, potentially creating more complex vulnerabilities.

Tools like ours offer internet users more information about disinformation, and therefore some degree of protection from its harms. The solutions will not likely be only technological, though there will probably be some technical aspects to them. But they must take into account the cognitive and social aspects of the problem.

*Editor’s note: This article was updated on Jan. 10, 2019, to remove a link to a study that has been retracted. The text of the article is still accurate, and remains unchanged.

This article was originally published on The Conversation. Read the original article.

New Research

New Study Finds Fake News Spreads Faster and Deeper Than Verified Stories on Twitter

Looking at 126,000 stories sent by ~3 million people, researchers found that humans, not bots, were primarily responsible for the spread of disinformation

Jason Daley

Correspondent


It’s comforting to imagine that when faced with outright falsehoods, readers would recognize "fake news" for what it is and stop it in its tracks. Indeed, some have argued that the only reason fake news stories have penetrated the national conversation is because bots and nefarious outside actors have tried to push lies on a virtuous public. But reporting on a new study, Robinson Meyer at The Atlantic writes that data science contradicts that idea. In fact, it seems we like fake news, seek it out and spread it much more quickly than the truth.

To investigate how fake news spreads, MIT data scientist Soroush Vosoughi and his colleagues collected 12 years of data from Twitter. They then looked at tweets that had been investigated and debunked by fact-checking websites. Using bot-detection software, they were able to exclude any traffic created by bots from their results. As Katie Langin at Science reports, that left them with a set of 126,000 “fake news” stories shared on Twitter 4.5 million times by some 3 million people. They looked at how quickly those stories spread versus tweets that were verified as true. What they found was that fake stories reached more people and propagated faster through the Twittersphere than real stories.

“It seems to be pretty clear [from our study] that false information outperforms true information,” Vosoughi tells Meyer. “And that is not just because of bots. It might have something to do with human nature.” The research appears in the journal Science.

Based on the study's findings, it appears that people are more willing to share fake news than accurate news. A false story was 70 percent more likely to earn a retweet than verified news, Meyer reports. While fake news was found in every category, from business to sports and science, false political stories, not surprisingly, were the most likely to be retweeted.

So why are people seemingly drawn to these false tweets? The study doesn’t address that directly, but the researchers do hypothesize that the novelty of fake news makes it more appealing to share. Brian Resnick at Vox reports that studies have shown that people are more likely to believe headlines or stories that they’ve read or heard many times before, but are less likely to share them. People are more likely to share novel stories on social media that are emotionally or morally charged, even if they are not verified.

It’s that urge that fake news is designed to appeal to. “Fake news is perfect for spreadability: It’s going to be shocking, it’s going to be surprising, and it’s going to be playing on people’s emotions, and that’s a recipe for how to spread misinformation,” Miriam Metzger, a UC Santa Barbara communications researcher not involved in the study, tells Resnick.

So what can be done to combat fake news? According to a press release, the team points out that the platforms themselves are currently complicit in spreading fake news by allowing fabricated stories to appear on features like trending lists and to game their algorithms. The researchers suggest the social media companies should take steps to assess those publishing information on their sites or they risk some sort of government regulation.

Twitter’s cooperation with the study was a good start. In a perspective paper published alongside the study, David Lazer of Northeastern University and Matthew Baum of the Harvard Kennedy School are now calling for more cooperation among social media companies and academics to get a handle on the anything-but-fake problem.


Jason Daley is a Madison, Wisconsin-based writer specializing in natural history, science, travel, and the environment. His work has appeared in Discover, Popular Science, Outside, Men’s Journal, and other magazines.


Expert Commentary

Why do Americans share so much fake news? One big reason is they aren’t paying attention, new research suggests

Americans who share fake news on social media might not lack media literacy skills. Chances are they don't stop to check accuracy, a new study suggests.


by Denise-Marie Ordway, The Journalist's Resource March 17, 2021


Many Americans share fake news on social media because they’re simply not paying attention to whether the content is accurate — not necessarily because they can’t tell real from made-up news, a new study in Nature suggests.

Lack of attention was the driving factor behind 51.2% of misinformation sharing among social media users who participated in an experiment conducted by a group of researchers from MIT, the University of Regina in Canada, University of Exeter Business School in the United Kingdom and Center for Research and Teaching in Economics in Mexico. The results of a second, related experiment indicate a simple intervention — prompting social media users to think about news accuracy before posting and interacting with content — might help limit the spread of online misinformation.

“It seems that the social media context may distract people from accuracy,” study coauthor Gordon Pennycook, an assistant professor of behavioral science at the University of Regina, told The Journalist’s Resource in an email interview. “People are often capable of distinguishing between true and false news content, but fail to even consider whether content is accurate before they share it on social media.”

Pennycook and his colleagues conducted seven behavioral science and survey experiments as part of their study, “Shifting Attention to Accuracy Can Reduce Misinformation Online,” published Wednesday. Some experiments focused on Facebook and others focused on Twitter.

The researchers recruited participants for most of the experiments through Amazon’s Mechanical Turk, an online crowdsourcing marketplace that many academics use. For one experiment, they selected Twitter users who previously had shared links to two well-known, right-leaning websites that professional fact-checkers consistently rate as untrustworthy — Breitbart.com and Infowars.com. The sample size for each experiment varies from 401 U.S. adults for the smallest to 5,379 for the largest.

For several experiments, researchers asked participants to review the basic elements of news stories — headlines, the first sentences and accompanying images. Half the stories represented actual news coverage while the other half contained fabricated information. Half the content was favorable to Republicans and half was favorable to Democrats. Participants were randomly assigned to either judge the accuracy of headlines or determine whether they would share them online.

For the final experiment, researchers sent private messages to 5,379 Twitter users who previously had shared content from Breitbart and Infowars. The messages asked those individuals to rate the veracity of one news headline about a topic unrelated to politics. Researchers then monitored the content those participants shared over the next 24 hours.

The experiments reveal a host of insights on why people share misinformation on social media:

  • One-third — 33.1% — of participants’ decisions to share false headlines were because they didn’t realize they were inaccurate.
  • More than half of participants’ decisions to share false headlines — 51.2% — were because of inattention.
  • Participants reported valuing accuracy over partisanship — a finding that challenges the idea that people share misinformation to benefit their political party or harm the opposing party. Nearly 60% of participants who completed a survey said it’s “extremely important” that the content they share on social media is accurate. About 25% said it’s “very important.”
  • Partisanship was a driving factor behind 15.8% of decisions to share false headlines on social media.
  • Social media platform design could contribute to misinformation sharing. “Our results suggest that the current design of social media platforms — in which users scroll quickly through a mix of serious news and emotionally engaging content, and receive instantaneous quantified social feedback on their sharing — may discourage people from reflecting on accuracy,” the authors write in their paper.
  • Twitter users who previously shared content from Breitbart and Infowars were less likely to share misinformation after receiving private messages asking them for their opinion of the accuracy of a news headline. During the 24 hours after receiving the messages, these Twitter users were 2.8 times more likely to share a link to a mainstream news outlet than a link to a fake news or hyper-partisan website.

Pennycook and his colleagues note that the Twitter intervention — sending private messages — seemed particularly effective among people with a larger number of Twitter followers. Pennycook told JR that’s likely because Twitter accounts with more followers are more influential within their networks.

“The downstream effect of improving the quality of news sharing increases with the influence of the user who is making better choices,” he explained. “It may be that the effect is as effective (if not more so) for users with more followers because the importance of ‘I better make sure this is true’ is literally greater for those with more followers.”

Pennycook said social media platforms could encourage the sharing of higher-quality content — and re-orient people back to truth — by nudging users to pay more attention to accuracy.

Platforms, the authors point out, “could periodically ask users to rate the accuracy of randomly selected headlines, thus reminding them about accuracy in a subtle way that should avoid reactance (and simultaneously generating useful crowd ratings that can help identify misinformation).”

The researchers received funding for their study from the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Omidyar Network, the John Templeton Foundation, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada.



A remedy for the spread of false news?


Stopping the spread of political misinformation on social media may seem like an impossible task. But a new study co-authored by MIT scholars finds that most people who share false news stories online do so unintentionally, and that their sharing habits can be modified through reminders about accuracy.

When such reminders are displayed, it can increase the gap between the percentage of true news stories and false news stories that people share online, as shown in online experiments that the researchers developed.

“Getting people to think about accuracy makes them more discerning in their sharing, regardless of ideology,” says MIT Professor David Rand, co-author of a newly published paper detailing the results. “And it translates into a scalable and easily implementable intervention for social media platforms.”

The study also indicates why people share false information online. Among people who shared a set of false news stories used in the study, around 50 percent did so because of inattention, related to the hasty way people use social media; another 33 percent were mistaken about the accuracy of the news they saw and shared it because they (incorrectly) thought it was true; and about 16 percent knowingly shared false news headlines.

“Our results suggest that the large majority of people across the ideological spectrum want to share only accurate content,” says Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management and director of MIT Sloan’s Human Cooperation Laboratory and Applied Cooperation Team. “It’s not like most people are just saying, ‘I know this is false and I don’t care.’”

The paper, “Shifting attention to accuracy can reduce misinformation online,” is being published today in Nature . In addition to Rand, the co-authors are Gordon Pennycook, an assistant professor at the University of Regina; Ziv Epstein, a PhD candidate at the MIT Media Lab; Mohsen Mosleh, a lecturer at the University of Exeter Business School and a research affiliate at MIT Sloan; Antonio Arechar, a research associate at MIT Sloan; and Dean Eckles, the Mitsubishi Career Development Professor and an associate professor of marketing at MIT Sloan.

Inattention, confusion, or political motivation?

Observers have offered different ideas to explain why people spread false news content online. One interpretation is that people share false material for partisan gain, or to gain attention; another view is that people accidentally share inaccurate stories because they are confused. The authors advance a third possibility: inattention and the simple failure to stop and think about accuracy.

The study consists of multiple experiments, using more than 5,000 survey respondents from the U.S., as well as a field experiment conducted on Twitter. The first survey experiment asked 1,015 participants to rate the accuracy of 36 news stories (based on the headline, first sentence, and an image), and to say if they would share those items on social media. Half of the news items were true and half were false; half were favorable to Democrats and half were favorable to Republicans.

Overall, respondents considered sharing news items that were false but aligned with their views 37.4 percent of the time, even though they considered such headlines to be accurate just 18.2 percent of the time. And yet, at the end of the survey, a large majority of the experiment’s participants said accuracy was very important when it comes to sharing news online.

But if people are being honest about valuing accuracy, why do they share so many false stories? The study’s balance of evidence points to inattention and a knowledge deficit, not deception.

For instance, in a second experiment with 1,507 participants, the researchers examined the effect of shifting users’ attention toward the concept of accuracy. Before deciding whether they would share political news headlines, half of the participants were asked to rate the accuracy of a random nonpolitical headline — thereby emphasizing the concept of accuracy from the outset.

Participants who did not do the initial accuracy rating task said they were likely to share about 33 percent of true stories and 28 percent of false ones. But those who were given an initial accuracy reminder said they would share 34 percent of true stories and 22 percent of the false ones. Two more experiments replicated these results using other headlines and a more representative sample of the U.S. population.
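One way to summarize those percentages is as a sharing discernment gap: the true-story sharing rate minus the false-story sharing rate. A quick check with the figures above shows the accuracy reminder more than doubling that gap:

```python
conditions = {
    "control (no reminder)": {"true": 0.33, "false": 0.28},
    "accuracy reminder":     {"true": 0.34, "false": 0.22},
}
for name, rates in conditions.items():
    gap = rates["true"] - rates["false"]
    print(f"{name:22s} discernment gap: {gap:.2f}")
# control (no reminder)   discernment gap: 0.05
# accuracy reminder       discernment gap: 0.12
```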

To test whether these results could be applied on social media, the researchers conducted a field experiment on Twitter. “We created a set of bot accounts and sent messages to 5,379 Twitter users who regularly shared links to misinformation sites,” explains Mosleh. “Just like in the survey experiments, the message asked whether a random nonpolitical headline was accurate, to get users thinking about the concept of accuracy.” The researchers found that after reading the message, the users shared news from higher-quality news sites, as judged by professional fact-checkers.

How can we know why people share false news?

A final follow-up experiment, with 710 respondents, shed light on the nagging question of why people share false news. Instead of just deciding whether to share news headlines or not, the participants were asked to explicitly assess the accuracy of each story first. After doing that, the percentage of false stories that participants were willing to share dropped from about 30 percent to 15 percent.

Because that figure dropped in half, the researchers could conclude that 50 percent of the previously shared false headlines had been shared because of simple inattention to accuracy. And about a third of the shared false headlines were believed to be true by participants — meaning about 33 percent of the misinformation was spread due to confusion about accuracy.

The remaining 16 percent of the false news items were shared even though the respondents recognized them as being false. This small minority of cases represents the high-profile, “post-truth” type of purposeful sharing of misinformation.
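Those three percentages follow from simple arithmetic on the reported sharing rates; the short sketch below, using the approximate figures given in the text, makes the attribution explicit:

```python
baseline_false_share = 0.30  # willingness to share false headlines, unprompted
rated_false_share    = 0.15  # after explicitly rating accuracy first
believed_true        = 0.33  # shared false items the sharer judged accurate

# Shares that vanish once attention is forced onto accuracy -> inattention.
inattention = (baseline_false_share - rated_false_share) / baseline_false_share
confusion   = believed_true                # shared because mistaken about accuracy
purposeful  = 1 - inattention - confusion  # shared despite knowing it was false

print(f"inattention: {inattention:.0%}")   # ~50%
print(f"confusion:   {confusion:.0%}")     # ~33%
print(f"purposeful:  {purposeful:.0%}")    # ~17%, reported as about 16 percent
```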

A ready remedy?

“Our results suggest that in general, people are doing the best they can to spread accurate information,” Epstein says. “But the current design of social media environments, which can prioritize engagement and user retention over accuracy, stacks the deck against them.”

Still, the scholars think, their results show that some simple remedies are available to social media platforms.

“A prescription is to occasionally put content into people’s feeds that primes the concept of accuracy,” Rand says.

“My hope is that this paper will help inspire the platforms to develop these kinds of interventions,” he adds. “Social media companies by design have been focusing people’s attention on engagement. But they don’t have to only pay attention to engagement — you can also do proactive things to refocus users’ attention on accuracy.” The team has been exploring potential applications of this idea in collaboration with researchers at Jigsaw, a Google unit, and hopes to do the same with social media companies.

Support for the research was provided, in part, by the Ethics and Governance of Artificial Intelligence Initiative of the Miami Foundation, the William and Flora Hewlett Foundation, the Omidyar Network, the John Templeton Foundation, the Canadian Institutes of Health Research, and the Social Sciences and Humanities Research Council of Canada.


Press mentions.

Prof. David Rand and Prof. Gordon Pennycook of the University of Regina in Canada found that people improved the accuracy of their social media posts when asked to rate the accuracy of the headline first, reports Faye Flam for Bloomberg. “It’s not necessarily that [users] don’t care about accuracy. But instead, it’s that the social media context just distracts them, and they forget to think about whether it’s accurate or not before they decide to share it,” says Rand.




About half of TikTok users under 30 say they use it to keep up with politics, news

The Pew-Knight Initiative supports new research on how Americans absorb civic information, form beliefs and identities, and engage in their communities.

Pew Research Center is a nonpartisan, nonadvocacy fact tank that informs the public about the issues, attitudes and trends shaping the world. Knight Foundation is a social investor committed to supporting informed and engaged communities. Learn more >

TikTok has been so popular among young Americans that presidential campaigns are using it for voter outreach. And some young adults are using TikTok to keep up with politics or get news, a March Pew Research Center survey shows.

Pew Research Center conducted this analysis to understand age differences in TikTok users’ views and experiences on the platform. The questions are drawn from a broader survey exploring the views and experiences of TikTok, X, Facebook and Instagram users. For this analysis, we surveyed 10,287 adult internet users in the United States from March 18 to 24, 2024.

Everyone who took part in the survey is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey was weighted by combining the sample of internet users with data from ATP members who do not use the internet and weighting the combined dataset to be representative of all U.S. adults by gender, race, ethnicity, partisan affiliation, education and other categories. This analysis is based on those who use TikTok. Read more about the ATP’s methodology.
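Weighting of this kind is commonly implemented with raking (iterative proportional fitting): each respondent’s weight is rescaled, margin by margin, until the weighted sample matches known population shares on every benchmark variable. The sketch below is a generic illustration with made-up targets, not Pew’s production code:

```python
import pandas as pd

def rake(df, targets, n_iter=25):
    """Iterative proportional fitting over categorical margins.

    targets maps each column to {category: population share}; the returned
    weights make every weighted margin match its target (approximately).
    """
    w = pd.Series(1.0, index=df.index)
    for _ in range(n_iter):
        for col, shares in targets.items():
            margin = w.groupby(df[col]).sum() / w.sum()   # current weighted shares
            w = w * df[col].map({c: shares[c] / margin[c] for c in shares})
    return w / w.mean()   # normalize so the average weight is 1

# Toy sample: an online panel that over-represents younger adults.
sample = pd.DataFrame({
    "age":    ["18-29"] * 6 + ["50+"] * 4,
    "gender": ["F", "M"] * 5,
})
targets = {"age": {"18-29": 0.4, "50+": 0.6},   # made-up population benchmarks
           "gender": {"F": 0.5, "M": 0.5}}

weights = rake(sample, targets)
print(weights.groupby(sample["age"]).sum() / weights.sum())   # matches 0.4 / 0.6
```

Real panels rake over many more margins (age, education, race, partisanship and so on), but the mechanics are the same.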

Here are the questions used for this analysis, along with responses, and the survey methodology.

This is a Pew Research Center analysis from the Pew-Knight Initiative, a research program funded jointly by The Pew Charitable Trusts and the John S. and James L. Knight Foundation. Find related reports online at https://www.pewresearch.org/pew-knight/ .

Our survey explored various reasons people might use TikTok and other social media platforms. Young TikTok users stand out from their older peers on several of these reasons, including:

A bar chart showing that young adults stand out in using TikTok to keep up with politics and get news.

Keeping up with politics or political issues. For 48% of TikTok users ages 18 to 29, this is a major or minor reason why they’re on the platform.

By comparison, 36% of those ages 30 to 49 and even smaller shares of older users say the same:

  • 22% of those 50 to 64
  • 24% of those 65 and older

Getting news. We also asked TikTok users if getting news in general is a reason they use the platform – regardless of whether that’s political news or another topic entirely. About half of those under 30 say getting news is a major or minor reason they use TikTok.

That compares with 41% of TikTok users ages 30 to 49 who say getting news is a reason they’re on it. The shares of older users saying so are even smaller:

  • 29% of those 50 to 64
  • 23% of those 65 and older

TikTok has increasingly become a destination for news, bucking trends on other social media sites. A 2023 Center study showed more Americans – and especially young Americans – regularly get news on the platform compared with a few years ago. 

For more on what motivates TikTok use – like entertainment, which is a major draw for most TikTok users – read our deep dive into why and how people use the platform .

What people see and share on TikTok

A bar chart showing that TikTok users under 30 are more likely than those 50 and older to say they see at least some political content there.

Seeing political content

Nearly half of all TikTok users (45%) say they see at least some content about politics or political issues on the platform. That includes 6% of users who say political content is all or most of what they see.

Half of users under 30 say they see at least some political content on TikTok. That’s higher than the 39% of those 50 and older who say the same. However, the shares of 18- to 29-year-old users and 30- to 49-year-old users who say this are statistically similar.

Sharing political content

As on other platforms we’ve studied , far smaller shares post about politics than see political content on TikTok. About one-in-ten users ages 18 to 29 (7%), 30 to 49 (8%) and 50 to 64 (8%) post at least some political content there. That compares with just 2% of TikTok users 65 and older.

But many users – 63% – post nothing at all.

Only 36% of TikTok users say they ever post or share on the platform. Users ages 30 to 49 are most likely to say this, at 44%. That compares with 37% of those 18 to 29, 26% of those 50 to 64 and 15% of those 65 and older.

Seeing news-related content

A bar chart showing that TikTok users under 30 stand out in seeing breaking news, opinions about current events.

Regardless of whether TikTok users say getting news is a reason they’re there, most see humor and opinions about news on the platform:

  • 84% say they ever see funny posts that reference current events on TikTok
  • 80% ever see people expressing opinions about current events
  • 57% ever see news articles posted, reposted, linked or screenshotted
  • 55% ever see information about a breaking news event as it’s happening

Users under 50 are more likely than older users to say they ever see each of these.

And TikTok users under 30 stand out further in seeing opinions about current events and information about breaking news. They are more likely than any other age group to ever see these two kinds of content.

TikTok and democracy

Debates around TikTok’s impact on the political environment in the United States – including for young voters specifically – are squarely in the national spotlight. We wanted to understand: Do TikTok users think the platform impacts democracy, and how?


Overall, TikTok users are roughly twice as likely to think it’s mostly good for American democracy as they are to think it’s mostly bad (33% vs. 17%). But the largest share of users (49%) think it has no impact on democracy.

TikTok users under 30 are more positive, however – 45% of this group say it’s mostly good for democracy. That compares with:

  • 30% of users ages 30 to 49
  • 23% of users 50 to 64
  • 15% of users 65 and older

Even among users under 30, 39% say the platform has no impact on democracy. That share increases to 66% among users 65 and older.

The March survey found only minor differences by political party among TikTok users in views of its impact on democracy. Still, as lawmakers attempt to ban TikTok over national security concerns , other Center research has found that views of banning the platform have been sharply divided by political party among the general public.

To learn more about how Americans view and experience TikTok, X (formerly Twitter), Facebook and Instagram, read these companion reports:

How Americans Navigate Politics on TikTok, X, Facebook and Instagram

How Americans Get News on TikTok, X, Facebook and Instagram

These Pew Research Center reports and this analysis are from the Pew-Knight Initiative, a research program funded jointly by The Pew Charitable Trusts and the John S. and James L. Knight Foundation.

Note: Here are the questions used for this analysis, along with responses, and the survey methodology.


Colleen McClain is a senior researcher focusing on internet and technology research at Pew Research Center.


Publication, Collaboration, Citation Performance, and Triple Helix Innovation Gene of Artificial Intelligence Research in the Communication Field: Comparing Asia to the Rest of the World

  • Published: 28 August 2024


  • Yu Peng Zhu (ORCID: orcid.org/0000-0003-0544-3911)
  • Han Woo Park (ORCID: orcid.org/0000-0002-1378-2473)


Artificial intelligence (AI) in the communication field has become increasingly popular in recent years. This study collected data from 482 documents and cited references in the Web of Science database. It explores the knowledge structure related to AI in communication, combined with the triple helix innovation gene model. The analysis employed collaborative network analysis, two-mode network analysis, citation analysis, and quadratic assignment procedure-based correlation analysis. The results show that the most popular hotspots are human–machine communication, automatically generated publications, social media-mediated fake news, and some other AI-based applied research. Academic collaborations can be facilitated by transnational disciplinary leaders. China emerged as the core academic country with the greatest growth potential in Asia, while the core non-Asian country is the United States. In addition, the trend in collaboration among scholars in Asia is better than in non-Asian countries. However, concerning the characteristics of collaborating institutions, the triple-helix collaboration among universities, government bodies, and industries remains insufficient. Particularly, the collaboration between industry and government necessitates further development.



Yoon, S. W., Chung, S. W. (2020). The EU’s public diplomacy in Asia and the world through social media: Sentiment and semantic network analyses of official Facebook pages of European External Action Service and EU Delegation to the Republic of Korea. Journal of Contemporary Eastern Asia , 19(2), 234–263. https://doi.org/10.17477/JCEA.2020.19.2.234

Zhu, Y. P., Park, H. W. (2022). Profiling the most highly cited scholars from China: Who they are. To what extent they are interdisciplinary. Profesional de la información , 31(4), e310408. https://doi.org/10.3145/epi.2022.jul.08

Download references

Author information

Authors and Affiliations

School of Journalism and Communication, Chongqing Key Laboratory for Intelligent Communication & City’s International Promotion, Chongqing University, Chongqing, China

Yu Peng Zhu

Department of Media and Communication, Interdisciplinary Graduate Programs of Digital Convergence Business and East Asian Cultural Studies, Yeungnam University, Gyeongsangbuk-Do, Gyeongsan-Si, South Korea

Han Woo Park


Corresponding authors

Correspondence to Yu Peng Zhu or Han Woo Park.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Han Woo Park is the corresponding author, and Yu Peng Zhu is the co-corresponding author.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Zhu, Y.P., Park, H.W. Publication, Collaboration, Citation Performance, and Triple Helix Innovation Gene of Artificial Intelligence Research in the Communication Field: Comparing Asia to the Rest of the World. J Knowl Econ (2024). https://doi.org/10.1007/s13132-024-02280-6


Received: 26 December 2023

Accepted: 28 July 2024

Published: 28 August 2024

DOI: https://doi.org/10.1007/s13132-024-02280-6


Keywords

  • Artificial intelligence
  • Communication
  • Collaboration
  • Network analysis
  • Triple helix
