
New method developed to detect fake vaccines in supply chains

Research published this week, led by University of Oxford researchers, describes a first-of-its-kind method capable of distinguishing authentic from falsified vaccines by applying machine learning to mass spectral data. The method proved effective in differentiating between a range of authentic and ‘faked’ vaccines previously found to have entered supply chains.

‘This latest research will bring the world community one step closer to being able to tell apart falsified, ineffective vaccines from the real thing, making us all safer. It has been a tremendous collaborative effort, with everyone having this same important goal in mind.’

Co-author Professor Nicole Zitzmann (Department of Biochemistry, University of Oxford)

The results of the study provide a proof-of-concept method that could be scaled to address the urgent need for more effective global vaccine supply chain screening. A key benefit is that it uses clinical mass spectrometers already distributed globally for medical diagnostics.

The global population is increasingly reliant on vaccines to maintain population health, with billions of doses used annually in immunisation programs worldwide. The vast majority of vaccines are of excellent quality. However, a rise in substandard and falsified vaccines threatens global public health. Besides failing to protect against the diseases they are supposed to prevent, these products can have serious health consequences, including death, and reduce confidence in vaccines. Unfortunately, there is currently no global infrastructure in place to monitor supply chains using screening methods developed to identify ineffective vaccines.

In this new study, researchers developed and validated a method that is able to distinguish authentic and falsified vaccines using instruments developed for identifying bacteria in hospital microbiology laboratories. The method is based on matrix-assisted laser desorption/ionisation-mass spectrometry (MALDI-MS), a technique used to identify the components of a sample by giving the constituent molecules a charge and then separating them according to their mass-to-charge ratio. The MALDI-MS analysis is then combined with open-source machine learning. This provides a reliable multi-component model which can differentiate authentic and falsified vaccines, and is not reliant on a single marker or chemical constituent.
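The multi-component idea can be sketched in a few lines of code. The study itself pairs MALDI-MS spectra with a PLS-DA model; the toy below substitutes a much simpler nearest-centroid rule over simulated, binned intensity channels, so it is an illustration of the principle (whole-spectrum classification rather than a single marker peak), not the paper's pipeline. Every number in it, including the channel count and peak positions, is invented.

```python
# Toy sketch: call a spectrum "authentic" or "falsified" from many m/z
# intensity channels at once, not from one marker peak. This is NOT the
# study's PLS-DA model -- a nearest-centroid rule stands in for it here.
import math
import random

random.seed(0)
N_CHANNELS = 50  # binned m/z intensity channels (hypothetical)

def simulate_spectrum(kind):
    """Simulate a binned spectrum: authentic doses carry extra peaks,
    saline-style surrogates are flat noise (all values invented)."""
    base = [random.gauss(1.0, 0.1) for _ in range(N_CHANNELS)]
    if kind == "authentic":
        for peak in (5, 12, 33):  # hypothetical antigen-related channels
            base[peak] += 5.0
    return base

def centroid(spectra):
    """Mean intensity per channel across a set of spectra."""
    return [sum(col) / len(col) for col in zip(*spectra)]

def classify(spectrum, centroids):
    """Assign the class whose mean spectrum is nearest (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda k: dist(spectrum, centroids[k]))

# "Train" on labelled reference spectra...
train = {k: [simulate_spectrum(k) for _ in range(20)]
         for k in ("authentic", "falsified")}
centroids = {k: centroid(v) for k, v in train.items()}

# ...then screen an unknown sample.
result = classify(simulate_spectrum("falsified"), centroids)
print(result)  # -> falsified
```

Because the decision uses all fifty channels, removing or spoofing any single peak does not flip the call, which is the robustness property the multi-component model is after.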

‘This innovative research provides compelling evidence that MALDI mass spectrometry techniques could be used in accessible systems for screening for vaccine falsification globally, especially in centres with hospital microbiology laboratories, enhancing public health and confidence in vaccines.’

Co-author Professor Paul Newton (Centre for Tropical Medicine and Global Health, University of Oxford)

The method successfully distinguished between a range of genuine vaccines – including for influenza (flu), hepatitis B virus, and meningococcal disease – and solutions commonly used in falsified vaccines, such as sodium chloride.

Professor James McCullagh, study co-leader and Professor of Biological Chemistry in the Department of Chemistry, University of Oxford, said: ‘We are thrilled to see the method’s effectiveness and its potential for deployment into real-world vaccine authenticity screening. This is an important milestone for the Vaccine Identity Evaluation (VIE) consortium which focusses on the development and evaluation of innovative devices for detecting falsified and substandard vaccines, supported by multiple research partners including the World Health Organization (WHO), medicine regulatory authorities and vaccine manufacturers.’

The study ‘Using matrix assisted laser desorption ionisation mass spectrometry combined with machine learning for vaccine authenticity screening’ has been published in npj Vaccines.

This research was funded by two anonymous philanthropic families, the Oak Foundation, the Wellcome Trust and the World Health Organization (WHO).

The study was led by a team at the Mass Spectrometry Research Facility in the Department of Chemistry and the Department of Biochemistry, University of Oxford and was part of a research consortium involving teams from the Rutherford Appleton Laboratory of STFC at Harwell and the Departments of Chemistry, Biochemistry and Nuffield Department of Medicine Centre for Global Health Research at the University of Oxford.


Scientists develop new method to detect fake vaccines



  • A team from the University of Oxford has developed a first-of-its-kind mass spectrometry method for vaccine authenticity screening using machine learning.
  • The method repurposes clinical mass spectrometers already present in hospitals worldwide, making the approach feasible for global supply chain monitoring.
  • The method offers a promising response to the rise in substandard and counterfeit vaccines threatening public health.

Graphical abstract showing vaccines and falsified vaccine surrogates being analysed with a MALDI-MS machine to get a PLS-DA Scores Plot, allowing researchers to distinguish between authentic vaccines and falsified vaccines.



Professor James McCullagh, study co-leader and Professor of Biological Chemistry in the Department of Chemistry, said:

"This method is the culmination of a number of years of collaborative research that has brought together scientists from multiple departments and divisions across the university with outside partners, including Prof. Pavel Matousek at the Rutherford Appleton Laboratory at Harwell. Rebecca Clarke (former Part II student) and John Walsby-Tickle both played key roles in the method’s development in the Department of Chemistry."

"We are thrilled to see the method’s effectiveness and its potential for deployment into real-world vaccine authenticity screening. This is an important milestone for The Vaccine Identity Evaluation (VIE) consortium which focusses on the development and evaluation of innovative devices for detecting falsified and substandard vaccines, supported by multiple research partners including the World Health Organisation (WHO), medicine regulatory authorities and vaccine manufacturers."





Co-author Professor Nicole Zitzmann (Professor of Virology in the Department of Biochemistry) said:

This latest research will bring the world community one step closer to being able to tell apart falsified, ineffective vaccines from the real thing, making us all safer. It has been a tremendous collaborative effort, with everyone having this same important goal in mind. Bevin Gangadharan, Tehmina Bharucha, Laura Gomez and Yohan Arman from the Department of Biochemistry all played key roles in co-developing the method.


Read more in npj Vaccines: https://www.nature.com/articles/s41541-024-00946-5

Inset and banner images: John Walsby-Tickle (Mass Spectrometry Services Manager, Department of Chemistry Mass Spectrometry Research Facility) and Isabelle Legge (Research Assistant in the McCullagh Group, Department of Chemistry) using the MALDI-MS system for vaccine authenticity testing.


Science News

How to detect, resist and counter the flood of fake news.

Although most people are concerned about misinformation, few know how to spot a deceitful post

As a wave of misinformation threatens to drown us, researchers are coming up with ways for us to get our footing. (Illustration: Brian Stauffer)


By Alexandra Witze

May 6, 2021 at 6:00 am

From lies about election fraud to QAnon conspiracy theories and anti-vaccine falsehoods, misinformation is racing through our democracy. And it is dangerous.

Awash in bad information, people have swallowed hydroxychloroquine hoping the drug will protect them against COVID-19 — even with no evidence that it helps (SN Online: 8/2/20). Others refuse to wear masks, contrary to the best public health advice available. In January, protestors disrupted a mass vaccination site in Los Angeles, blocking life-saving shots for hundreds of people. “COVID has opened everyone’s eyes to the dangers of health misinformation,” says cognitive scientist Briony Swire-Thompson of Northeastern University in Boston.

The pandemic has made clear that bad information can kill. And scientists are struggling to stem the tide of misinformation that threatens to drown society. The sheer volume of fake news, flooding across social media with little fact-checking to dam it, is taking an enormous toll on trust in basic institutions. In a December poll of 1,115 U.S. adults, by NPR and the research firm Ipsos, 83 percent said they were concerned about the spread of false information. Yet fewer than half were able to identify as false a QAnon conspiracy theory about pedophilic Satan worshippers trying to control politics and the media.

Scientists have been learning more about why and how people fall for bad information — and what we can do about it. Certain characteristics of social media posts help misinformation spread, new findings show. Other research suggests bad claims can be counteracted by giving accurate information to consumers at just the right time, or by subtly but effectively nudging people to pay attention to the accuracy of what they’re looking at. Such techniques involve small behavior changes that could add up to a significant bulwark against the onslaught of fake news.


Misinformation is tough to fight, in part because it spreads for all sorts of reasons. Sometimes it’s bad actors churning out fake-news content in a quest for internet clicks and advertising revenue, as with “troll farms” in Macedonia that generated hoax political stories during the 2016 U.S. presidential election. Other times, the recipients of misinformation are driving its spread.

Some people unwittingly share misinformation on social media and elsewhere simply because they find it surprising or interesting. Another factor is the method through which the misinformation is presented — whether through text, audio or video. Of these, video can be seen as the most credible, according to research by S. Shyam Sundar, an expert on the psychology of messaging at Penn State. He and colleagues decided to study this after a series of murders in India that began in 2017, as people circulated via WhatsApp a video purported to show child abductions. (It was, in reality, a distorted clip of a public awareness campaign video from Pakistan.)

Sundar recently showed 180 participants in India audio, text and video versions of three fake-news stories as WhatsApp messages, with research funding from WhatsApp. The video stories were assessed as the most credible and most likely to be shared by respondents with lower levels of knowledge on the topic of the story. “Seeing is believing,” Sundar says.

Video sells

WhatsApp users looked at three versions of a story that falsely claimed that rice was being made out of plastic — in (left to right) text, audio or a video showing a man feeding plastic sheets into a machine.


Participants tended to rate the video version as more credible than the audio or text versions. The effect diminished for users who were highly involved with the topic of the false story, suggesting that video is a particularly compelling medium for those who may not be knowledgeable on the topic at hand.

Perceived credibility of a message based on format and issue involvement


The findings, in press at the Journal of Computer-Mediated Communication, suggest several ways to fight fake news, he says. For instance, social media companies could prioritize responding to user complaints when the misinformation being spread includes video, above those that are text-only. And media-literacy efforts might focus on educating people that videos can be highly deceptive. “People should know they are more gullible to misinformation when they see something in video form,” Sundar says. That’s especially important with the rise of deepfake technologies that feature false but visually convincing videos (SN: 9/15/18, p. 12).

One of the most insidious problems with fake news is how easily it lodges itself in our brains and how hard it is to dislodge once it’s there. We’re constantly deluged with information, and our minds use cognitive shortcuts to figure out what to retain and what to let go, says Sara Yeo, a science-communication expert at the University of Utah in Salt Lake City. “Sometimes that information is aligned with the values that we hold, which makes us more likely to accept it,” she says. That means people continually accept information that aligns with what they already believe, further insulating them in self-reinforcing bubbles.


Compounding the problem is that people can process the facts of a message properly while misunderstanding its gist because of the influence of their emotions and values, psychologist Valerie Reyna of Cornell University wrote in 2020 in Proceedings of the National Academy of Sciences.

Thanks to new insights like these, psychologists and cognitive scientists are developing tools that people can use to battle misinformation before it arrives — or that prompt them to think more deeply about the information they are consuming.

One such approach is to “prebunk” beforehand rather than debunk after the fact. In 2017, Sander van der Linden, a social psychologist at the University of Cambridge, and colleagues found that presenting information about a petition that denied the reality of climate science, after true information about climate change, canceled any benefit of receiving the true information. Simply mentioning the misinformation undermined people’s understanding of what was true.

That got van der Linden thinking: Would giving people other relevant information before giving them the misinformation be helpful? In the climate change example, this meant telling people ahead of time that “Charles Darwin” and “members of the Spice Girls” were among the false signatories to the petition. This advance knowledge helped people resist the bad information they were then exposed to and retain the message of the scientific consensus on climate change.

Here’s a very 2021 metaphor: Think of misinformation as a virus, and prebunking as a weakened dose of that virus. Prebunking becomes a vaccine that allows people to build up antibodies to bad information. To broaden this beyond climate change, and to give people tools to recognize and battle misinformation more broadly, van der Linden and colleagues came up with a game, Bad News , to test the effectiveness of prebunking (see Page 36). The results were so promising that the team developed a COVID-19 version of the game, called GO VIRAL! Early results suggest that playing it helps people better recognize pandemic-related misinformation.

Take a breath

Sometimes it doesn’t take very much of an intervention to make a difference. Sometimes it’s just a matter of getting people to stop and think for a moment about what they’re doing, says Gordon Pennycook, a social psychologist at the University of Regina in Canada.

In one 2019 study, Pennycook and David Rand, a cognitive scientist now at MIT, tested real news headlines and partisan fake headlines, such as “Pennsylvania federal court grants legal authority to REMOVE TRUMP after Russian meddling,” with nearly 3,500 participants. The researchers also tested participants’ analytical reasoning skills. People who scored higher on the analytical tests were less likely to identify fake news headlines as accurate, no matter their political affiliation. In other words, lazy thinking rather than political bias may drive people’s susceptibility to fake news, Pennycook and Rand reported in Cognition.

When it comes to COVID-19, however, political polarization does spill over into people’s behavior. In a working paper first posted online April 14, 2020, at PsyArXiv.org, Pennycook and colleagues describe findings that political polarization, especially in the United States with its contrasting media ecosystems, can overwhelm people’s reasoning skills when it comes to taking protective actions, such as wearing masks.

Inattention plays a major role in the spread of misinformation, Pennycook argues. Fortunately, that suggests some simple ways to intervene, to “nudge” the concept of accuracy into people’s minds, helping them resist misinformation. “It’s basically critical thinking training, but in a very light form,” he says. “We have to stop shutting off our brains so much.”

Push in the right direction

Nudging Twitter users to think about the accuracy of a nonpolitical headline resulted in users temporarily sharing more information from more trustworthy media outlets (blue dots toward the right) and less from less trustworthy outlets (blue dots toward the left). Dot size is proportional to the number of tweets that link to that website prior to the accuracy nudge.

Effect of an accuracy nudge on news sharing


With nearly 5,400 people who previously tweeted links to articles from two sites known for posting misinformation — Breitbart and InfoWars — Pennycook, Rand and colleagues used innocuous-sounding Twitter accounts to send direct messages with a seemingly random question about the accuracy of a nonpolitical news headline. Then the scientists tracked how often the people shared links from sites of high-quality information versus those known for low-quality information, as rated by professional fact-checkers, for the next 24 hours.

On average, people shared higher-quality information after the intervention than before. It’s a simple nudge with simple results, Pennycook acknowledges — but the work, reported online March 17 in Nature, suggests that very basic reminders about accuracy can have a subtle but noticeable effect.

For debunking, timing can be everything. Tagging headlines as “true” or “false” after presenting them helped people remember whether the information was accurate a week later, compared with tagging before or at the moment the information was presented, Nadia Brashier, a cognitive psychologist at Harvard University, reported with Pennycook, Rand and political scientist Adam Berinsky of MIT in February in Proceedings of the National Academy of Sciences.


How to debunk

Debunking bad information is challenging, especially if you’re fighting with a cranky family member on Facebook. Here are some tips from misinformation researchers:

  • Arm yourself with media-literacy skills, at sites such as the News Literacy Project (newslit.org), to better understand how to spot hoax videos and stories.
  • Don’t stigmatize people for holding inaccurate beliefs. Show empathy and respect, or you’re more likely to alienate your audience than successfully share accurate information.
  • Translate complicated but true ideas into simple messages that are easy to grasp. Videos, graphics and other visual aids can help.
  • When possible, once you provide a factual alternative to the misinformation, explain the underlying fallacies (such as cherry-picking information, a common tactic of climate change deniers).
  • Mobilize as soon as possible when you see misinformation being shared on social media. If you see something, say something.

Prebunking still has value, the researchers note. But providing a quick and simple fact-check after someone reads a headline can be helpful, particularly on social media platforms where people often mindlessly scroll through posts.

Social media companies have taken some steps to fight misinformation spread on their platforms, with mixed results. Twitter’s crowdsourced fact-checking program, Birdwatch, launched as a beta test in January, has already run into trouble with the poor quality of user flagging. And Facebook has struggled to effectively combat misinformation about COVID-19 vaccines on its platform.

Misinformation researchers have recently called for social media companies to share more of their data so that scientists can better track the spread of online misinformation. Such research can be done without violating users’ privacy, for instance by aggregating information or asking users to actively consent to research studies.

Much of the work to date on misinformation’s spread has used public data from Twitter because it is easily searchable, but platforms such as Facebook have many more users and much more data. Some social media companies do collaborate with outside researchers to study the dynamics of fake news, but much more remains to be done to inoculate the public against false information.

“Ultimately,” van der Linden says, “we’re trying to answer the question: What percentage of the population needs to be vaccinated in order to have herd immunity against misinformation?”




Reexamining Misinformation: How Unflagged, Factual Content Drives Vaccine Hesitancy

Research from the Computational Social Science Lab finds that factual, vaccine-skeptical content on Facebook has a greater overall effect than “fake news,” discouraging millions from the COVID-19 shot.

By Ian Scheffler, Penn Engineering 


What threatens public health more, a deliberately false Facebook post about tracking microchips in the COVID-19 vaccine that is flagged as misinformation, or an unflagged, factual article about the rare case of a young, healthy person who died after receiving the vaccine?

According to Duncan J. Watts, Stevens University Professor in Computer and Information Science at Penn Engineering and Director of the Computational Social Science (CSS) Lab, along with David G. Rand, Erwin H. Schell Professor at MIT Sloan School of Management, and Jennifer Allen, 2024 MIT Sloan School of Management Ph.D. graduate and incoming CSS postdoctoral fellow, the latter is much more damaging. “The misinformation flagged by fact-checkers was 46 times less impactful than the unflagged content that nonetheless encouraged vaccine skepticism,” they conclude in a new paper in Science.

Historically, research on “fake news” has focused almost exclusively on deliberately false or misleading content, on the theory that such content is much more likely to shape human behavior. But, as Allen points out, “When you actually look at the stories people encounter in their day-to-day information diets, fake news is a minuscule percentage. What people are seeing is either no news at all or mainstream media.”


“Since the 2016 U.S. presidential election, many thousands of papers have been published about the dangers of false information propagating on social media,” says Watts. “But what this literature has almost universally overlooked is the related danger of information that is merely biased. That’s what we look at here in the context of COVID vaccines.” 

In the study, Watts, one of the paper’s senior authors, and Allen, the paper’s first author, used thousands of survey results and AI to estimate the impact of more than 13,000 individual Facebook posts. “Our methodology allows us to estimate the effect of each piece of content on Facebook,” says Allen. “What makes our paper really unique is that it allows us to break open Facebook and actually understand what types of content are driving misinformed-ness.” 

One of the paper’s key findings is that “fake news,” or articles flagged as misinformation by professional fact-checkers, has a much smaller overall effect on vaccine hesitancy than unflagged stories that the researchers describe as “vaccine-skeptical,” many of which focus on statistical anomalies that suggest that COVID-19 vaccines are dangerous. 

“Obviously, people are misinformed,” says Allen, pointing to the low vaccination rates among U.S. adults, in particular for the COVID-19 booster vaccine, “but it doesn’t seem like fake news is doing it.” One of the most viewed URLs on Facebook during the time period covered by the study, at the height of the pandemic, for instance, was a true story in a reputable newspaper about a doctor who happened to die shortly after receiving the COVID-19 vaccine. 

That story racked up tens of millions of views on the platform, multiples of the combined number of views of all COVID-19-related URLs that Facebook flagged as misinformation during the time period covered by the study. “Vaccine-skeptical content that’s not being flagged by Facebook is potentially lowering users’ intentions to get vaccinated by 2.3 percentage points,” Allen says. “A back-of-the-envelope estimate suggests that translates to approximately 3 million people who might have gotten vaccinated had they not seen this content.”
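Allen's back-of-the-envelope conversion is a single multiplication. In the sketch below, the size of the exposed audience is an assumed round number chosen only to show the shape of the calculation; it is not a figure reported by the study.

```python
# Back-of-the-envelope: intent drop (in percentage points) x exposed audience
# = people who might otherwise have been vaccinated.
# The audience size is an illustrative assumption, not a study estimate.
pp_drop = 0.023           # 2.3 percentage-point drop in vaccination intent
exposed_users = 130e6     # assumed U.S. Facebook users who saw the content
affected = pp_drop * exposed_users
print(f"roughly {affected / 1e6:.1f} million people")  # -> roughly 3.0 million people
```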

Despite the fact that, in the survey results, fake news identified by fact-checkers proved more persuasive on an individual basis, so many more users were exposed to the factual, vaccine-skeptical articles with clickbait-style headlines that the overall impact of the latter outstripped that of the former. 

“Even though misinformation, when people see it, can be more persuasive than factual content in the context of vaccine hesitancy,” says Allen, “it is seen so little that these accurate, ‘vaccine-skeptical’ stories dwarf the impact of outright false claims.” 
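The trade-off Allen describes is also just arithmetic: total impact is per-view persuasiveness times number of views. The figures below are invented to make the shape of the argument visible (strongly persuasive but rarely seen, versus mildly persuasive but massively seen); they are not the study's estimates.

```python
# Invented illustration of reach vs. persuasion: flagged misinformation is
# 10x more persuasive per view here, but the unflagged vaccine-skeptical
# content gets 100x the views, so its total impact is 10x larger.
content = {
    "flagged_misinfo":   {"per_view_effect": -1e-3, "views": 50e6},
    "vaccine_skeptical": {"per_view_effect": -1e-4, "views": 5_000e6},
}

def total_impact(c):
    """Total shift in vaccination intent = per-view effect x views."""
    return c["per_view_effect"] * c["views"]

for name, c in content.items():
    print(name, total_impact(c))
```

With these made-up inputs the unflagged content ends up ten times more damaging overall, even though each individual view of it moves opinion less.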

As the researchers point out, being able to quantify the impact of misleading but factual stories points to a fundamental tension between free expression and combating misinformation, as Facebook would be unlikely to shut down mainstream publications. “Deciding how to weigh these competing values is an extremely challenging normative question with no straightforward solution,” the authors write in the paper. 

Allen points to content moderation that involves the user community as one possible means to address this challenge. “Crowdsourcing fact-checking and moderation works surprisingly well,” she says. “That’s a potential, more democratic solution.” 

With the 2024 U.S. Presidential election on the horizon, Allen emphasizes the need for Americans to seriously consider these tradeoffs. “The most popular story on Facebook in the lead-up to the 2020 election was about military ballots found in the trash that were mostly votes for Donald Trump,” she notes. “That was a real story, but the headline did not mention that there were nine votes total, seven of them for Trump.” 

This study was conducted at the University of Pennsylvania’s School of Engineering and Applied Science, the Annenberg School for Communication and the Wharton School, along with the Massachusetts Institute of Technology Sloan School of Management, and was supported by funding from Alain Rossmann.

This article originally appeared on the Penn Engineering Blog.


Fact-Checking in an Era of Fake News

A Template for a Lesson on Lateral Reading of Social Media Posts

Connected Science Learning May-June 2021 (Volume 3, Issue 3)

By Troy E. Hall, Jay Well, and Elizabeth Emery


As all science teachers know, the rate of scientific advancement is accelerating, far outpacing the ability of teachers or students to keep up. Nevertheless, scientific understanding is crucial to address contemporary social and environmental challenges, from climate change to food supply to vaccines. Citizens must be able to interpret scientific claims presented in the media and online to make informed personal and political decisions. Informed decision-making requires scientific literacy, the ability to decipher fact from fiction, and a willingness to engage in open-minded, productive discussions around contentious issues. Scientific literacy does not come naturally to most people; these skills need to be taught, practiced, and honed (Hodgin and Kahne 2018). Such scientific literacy skills are recognized specifically in the science and engineering practice of Obtaining, Evaluating, and Communicating Information described in the Next Generation Science Standards (NGSS; NGSS Lead States 2013). However, these skills can be difficult to integrate into lessons because, while these practices have been identified as important, it is not well understood how to teach them in the digital age.

This article describes a biology lesson we developed that incorporates a relatively new approach to teaching middle and high school students how to fact-check online information. This lesson emerged out of a partnership between school science teachers, an academic unit at Oregon State University (OSU), and OSU’s Science and Math Investigative Learning Experiences (SMILE) program. SMILE is a longstanding precollege program that increases underrepresented students’ access to and success in STEM (science, technology, engineering, and math) education and careers. For more than 30 years, the program has provided a range of educational activities, predominantly in rural areas, to help broaden underrepresented student groups’ participation in STEM and provide professional development resources to support teachers in meeting their students’ needs. Our lesson focuses on social media posts about genetic engineering (GE) of plants, but this promising approach to digital literacy can be adopted for other scientific topics and internet information sources.

Challenges in Curating Online Information Sources

Internet sites have expanded as sources of all types of information, including scientific research. Today, digital sources outcompete traditional news and information sources among the general public, including U.S. adolescents, who rely heavily on the internet for information (McGrew et al. 2019) and school assignments (Hinostroza et al. 2018). However, many internet sites have no gatekeepers to monitor the integrity of their content, and sophisticated producers can mask false or misleading claims as factual analysis (Wineburg and McGrew 2019). Many studies have shown that students lack the ability to adequately search for and assess online information (Hinostroza et al. 2018), which can leave them unprepared to think critically about issues raised in their classes and in society.

In the case of controversial socio-scientific issues like GE, social media serves as an outlet for both ardent proponents and objectors, and it can be difficult for even trained fact-checkers to locate and evaluate the quality of sources. In such cases, students can encounter biased or partial information, depending on how they search online. It is well-known that people gravitate toward information that resonates with their preexisting attitudes, a phenomenon known as confirmation bias (Sinatra and Lombardi 2020). Indeed, some research has shown that students judge claims that align with their prior views to be true, regardless of the actual validity of the claims (Kahne and Bowyer 2017). Adolescents trust favored search engines, often clicking on the first links that appear, unaware that such sites may contain sponsored content and unable to separate such content from objective facts (Breakstone et al. 2018; Hargittai et al. 2010; Walsh-Moorman, Pytash, and Ausperk 2020). Unless prompted to be more critical, students may rely on intuitive assessments of online information to judge content validity or the credibility of the source (Tandoc et al. 2018). Unfortunately, students may be unwilling to make an effort to critically evaluate content if they judge the task as having low stakes or being about something that does not personally interest them (Hinostroza et al. 2018).

Although partisan internet sites often portray an issue or technology like GE in black-or-white terms, it is rare for any significant issue of public concern to be such a simple matter. For example, GE has been used to create vaccines and insulin, which few would argue are bad. On the other hand, using GE technology to improve agricultural crops involves many complex environmental and ethical considerations; for example, GE can be used in applications to increase the global food supply but may promote herbicide resistance in weeds or allow modified genes to flow to non-GE crops. Given myriad controversial considerations, it is important to assess GE agricultural products individually and resist the tendency to engage students in lessons tasking them to evaluate GE agriculture as a single, monolithic entity.

Teaching Students to Become Fact-Checkers

In the face of such multifaceted issues, educators strive to shape students to become citizens and consumers who can carefully consider different dimensions of an issue, locate scientifically credible information, and critically evaluate both sources and content (Hodgin and Kahne 2018). Science teachers must spend adequate time cultivating these lifelong learning skills that today's students will need in their adult lives. This is as much about learning how to learn (i.e., obtain, evaluate, and communicate information) as it is about mastering scientific content.

Considerable scientific study has identified the types of cues that signal the quality of an information source, such as the author’s credentials (Stadtler et al. 2016). However, the source is only one cue to the quality of information provided. In addition, one should evaluate the recency of information, look at the URL, evaluate the language used, and investigate the sponsorship of the site (Breakstone et al. 2018). Such items are often included in “checklist” approaches used to teach students how to evaluate online information. However, scholars have questioned the efficacy of checklists because internet sites promoting misinformation are becoming so prevalent and convincing that they pass these checklist tests (Fielding 2019; Sinatra and Lombardi 2020). Moreover, checklists do not teach students broader digital literacy skills, such as how different search terms generate different results. For example, “genetically modified organism” is a common search term that links to polarized information. However, “genetically engineered agriculture,” a related but less-common search term, links to less polarized information. This type of nuance is problematic for students who may rely on common search terms. Teachers need new strategies to address NGSS practices and give students a strong ability to evaluate the information they find when searching.

One promising alternative to assessing digital credibility with checklists is “lateral reading” (Wineburg and McGrew 2019; Walsh-Moorman et al. 2020). This technique, pioneered by the Stanford History Education Group (SHEG), is modeled after professional fact-checkers’ source evaluation strategies. Lateral reading involves validating unfamiliar sites by looking outside the site itself—using the power of web searches to cross-reference information in the unfamiliar site until its trustworthiness and credibility can be established. SHEG’s materials and evaluation have been developed for university students with a focus on social issues, as opposed to science, so there is an opportunity to refine them for different audiences and materials. Recently, Walsh-Moorman et al. (2020) called for more exploration of how educators could use lateral reading in middle schools, especially in ways that might be efficiently embedded in the curriculum. Our lesson adapts the process of lateral reading for science education with a K–12 audience.

In the following section, we describe this lesson, which was developed as one part of a larger curriculum for middle and high school science students on the science and social issues associated with GE agricultural applications. Our overall goal was to develop educational materials that encourage open-minded thinking about the breadth of social issues surrounding specific GE agricultural products, rather than understanding the basic science of GE or debating whether GE as a whole is good or bad. This particular lesson focuses on fact-checking information presented in social media using lateral reading.

The Fact-Checking Lesson

Lesson development.

Knowing that GE is controversial, with multiple social, environmental, and economic dimensions, we invited middle and high school SMILE club teachers to participate in focus groups and surveys intended to understand their interest and knowledge in teaching about GE, their self-assessed capacity and comfort to deliver this type of material, their students' interests and prior attitudes about the topic, and the need for our material to connect to NGSS. Thirty-nine teachers from 26 schools completed surveys, and 16 high school teachers participated in four focus groups lasting approximately one hour each.

To our surprise, these assessments showed that teachers did not consider the controversial nature of GE to be a barrier to teaching this type of material. Additionally, their interest in teaching about the environmental, food, and economic aspects of GE was relatively high. However, based on their self-reports and a quizlike assessment, their knowledge about these associated aspects was moderate to low, suggesting that they would benefit from a tool to help students assess the validity of information they find when studying these topics.

In addition to teacher surveys and focus groups, we examined the GE curriculum teachers were referencing and using at the time. Teachers were generally teaching about GE as an applied lesson connected to an introductory genetics unit. Commonly, GE curriculum involved a class debate where students argued for or against GE technology over one or two class periods. As preparation, students developed lists of pros and cons from digital media sources. Because students are generally not skilled at fact-checking, this often resulted in their lists containing misinformation that might not be validated by teachers until the actual debate, if at all. This had the effect of confirming misinformation in students’ minds.

We identified two main concerns about these existing lessons. First, they assume GE can productively be discussed as simply good or bad. However, some of the many agricultural applications have been shown to have few adverse impacts, while others have more significant ones. Students need to understand the differences among specific GE agricultural products—including their various environmental, societal, scientific, and economic considerations—to be able to have a productive debate about whether a specific GE product should be used. Students should not be encouraged to think about all GE products as involving the same considerations.

Second, many existing lessons are outdated and do not reflect current understandings of the nuances of GE agriculture. Links embedded in curricular materials quickly become outdated, as advances in GE are so rapid. Therefore, rather than using static materials that contain old links, students need to be able to identify contemporary information resources regarding specific GE agricultural products. In addition to building digital searching skills, this will enable them to develop meaningful lists of the pros and cons associated with a specific GE product.

Thus, through our review and consultation with teachers, it became clear that students need the skills to curate information about GE agriculture and teachers need tools to formatively assess their students’ information-gathering skills. This led us to build upon SHEG’s work on lateral reading in our lesson.

Traditionally, fact-checking lessons involved what is called “vertical” reading, in which students systematically explore and critique elements within a source. Many students are familiar with and have used checklists based on vertical reading to determine whether a piece of information is credible, such as assessing the source’s authority, purpose, accuracy, currency, and relevance. However, in the digital age, verifying a source through vertical reading can be quite difficult, as some internet sites are deliberately designed to be misleading. Additionally, now that most students can access information via the internet, they should not be restricted to vertical reading. Instead, current recommendations suggest teaching to read laterally, that is, by examining other sources and triangulating findings. SHEG’s approach to this involves validating a target source using six steps: investigate the source’s author, perform keyword searches, verify information and quotations, research citations, look up organizations cited, and analyze sponsorship or ads (Walsh-Moorman et al. 2020).

Lesson overview

Our lesson promoted the basics of lateral reading in the context of topic-specific social media posts about GE agriculture. We drew from tweets, YouTube videos, and Facebook posts, because historical posts and pages are publicly available, easily accessible, and contain the types of elements students should learn to assess, such as the domain, the numbers of likes or retweets, and links to primary sources. The lesson has been designed to be completed in a single 60-minute class period but provides students lateral-reading skills they can build on in future units and lessons, regardless of the content area.

The lesson (see Supplemental Resources) begins with a classroom discussion about the term “fake news,” including where it comes from, its purpose, and its role in society. To aid this discussion, we provide resources for teachers to highlight how the process of developing news has changed over time, how advances in technology have made it easier for anyone to develop “news” or “user-created content,” and how fake news can quickly proliferate through social media networks.

This productive discussion about “fake news” encourages students to think about the importance of validating information and reflect on how they currently do this. At this point in the lesson, teachers introduce lateral reading as a strategy to validate sources of information. This introduction focuses on three key components: (1) the source, (2) the evidence, and (3) whether other reputable sources agree with the claims under scrutiny. Additionally, teachers highlight the importance of searching to confirm the information outside of the original source.

After introducing the basics of lateral reading, teachers use case studies we developed about specific GE agricultural products (Figure 1). In small groups, students review a case study, discuss the three lateral reading components, and determine whether the post is credible. If the students have access to the internet, they are encouraged to base their reasoning on information they found regarding each of the components, such as information about the author of the material. If students do not have access to an internet-connected device, they can describe the types of searches they would perform. Reconvening as a large group, teachers capture the students’ reasoning and categorize it into the key components of lateral reading. By doing this as a class, students can see the variety of ways the example source could be validated using the lateral reading strategy.

Figure 1

While lateral reading is often more accurate than vertical reading, it is less straightforward and often more cumbersome, requiring students to engage in complex reasoning and nuanced appraisals. Teachers and students need a way to assess their understanding and application of the skill of lateral reading as it develops. To assist in this, each case study has an associated rubric to evaluate competency as beginning, emerging, or mastery (Figure 2). For each skill level, an example of source validation reasoning is provided as a quick reference for teachers and students to assess and improve their lateral reading skills. These rubrics enable students to get quick feedback and provide opportunities to practice skills in an authentic way before searching for information on their own.

Figure 2

Once students have been introduced to lateral reading and the rubrics as a class, they work on their own in small groups to complete similar evaluations of new topic-specific case studies from a variety of social media sources. Students assess each source’s validity and provide a short written analysis. Afterward, groups pair up and share their reasoning for each case study. At this time, teachers provide students the rubrics specific to these cases, so students can determine how they can improve their lateral-reading skills.

To debrief the lesson, groups are provided a list of discussion questions that focus on how they consume and produce information, their responsibilities in checking the validity of the information they use and share, and when they would use lateral reading. During this debrief students reflect on their personal goals for validating sources and their skills around doing so.

Assessment of the lesson

Using SMILE’s statewide network of teachers, we provided professional development about this lesson to 17 middle school and 11 high school teachers, who then piloted it in their afterschool STEM clubs. This provided teachers a low-risk environment to experiment with the lesson and provide authentic feedback via teacher logs, ongoing professional development sessions, and personal conversations.  

In a follow-up teacher workshop five months later, teachers reported that the lesson was well-received by their students. When paired with additional lessons about GE agriculture that required students to collect online information, teachers said that the lateral reading exercises led to more productive classroom discussions. This lesson was the most highly rated among the seven we provided in the GE agriculture unit; teachers reported that they planned to use it with other discussion-based science lessons in their classrooms. Further, teachers described how the lesson highlighted their students’ struggle to assess the accuracy of online information.

Despite the overall positive reaction to the lesson, some aspects of lateral reading were difficult for both students and teachers. Many of the teachers reported that they normally use vertical reading checklists to teach digital literacy skills—the same checklists that SHEG reports as problematic. Teachers were apprehensive about shifting away from them because the vertical reading checklists provide a concrete, straightforward approach to analyze a source. They tend to be easier for novice students to follow than the less-defined, more-nuanced skill of lateral reading. Teachers reported that lateral reading required a higher degree of critical reasoning and, if not scaffolded properly, some students became confused, frustrated, and gave up on the process.

Students had the most success with tweets that were clearly true or false; in these cases, they were able to establish the validity of the source and support their determination with one or two quick internet searches. However, it was more difficult for students to assess the validity of other social media posts containing more nuanced misinformation that was harder to check. Additionally, reading laterally sometimes took students to scientific journal articles or other dense sources that offered contradictory claims. To validate the source and provide accurate reasoning, students needed considerably more time for reading or searching for other references. When that happened, students were less successful and often gave up on the process. Thus, it was clear from the feedback that this lesson is not a panacea: students need to practice the skills of lateral reading to be able to use them effectively and efficiently.

Fake news and scientific misinformation are rampant on the internet. Science education must therefore expand from teaching primarily about scientific content to teaching how to obtain, evaluate, and use scientific findings. Existing approaches—such as vertical reading or lessons based around materials curated at a single point in time—are outdated and inadequate. Students do most of their information gathering and communicating in online environments using a wide variety of sources. The lateral reading approach we described builds on recent recommendations and may be more suitable for addressing NGSS recommendations in the dynamic digital landscape.

Using this lesson framework allows teachers to build fact-checking skills among their students that can be used in future exercises and provides teachers with a way to formatively assess students’ skills. This assessment can be valuable when conducted prior to students using information from the internet in a debate exercise and allows for a more productive, accurate debate.

Our partnership with teachers was critical to the development and refinement of the lesson. It also revealed some unforeseen challenges. First, some of our assumptions were incorrect, such as our expectation that teachers would be reluctant to address a socially controversial topic. Other assumptions were more accurate; for example, teachers felt unprepared as content experts in this domain. The experienced STEM teachers provided feedback that was used to refine lessons over time.

While we see great benefit in teaching lateral reading, our experience suggests that it should be carefully planned, as it requires skills that many students have not yet developed. Thus, it is important to scaffold for students' needs. The initial lateral reading sources and assessments need to be content-specific, straightforward, and clear to help students understand the process and begin to build their skills. However, most socio-scientific issues are not straightforward, and lateral reading of online posts can prove challenging. This reveals a tension between the need to simplify a skill to teach to novices and the need to develop higher level, critical-thinking skills. Similar to how the NGSS recommends developing science and engineering practices among students over time, lateral reading is a skill that needs to be continually practiced for students to gain mastery.

To scaffold our lesson for science students, we recommend taking a long-term approach to skill progression over time. We suggest that teachers build students’ critical-thinking skills over an entire year through multiple applied lessons using lateral reading in which students curate information from a variety of internet sources. Initially, checklists could be used to introduce students to analyzing online sources. These early examples should be selected to illustrate checklists’ limitations and transition to using the lateral reading technique. We also suggest that teachers carefully select initial “case studies” that can be readily evaluated through lateral reading to help students self-assess their skill development. As students build this skill over the course of a school year, they can begin independently using lateral reading to critically evaluate online sources that they find on their own.

The lesson we developed can easily be applied to online information sources about many topics. Often, platforms such as Facebook, YouTube, or TikTok point to websites where additional information about the source or topic can be accessed. The goal of our lesson is for students to determine whether these are quality sources. Only after making that determination should they consume the information provided.

Research has shown that media literacy education can be effective (Hodgin and Kahne 2018), specifically in improving students’ abilities to judge the accuracy of online posts (Kahne and Bowyer 2017). In this article we have described the development of a lesson and approach, based on contemporary recommendations, for increasing media literacy among youth by teaching the critical-thinking skill of lateral reading.

Acknowledgments

We thank the National Science Foundation Plant Genome Research Program (IOS # 1546900) for support of this project. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

Troy E. Hall ([email protected]) is Professor and Head of Oregon State University’s Forest Ecosystems and Society Department, Corvallis, Oregon. Jay Well is Associate Director of Precollege Programs and Science and Math Investigative Learning Experiences (SMILE) at Oregon State University, Corvallis, Oregon. Elizabeth Emery is Environmental Campaign Manager at the Association of Northwest Steelheaders, Portland, Oregon.

Citation: Hall, T.E., J. Well, and E. Emery. 2021. Fact-checking in an era of fake news: A template for a lesson on lateral reading of social media posts. Connected Science Learning 3 (3). https://www.nsta.org/connected-science-learning/connected-science-learning-may-june-2021/fact-checking-era-fake-news

Breakstone, J., S. McGrew, M. Smith, T. Ortega, and S. Wineburg. 2018. Teaching students to navigate the online landscape. Social Education 82 (4): 219–221.

Fielding, J.A. 2019. Rethinking CRAAP: Getting students thinking like fact-checkers in evaluating web sources. College and Research Libraries News 80 (11): 620–622.

Hargittai, E., L. Fullerton, E. Menchen-Trevino, and K.Y. Thomas. 2010. Trust online: Young adults' evaluation of web content. International Journal of Communication 4: 27.

Hinostroza, J.E., A. Ibieta, C. Labbé, and M.T. Soto. 2018. Browsing the internet to solve information problems: A study of students’ search actions and behaviours using a ‘think aloud’ protocol. Education and Information Technologies 23 (5): 1933–1953.

Hodgin, E., and J. Kahne. 2018. Misinformation in the information age: What teachers can do to support students. Social Education 82 (4): 208–212.

Kahne, J., and B. Bowyer. 2017. Educating for democracy in a partisan age: Confronting the challenges of motivated reasoning and misinformation. American Educational Research Journal 54: 3–34.

McGrew, S., J. Breakstone, T. Ortega, M. Smith, and S. Wineburg. 2018. Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory and Research in Social Education 46 (2): 165–193.

McGrew, S., M. Smith, J. Breakstone, T. Ortega, and S. Wineburg. 2019. Improving university students’ web savvy: An intervention study. British Journal of Educational Psychology 89 (3): 485–500.

NGSS Lead States. 2013. Next Generation Science Standards: For States, By States. Washington, DC: The National Academies Press. Retrieved from http://www.nextgenscience.org/

Sinatra, G.M., and D. Lombardi. 2020. Evaluating sources of scientific evidence and claims in the post-truth era may require reappraising plausibility judgments. Educational Psychologist (online) 1–12.

Stadtler, M., L. Scharrer, M. Macedo-Rouet, J.F. Rouet, and R. Bromme. 2016. Improving vocational students’ consideration of source information when deciding about science controversies. Reading and Writing 29 (4): 705–729.

Tandoc Jr., E.C., R. Ling, O. Westlund, A. Duffy, D. Goh, and L. Zheng Wei. 2018. Audiences’ acts of authentication in the age of fake news: A conceptual framework. New Media and Society 20 (8): 2745–2763.

Walsh-Moorman, E.A., K.E. Pytash, and M. Ausperk. 2020. Naming the moves: Using lateral reading to support students’ evaluation of digital sources. Middle School Journal 51 (5): 29–34.

Wineburg, S., and S. McGrew. 2019. Lateral reading and the nature of expertise: Reading less and learning more when evaluating digital information. Teachers College Record 121 (11): 1–39.

Defusing Fake News: New Stevens Research Points the Way

Computer keys spelling the words fake news and facts

To address the alarming rise in misinformation, new strategies and AI for truthfulness emerge from Stevens research

Social media is adrift in a daily sea of misinformation about health, elections, politics and war

Russian-government media and social media outlets have long saturated the airwaves with false claims, most recently about Ukrainian terrorism, ethnic cleansing and military aggression. U.S. infectious disease institute director Anthony Fauci laments the spread of COVID-19 vaccine misinformation with "no basis," while musicians including Neil Young recently removed entire song catalogues from the streaming service Spotify when the platform would not remove a podcast spreading health misinformation.

But new research from Stevens Institute of Technology faculty, students and alumni — working with MIT, Penn and others to study Congress, analyze social media and develop fake news-spotting artificial intelligence — is giving new hope in the fight for facts.

Their work is pointing the way to novel technologies and strategies that can successfully defuse false information.

Repeating false claims can help disprove them

Smarter strategies when confronting false claims can make a real difference. That's the conclusion of a research team of communications, marketing and data science experts in the Harvard Kennedy School Misinformation Review.

Professor Jingyi Sun

Stevens business assistant professor Jingyi Sun and colleagues at institutions including the University of Pennsylvania, University of Southern California, Michigan State University and the University of Florida recently analyzed thousands of Facebook posts, from nearly 2,000 public accounts specifically focused on COVID vaccine information, published between March 2020 and March 2021.

Roughly half of the posts studied included false information about COVID vaccines, while the other half were chiefly efforts to fact-check, dispute or debunk false vaccine claims. The posts received millions of total engagements in the Facebook community.

There was a significant quantity of false vaccine information shared, discussed and debated, the team found — and the groups publishing the most misinformation had several traits in common, including being very well organized.

"The accounts with the largest number of connections, and that were connected with the most diverse contacts, were fake news accounts, Trump-supporting groups, and anti-vaccine groups," wrote the authors.

The team then examined the specific discussions, threads, interactions and reactions to identify strategies that seemed to make a difference in viewers' perceptions of and engagement with health misinformation.

Interestingly, when fact-checkers weighed in to discussions to dispute or debate false vaccine information, repeating that false information during the process of disputing it appeared to open readers' minds more effectively.

That stands in contrast to conventional wisdom that false claims should not be repeated when debunking them.

"Fact checkers’ posts that repeated the misinformation were significantly more likely to receive comments than the posts about misinformation," wrote the study authors. "This finding offers some evidence that fact-checking can be more effective in triggering engagement when it includes the original misinformation."

The absence of any reference to an actual false claim being discussed, on the other hand, produced negative emotions in audiences reacting to fact-checking posts.

"Fact-checking without repeating the original misinformation are most likely to trigger sad reactions," the authors wrote.

Fact-checks including repetition of false claims are therefore probably a more effective messaging strategy, the group concludes.

"The benefits [ of repeating a false claim while disputing it ] may outweigh the costs," they wrote.

Leveraging AI to spot false vaccine information

Another Stevens team is hard at work designing an experimental artificial intelligence-powered application that appears to detect false COVID-19 information dispersed via social media with a very high degree of accuracy.

Professor KP Subbalakshmi

In early tests, the system has been nearly 90% successful at separating COVID-19 vaccine fact from fiction on social media.

"We urgently need new tools to help people find information they can trust," explains electrical and computer engineering professor K.P. "Suba" Subbalakshmi, an AI expert in the Stevens Institute for Artificial Intelligence (SIAI).

To create one such experimental tool, Subbalakshmi and graduate students Mingxuan Chen and Xingqiao Chu first analyzed more than 2,500 public news stories about COVID-19 vaccines published over a period of 15 months during the initial stages of the pandemic, scoring each for credibility and truthfulness.

The team cross-indexed and analyzed nearly 25,000 social media posts discussing those same news stories, developing a so-called "stance detection" algorithm to quickly determine how each post supported or refuted news that was already known to be either truthful or deceptive.

"Using stance detection gives us a much richer perspective, and helps us detect fake news much more effectively," says Subbalakshmi.

Once the AI engine is trained, it is able to judge whether a hitherto unseen tweet referencing a news article is fake or real.

"It’s possible to take any written sentence and turn it into a data point that represents the author’s use of language,” explains Subbalakshmi. "Our algorithm examines those data points to decide if an article is more or less likely to be fake news."

Bombastic, extreme or emotional language often correlated with false claims, the team found. But the AI also discovered that time of publication, article length, or the number of authors of a given article can be used to help determine truthfulness.
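Turning an article into such a "data point" might look roughly like the sketch below. The lexicon, feature names and weights are hypothetical, not those of the Stevens model; the point is only how signals like emotive language, length, publication time and author count become numeric features for a classifier:

```python
import re
from datetime import datetime

# Hypothetical mini-lexicon of bombastic/emotive words (illustrative only).
EMOTIVE = {"shocking", "outrageous", "unbelievable", "disaster", "miracle"}

def article_features(text: str, published: datetime, n_authors: int) -> dict:
    """Represent an article as a numeric feature vector for a classifier."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "length": len(words),                             # article length
        "emotive_ratio": sum(w in EMOTIVE for w in words) / n,
        "exclamations": text.count("!"),                  # bombastic punctuation
        "hour_published": published.hour,                 # time of publication
        "n_authors": n_authors,                           # byline count
    }

feats = article_features("Shocking! An unbelievable disaster unfolds",
                         datetime(2021, 3, 1, 2), 1)
print(feats["emotive_ratio"])  # 3 of 5 tokens are emotive -> 0.6
```

A trained model would then weigh these features against labelled examples to estimate how likely the article is to be fake.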

The team will continue its work, says Subbalakshmi, integrating video and image analysis into the algorithms being refined in an effort to increase accuracy further.

"Each time we take a step forward, bad actors are able to learn from our methods and build something even more sophisticated," she cautions. "It’s a constant battle."

Slowing the spread of fake news

Stevens alumnus Mohsen Mosleh Ph.D. '17 has also investigated the question of how to combat misinformation shared via social media.

Stevens alumnus Mohsen Mosleh

Mosleh, a researcher at MIT's Sloan School of Management and business professor at the University of Exeter Business School, recently co-authored an intriguing study in the prestigious journal Nature, adding more credibility to the idea that thinking about the concept of accuracy can help deter the sharing of likely falsehoods.

"False COVID vaccine information on social media can affect vaccine confidence and be a threat to many people's lives," notes Mosleh. "Social media platforms should work with researchers to help immunize against such dangerous content."

With colleagues at MIT and the University of Regina, Mosleh conducted a large field experiment on approximately 5,000 Twitter users who had previously shared low-quality content — in particular, "fake news" and other content from lower-quality, hyper-partisan websites.

The team sent the Twitter users direct messages asking them to rate the accuracy of a single non-political headline, in order to remind them of the concept of accuracy. The researchers then collected the timelines of those users both before and after receiving the single accuracy-nudge message.

The researchers found that, even though very few replied to the message, simply reminding social media users of the concept of accuracy seemed to make them more discerning in subsequent sharing decisions. The users went on to share proportionally fewer links from lower-quality, hyper-partisan news websites — and proportionally more links to higher-quality, mainstream news websites (as rated by professional fact-checkers).

"These studies suggest that when deciding what to share on social media, people are often distracted from considering the accuracy of the content," concluded the team in Nature . "Therefore, shifting attention to the concept of accuracy can cause people to improve the quality of the news that they share."

'Follow-the-leader politics' also shape misinformation flow

As the COVID-19 pandemic has transformed American life, it has also revealed how ideological divides and partisan politics can influence public information and misinformation beyond social media — even in official government communications.

Professor Lindsey Cormack

That's the conclusion of Stevens political science professor Lindsey Cormack and recent graduate Kirsten Meidlinger M.S. '21, who conducted a data analysis of more than 10,000 congressional email communications to constituents between January and July 2020 — nearly 80% of which mentioned the pandemic in some fashion.

Before performing their analysis, Cormack and Meidlinger first constructed a dataset tabulating total COVID-19 deaths by congressional district during the same time period. Democrats and Republicans, they found, sent roughly the same numbers of COVID communications, and politics did not seem to have been a factor initially in the frequency of those communications.

Rather, members adhered to historic tendencies.

"More communicative members seemed to be more so in the face of crisis, as well," explains Cormack. "We found that legislators from both parties were quick to talk about COVID-19 with constituents, and that on-the-ground realities, not partisanship, drove much of the variation in communication volume."

However, partisanship did influence certain COVID-19 communications, and for an apparent reason.

The researchers discovered Republicans were much more likely to use derogatory ethnic terminology to refer to COVID-19 in official communications and also more likely to promote the use of an unproven and potentially harmful medication, hydroxychloroquine, following the lead of then-President Donald Trump in each case.

"This was evidence," says Cormack, "of what we call 'follow-the-leader-politics.' In the case of hydroxychloroquine, this was in spite of the fact that the FDA, NIH and WHO all did not find any evidence of its efficacy — and even found that detrimental effects outweighed its utility.

"When legislators are following a leader who is promoting something that can potentially kill people, that is a problem."

The research was reported in the journal Congress & the Presidency in September 2021.

Tone as important as truth to counter vaccine fake news

False assertions about Covid-19 vaccines have had a deadly impact – they are one reason why some people delayed being inoculated until it was too late. Some still refuse to be vaccinated.

More than two years after the start of the pandemic, false rumours continue to circulate that the vaccines do not work, cause illness and death, have not been properly tested and even contain microchips or toxic metals.

Now a study raises hopes of deflecting such falsehoods in future by changing the tone of official health messaging and building people’s trust.

In many countries, public confidence in government, media, the pharmaceutical industry and health experts was already on the wane before the pandemic. And in some cases, it deteriorated further during the rollout of Covid vaccines.

This was partly because some national campaigns said the jabs would protect people from falling ill.

Friends over facts

‘There was a lot of overpromising around the vaccine without really knowing what would happen,’ said Prof Dimitra Dimitrakopoulou, research scientist and Marie Curie Global Fellow at the Massachusetts Institute of Technology and the University of Zurich.


‘Then people started getting sick, even though they were vaccinated. That created a lack of trust in the government issuing these policies, and in the scientific community.’

Prof Dimitrakopoulou studied public perceptions of Covid vaccines and obstacles to acceptance of reliable information as part of a project called FAKEOLOGY.

She found that, when people lose faith in institutional sources, they end up relying only on themselves, close friends and family.

‘They trust their instincts, they trust what resonates with them,’ Prof Dimitrakopoulou said. That means they will search the internet, social media and other sources until they find information that reinforces the beliefs they already hold.

‘We have lived with fake news and misinformation long enough to understand that it cannot be debunked with facts,’ she said. ‘People just raise these emotional blocks.’

For example, a story about a mother whose child fell sick after getting a Covid vaccination would likely be more influential than a message containing scientific facts.

Building trust

Prof Dimitrakopoulou surveyed 3,200 parents of children under 11 years old in the United States, and conducted focus groups with 54 of them, to discuss their views about Covid vaccines for kids.

Many parents felt confused by conflicting information about the shots and had a lot of questions about their effectiveness.

She gave the parents a selection of messages to assess. They were put off by the ones that were largely factual, rigid and prescriptive – the tone of many public health campaigns.

They were more persuaded by messages that addressed their concerns about the vaccines with empathy and compassion while acknowledging that they face a difficult decision.

‘We need to be ready to answer any questions they may have and be ready to have a conversation - without expecting the conversation to end with someone getting vaccinated,’ said Prof Dimitrakopoulou.

Those exchanges will ultimately help bolster public faith in health bodies and government institutions. ‘Covid is a great opportunity for us to start building this trust,’ she said.

While a lengthy process, building these bridges could enlighten people’s perceptions for the rest of their lives, she said.

Fake news filter

It is also important for journalists, researchers and the general public to be able to spot and filter out fake news.

Researchers on a project called SocialTruth have developed a tool to flag fake news content on the internet and social media. The software, called a Digital Companion, can check the reliability of a piece of information. It analyses the text, images, source and author and, within two minutes, produces a credibility score – a rating of between one and five stars.


‘This is a computer-generated score that can give a red-flag warning if the content is very similar to other types of content that have been found to be false,’ said Dr Konstantinos Demestichas, researcher at the Institute of Communication and Computer Systems in Athens and coordinator of SocialTruth.

The Digital Companion uses computer algorithms that draw on a wide variety of verification services. These include non-governmental organisations, businesses and academic institutions – all with different interests, opinions and intentions.

Because of the diversity of verification-service providers, ‘We need to establish their trustworthiness by continuously evaluating their results,’ said Dr Demestichas.

To do this, the project uses blockchain to record all the scores and results produced by the verifiers. If the verifiers perform poorly, they lose their status – ensuring the Digital Companion can offer a quality assurance, he said.
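A reputation-weighted aggregation of this kind can be sketched as follows. This is a minimal illustration under assumed rules — the function names, the trust-update formula and the numbers are invented, and the real SocialTruth system records verifier performance on a blockchain rather than in a dictionary — but it shows how verifiers that drift from the consensus gradually lose standing:

```python
# Sketch of reputation-weighted score aggregation (all names/rules assumed).
def aggregate(scores: dict[str, float], trust: dict[str, float]) -> float:
    """Combine per-verifier 1-5 star scores into one credibility score,
    weighting each verifier by its current trust."""
    total_w = sum(trust[v] for v in scores)
    return sum(scores[v] * trust[v] for v in scores) / total_w

def update_trust(scores, trust, consensus, lr=0.1):
    """Nudge each verifier's trust down in proportion to how far its
    score sits from the consensus; poor performers lose status."""
    for v, s in scores.items():
        error = abs(s - consensus) / 4          # normalise distance to [0, 1]
        trust[v] = max(0.05, trust[v] * (1 - lr * error))

trust = {"ngo": 1.0, "lab": 1.0, "vendor": 1.0}
scores = {"ngo": 4.5, "lab": 4.0, "vendor": 1.0}
consensus = aggregate(scores, trust)
update_trust(scores, trust, consensus)
print(trust["vendor"] < trust["lab"])  # the outlier verifier loses trust
```

Over many items, low-trust verifiers contribute less and less to the final star rating, which is the quality-assurance property the project describes.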

Digital and human fact checkers

For now, the technology has been developed to scan health science and political content. In future, it could be developed for almost all areas.

Initially it will be for institutions that monitor fake news and disinformation, but the aim is to enable journalists and the general public to take advantage of the resource too.

The technology ‘could really make a difference in the daily use of the internet and social media,’ said Dr Demestichas.

Still, because it will never be able to spot all fake news, ‘We need journalists, fact checkers, and citizens to be well-trained to exercise their critical thinking,’ he said.

Manipulated feelings

The fight against misinformation is about more than protecting people’s health, important as that is. The well-being of democratic societies themselves is also at stake, said Dr Demestichas.

‘Fake news tries to manipulate our feelings and fears to get our “clicks” to read their content,’ he said.

Curbing it is critical ‘to defend our democracies and allow our societies to function better.’

Research in this article was funded by the EU.


COVID-19 misinformation: scientists create a ‘psychological vaccine’ to protect against fake news


Professor of Social Psychology in Society and Director, Cambridge Social Decision-Making Lab, University of Cambridge


Postdoctoral Fellow, Psychology, University of Cambridge

Disclosure statement

Sander van der Linden consults for or receives funding from Google, Facebook, WhatsApp, The UK Government and the US Government, ESRC, Nuffield and Gates foundations.

Jon Roozenbeek consults for or receives funding from Google, WhatsApp, The UK Government, the US Government and the ESRC.

University of Cambridge provides funding as a member of The Conversation UK.

View all partners

Vaccine bottles on their sides with 'fact' and 'fake' spelt in dice

Anti-vaccination groups are projected to dominate social media in the next decade if left unchallenged. To counter their viral misinformation at a time when COVID-19 vaccines are being rolled out, our research team has produced a “psychological vaccine” that helps people detect and resist the lies and hoaxes they encounter online.

The World Health Organization (WHO) expressed concern about a global misinformation “ infodemic ” in February 2020, recognising that the COVID-19 pandemic would be fought both on the ground and on social media. That’s because an effective vaccine roll out will rely on high vaccine confidence, and viral misinformation can adversely affect that confidence, leading to vaccine hesitancy.

We recently published a large study which found that higher belief in misinformation about the virus was consistently associated with a reduced willingness to get vaccinated. These findings were later reaffirmed in a subsequent study which found a significant relationship between disinformation campaigns and declining vaccination coverage.

The spread of false information about COVID-19 poses a serious risk to not only the success of vaccination campaigns but to public health in general. Our solution is to inoculate people against false information – and we’ve borrowed from the logic of real-life vaccines to inform our approach.

When looking for ways to mitigate misinformation, scientists are confronted with several challenges: first, rumours have been shown to spread faster, further and deeper in social networks than other news, making it difficult for corrections (such as fact-checks) to consistently reach the same number of people as the original misinformation.

Read more: There's no such thing as 'alternative facts'. 5 ways to spot misinformation and stop sharing it online

Second, even when someone is exposed to a fact-check, research has shown that corrections are unlikely to entirely undo the damage done by misinformation – a phenomenon known as the “continued influence effect”. In other words, approaches to combating misinformation “post-exposure” are probably insufficient.

Our work in recent years has therefore focused on how to prevent people from falling for misinformation in the first place, building on a framework from social psychology known as inoculation theory.

Man in medical face mask holds head and looks at phone in confusion

Mental resistance

Psychological inoculations are similar to medical vaccines. Exposing someone to a severely weakened dose of the “virus” (in this case misinformation) triggers the production of mental “antibodies”, thus conferring psychological resistance against future unwanted persuasion attempts.

However, rather than only “vaccinating” people against individual examples of misinformation, we instead focus on the more general ways in which people are misled – manipulation techniques such as the use of excessively emotional language, the construction of conspiracy theories, and the false testimony of fake experts.

To do so, we developed a series of online games in which players learn how misinformation works from the inside by being encouraged to create their own fake news: Bad News (about misinformation in general), Harmony Square (about political misinformation) and Go Viral!, which is specifically about misinformation around COVID-19.

Research has shown that a powerful way to induce resistance to persuasion is to make people aware of their own vulnerabilities. In our games, players are forewarned about the dangers of fake news and encouraged to actively generate their own antibodies through gradual exposure to weakened examples of misinformation in a simulated social media environment.

When we assessed the success of these projects, we found that playing a misinformation game reduces the perceived reliability of misinformation (even if participants had never seen the misinformation before); increases people’s confidence in their ability to assess the reliability of misinformation on their feed; and reduces their self-reported willingness to share misinformation with other people in their network. We also found that similar inoculation effects are conferred across cultures and languages.


We then looked at how long the games’ inoculation effect lasted and found that people remained significantly better at spotting manipulation techniques in social media content for at least one week after playing our game Bad News. This “immunity” lasted up to three months when participants were assessed at regular intervals each week. We see these regular assessments as motivational “booster shots”, topping up people’s immunity to misinformation by keeping them engaged.

Herd Immunity

Of course, our work is not without its limitations. Although these games have been played over a million times around the world and have been shared by governments, the WHO, and the United Nations, not everyone is interested in playing an online game.

But the game itself functions as just one kind of “virtual needle”. A global “vaccination programme” against misinformation will require a suite of different interventions. For example, we’re working with Google’s technology incubator “Jigsaw”, and our colleague Professor Stephan Lewandowsky, to develop and test a series of short animated inoculation videos.

Like the game, these videos forewarn and administer a micro-dose of a manipulation technique, which primes the watcher to spot similar techniques in the information they subsequently consume online. We intend to publish our study on the efficacy of video vaccines later this year.

As the pandemic continues to wreak havoc worldwide, a successful vaccine rollout is of vital interest to the global community. Preventing the spread of misinformation about the virus and the vaccines that have been developed against it is a crucial component of this effort.

Although it is not possible to inoculate everyone against misinformation on a permanent basis, if enough people have gained a sufficient level of psychological immunity to misinformation, fake news won’t have a chance to spread as far and as wide as it does currently. This will help arrest the alarming growth of anti-vaccination sentiment on the internet.



Credible Sources as a Vaccine against Fake News on COVID-19


Disinformation, misinformation and conspiracy theories about the COVID-19 pandemic have continued to present great challenges for medical professionals, social media platforms, journalists, governments and concerned citizens. The uncertainty induced by COVID-19 has stimulated fear, anxiety, hatred and hate speech. Under these circumstances, there is a need for the right narrative, supported by credible, high-quality communication and information interventions.

In addressing this menace, the UNESCO Abuja Office organized a webinar on the theme “Overriding Influence of Dis-/Misinformation on the COVID-19 Pandemic” on April 28, in collaboration with UNIC, WHO and UN Women. According to Macaulay Olushola, the UNESCO Abuja Communication and Information Officer, “The objective of the webinar is to x-ray the various information circulating on Covid-19 and build competencies to empower people with critical thinking capacity for making informed decisions.”

In his opening remarks, the UNESCO Regional Director, Ydo Yao, reiterated the role played by UNESCO in combating fake news and promoting freedom of expression and access to information under the framework of the UN Plan of Action on the Safety of Journalists and the Issue of Impunity. According to him, the webinar was organised to bring together communication experts and the international community to build resilience in the face of the misinformation and ‘fake news’ that have accompanied the Covid-19 pandemic.

In her presentation on the truth about COVID-19, Dhamari Naidoo, Technical Officer for Laboratory Strengthening at WHO, stressed that disseminating correct medical and professional messages is critical. She said that WHO has partnered with social media platforms such as Facebook and TikTok to ensure that the right information is shared via these channels.

In the same vein, Patience Ekeoba from UN Women pointed to an increased rate of gender-based violence during the pandemic. She advised that women, especially those in rural communities, should partner with local influencers and networks to access the right information.

Speaking on the main theme of the webinar, “Overriding Influence of Dis-/Misinformation on the COVID-19 Pandemic”, Edward Kargbo from BBC Media Action (Ethiopia) stated that Covid-19 is the first pandemic to occur in the digital and information age, and that fake news, rumours, conspiracies and misleading information have become dominant. “Since the outbreak of the disease, an explosion of dis/misinformation has accompanied it. It became more difficult for people to get and detect accurate messages,” he stated. He added that the ‘infodemic’ should not be treated as a ‘seasonal’ issue, and that joint efforts and support to empower media development are needed at a time like this.

In his contribution, Jide Atta, a media and gender consultant, emphasized that the negative impact of dis/misinformation should receive serious attention. Atta reminded the audience of the havoc that misinformation can cause, including harm to mental health and the stigmatization of health services. He proposed that a coordinated, culturally sensitive framework be put in place to combat fake news, and that efforts be made to reach the community level to curb its spread.

Discussing another sub-theme, “Engaging Media and Information Literate Citizens in the COVID-19 Pandemic Narrative,” Dr. Olunifesi Suraj, a senior lecturer at the University of Lagos, concentrated on the sources and content of the information itself. His presentation called for reflection on, and scrutiny of, the current Covid-19 narrative, including the channels and publishers spreading the information. In his view, knowledge must guide information so that it can be a light to guide humanity, and people should keep questioning the terms ‘truth’ and ‘reality’.

For his part, Mr. Oluwamayowa Tijani, a youth leader, reflected on the influence of power. He pointed out that the forces behind dis/misinformation include political power, financial power and soft power (fun, comedy), each deployed for various gains. He also charged the youth, who form the majority in the digital era, to make the best of their time by promoting good values. “Young people have great potential in forging solidarity to combat dis-misinformation together,” he said.

Reflecting on UN interventions during the pandemic, Mr. Oluseyi Soremekun outlined the work that has been, and is being, done to combat dis/misinformation on COVID-19. He also emphasized that people should validate information before sharing it with anyone else.

At the end of the meeting, it was agreed that only joint efforts and global solidarity would help society find the beacon showing the way forward out of uncertainty and, finally, lead everyone to shore.

The webinar had over 100 participants in attendance from across Africa.

Notes: A video recording of the webinar is available.



  • Open access
  • Published: 18 May 2024

Emotions unveiled: detecting COVID-19 fake news on social media

  • Bahareh Farhoudinia (ORCID: orcid.org/0000-0002-2294-8885),
  • Selcen Ozturkcan (ORCID: orcid.org/0000-0003-2248-0802) &
  • Nihat Kasap (ORCID: orcid.org/0000-0001-5435-6633)

Humanities and Social Sciences Communications, volume 11, Article number: 640 (2024)


  • Business and management
  • Science, technology and society

The COVID-19 pandemic has highlighted the pernicious effects of fake news, underscoring the critical need for researchers and practitioners to detect and mitigate its spread. In this paper, we examined the importance of detecting fake news and incorporated sentiment and emotional features to detect this type of news. Specifically, we compared the sentiments and emotions associated with fake and real news using a COVID-19 Twitter dataset with labeled categories. By utilizing different sentiment and emotion lexicons, we extracted sentiments categorized as positive, negative, and neutral and eight basic emotions: anticipation, anger, joy, sadness, surprise, fear, trust, and disgust. Our analysis revealed that fake news tends to elicit more negative emotions than real news. Therefore, we propose that negative emotions could serve as vital features in developing fake news detection models. To test this hypothesis, we compared the performance metrics of three machine learning models: random forest, support vector machine (SVM), and Naïve Bayes. We evaluated the models’ effectiveness with and without emotional features. Our results demonstrated that integrating emotional features into these models substantially improved the detection performance, resulting in a more robust and reliable ability to detect fake news on social media. In this paper, we propose the use of novel features and methods that enhance the field of fake news detection. Our findings underscore the crucial role of emotions in detecting fake news and provide valuable insights into how machine learning models can be trained to recognize these features.


Introduction

Social media has changed human life in multiple ways. People from all around the world are connected via social media. Seeking information, entertainment, communicatory utility, convenience utility, expressing opinions, and sharing information are some of the gratifications of social media (Whiting and Williams, 2013 ). Social media is also beneficial for political parties or companies since they can better connect with their audience through social media (Kumar et al., 2016 ). Despite all the benefits that social media adds to our lives, there are also disadvantages to its use. The emergence of fake news is one of the most important and dangerous consequences of social media (Baccarella et al., 2018 , 2020 ). Zhou et al. ( 2019 ) suggested that fake news threatens public trust, democracy, justice, freedom of expression, and the economy. In the 2016 United States (US) presidential election, fake news engagement outperformed mainstream news engagement and significantly impacted the election results (Silverman, 2016 ). In addition to political issues, fake news can cause irrecoverable damage to companies. For instance, Pepsi stock fell by 4% in 2016 when a fake story about the company’s CEO spread on social media (Berthon and Pitt, 2018 ). During the COVID-19 pandemic, fake news caused serious problems, e.g., people in Europe burned 5G towers because of a rumor claiming that these towers damaged the immune system of humans (Mourad et al., 2020 ). The World Health Organization (WHO) asserted that misinformation and propaganda propagated more rapidly than the COVID-19 pandemic, leading to psychological panic, the circulation of misleading medical advice, and an economic crisis.

This study, which is a part of a completed PhD thesis (Farhoudinia, 2023), focuses on analyzing the emotions and sentiments elicited by fake news in the context of COVID-19. The purpose of this paper is to investigate how emotions can help detect fake news. This study aims to address the following research questions: 1. How do the sentiments associated with real news and fake news differ? 2. How do the emotions elicited by fake news differ from those elicited by real news? 3. What particular emotions are most prevalent in fake news? 4. How can these emotions be used to recognize fake news on social media?

This paper is arranged into six sections: Section “Related studies” reviews the related studies; Section “Methods” explains the proposed methodology; and Section “Results and analysis” presents the implemented models, analysis, and related results in detail. Section “Discussion and limitations” discusses the research limitations, and the conclusion of the study is presented in Section “Conclusion”.

Related studies

Research in the field of fake news began following the 2016 US election (Carlson, 2020 ; Wang et al., 2019 ). Fake news has been a popular topic in multiple disciplines, such as journalism, psychology, marketing, management, health care, political science, information science, and computer science (Farhoudinia et al., 2023 ). Therefore, fake news has not been defined in a single way; according to Berthon and Pitt ( 2018 ), misinformation is the term used to describe the unintentional spread of fake news. Disinformation is the term used to describe the intentional spread of fake news to mislead people or attack an idea, a person, or a company (Allcott and Gentzkow, 2017 ). Digital assets such as images and videos could be used to spread fake news (Rajamma et al., 2019 ). Advancements in computer graphics, computer vision, and machine learning have made it feasible to create fake images or movies by merging them together (Agarwal et al., 2020 ). Additionally, deep fake videos pose a risk to public figures, businesses, and individuals in the media. Detecting deep fakes is challenging, if not impossible, for humans.

The reasons for believing and sharing fake news have attracted the attention of several researchers (e.g., Al-Rawi et al., 2019 ; Apuke and Omar, 2020 ; Talwar, Dhir et al., 2019 ). Studies have shown that people have a tendency to favor news that reinforces their existing beliefs, a cognitive phenomenon known as confirmation bias. This inclination can lead individuals to embrace misinformation that aligns with their preconceived notions (Kim and Dennis, 2019 ; Meel and Vishwakarma, 2020 ). Although earlier research focused significantly on the factors that lead people to believe and spread fake news, it is equally important to understand the cognitive mechanisms involved in this process. These cognitive mechanisms, as proposed by Kahneman ( 2011 ), center on two distinct systems of thinking. In system-one cognition, conclusions are made without deep or conscious thoughts; however, in system-two cognition, there is a deeper analysis before decisions are made. Based on Moravec et al. ( 2020 ), social media users evaluate news using ‘system-one’ cognition; therefore, they believe and share fake news without deep thinking. It is essential to delve deeper into the structural aspects of social media platforms that enable the rapid spread of fake news. Social media platforms are structured to show that posts and news are aligned with users’ ideas and beliefs, which is known as the root cause of the echo chamber effect (Cinelli et al., 2021 ). The echo chamber effect has been introduced as an aspect that causes people to believe and share fake news on social media (e.g., Allcott and Gentzkow, 2017 ; Berthon and Pitt, 2018 ; Chua and Banerjee, 2018 ; Peterson, 2019 ).

In the context of our study, we emphasize the existing body of research that specifically addresses the detection of fake news (Al-Rawi et al., 2019 ; Faustini and Covões, 2020 ; Ozbay and Alatas, 2020 ; Raza and Ding, 2022 ). Numerous studies that are closely aligned with the themes of our present investigation have delved into methodological approaches for identifying fake news (Er and Yılmaz, 2023 ; Hamed et al., 2023 ; Iwendi et al., 2022 ). Fake news detection methods are classified into three categories: (i) content-based, (ii) social context, and (iii) propagation-based methods. (i) Content-based fake news detection models are based on the content and linguistic features of the news rather than user and propagation characteristics (Zhou and Zafarani, 2019 , p. 49). (ii) Fake news detection based on social context employs user demographics such as age, gender, education, and follower–followee relationships of the fake news publishers as features to recognize fake news (Jarrahi and Safari, 2023 ). (iii) Propagation-based approaches are based on the spread of news on social media. The input of the propagation-based fake news detection model is a cascade of news, not text or user profiles. Cascade size, cascade depth, cascade breadth, and node degree are common features of detection models (Giglietto et al., 2019 ; de Regt et al., 2020 ; Vosoughi et al., 2018 ).

Machine learning methods are widely used in the literature because they enable researchers to handle and process large datasets (Ongsulee, 2017 ). The use of machine learning in fake news research has been extremely beneficial, especially in the domains of content-based, social context-based, and propagation-based fake news identification. These methods leverage the advantages of a range of characteristics, including sentiment-related, propagation, temporal, visual, linguistic, and user/account aspects. Fake news detection frequently makes use of machine learning techniques such as logistic regressions, decision trees, random forests, naïve Bayes, and support vector machine (SVM). Studies on the identification of fake news also include deep learning models, such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks, which can provide better accuracy in certain situations. Even with a small amount of training data, pretrained language models such as bidirectional encoder representations from transformers (BERT) show potential for identifying fake news (Kaliyar et al., 2021 ). Amer et al. ( 2022 ) investigated the usefulness of these models in benchmark studies covering different topics.

The role of emotions in identifying fake news within academic communities remains an area with considerable potential for additional research. Despite many theoretical and empirical studies, this topic remains inadequately investigated. Ainapure et al. ( 2023 ) analyzed the sentiments elicited by tweets in India during the COVID-19 pandemic with deep learning and lexicon-based techniques using the valence-aware dictionary and sentiment reasoner (Vader) and National Research Council (NRC) lexicons to understand the public’s concerns. Dey et al. ( 2018 ) applied several natural language processing (NLP) methods, such as sentiment analysis, to a dataset of tweets about the 2016 U.S. presidential election. They found that fake news had a strong tendency toward negative sentiment; however, their dataset was too limited (200 tweets) to provide a general understanding. Cui et al. ( 2019 ) found that sentiment analysis was the best-performing component in their fake news detection framework. Ajao et al. ( 2019 ) studied the hypothesis that a relationship exists between fake news and the sentiments elicited by such news. The authors tested hypotheses with different machine learning classifiers. The best results were obtained by sentiment-aware classifiers. Pennycook and Rand ( 2020 ) argued that reasoning and analytical thinking help uncover news credibility; therefore, individuals who engage in reasoning are less likely to believe fake news. Prior psychology research suggests that an increase in the use of reason implies a decrease in the use of emotions (Mercer, 2010 ).

In this study, we apply sentiment analysis to the more general topic of fake news detection. The focus of this study is on the tweets that were shared during the COVID-19 pandemic. Many scholars focused on the effects of media reports, providing comprehensive information and explanations about the virus. However, there is still a gap in the literature on the characteristics and spread of fake news during the COVID-19 pandemic. A comprehensive study can enhance preparedness efforts for any similar future crisis. The aim of this study is to answer the question of how emotions aid in fake news detection during the COVID-19 pandemic. Our hypothesis is that fake news carries negative emotions and is written with different emotions and sentiments than those of real news. We expect to extract more negative sentiments and emotions from fake news than from real news. Existing works on fake news detection have focused mainly on news content and social context. However, emotional information has been underutilized in previous studies (Ajao et al., 2019 ). We extract sentiments and eight basic emotions from every tweet in the COVID-19 Twitter dataset and use these features to classify fake and real news. The results indicate how emotions can be used in differentiating and detecting fake and real news.

With our methodology, we employed a multifaceted approach to analyze tweet text and discern sentiment and emotion. The steps involved were as follows: (a) Lexicons such as Vader, TextBlob, and SentiWordNet were used to identify sentiments embedded in the tweet content. (b) The NRC emotion lexicon was utilized to recognize the range of different emotions expressed in the tweets. (c) Machine learning models, including the random forest, naïve Bayes, and SVM classifiers, as well as a deep learning model, BERT, were integrated. These models were strategically applied to the data for fake news detection, both with and without considering emotions. This comprehensive approach allowed us to capture nuanced patterns and dependencies within the tweet data, contributing to a more effective and nuanced analysis of the fake news content on social media.

An open, science-based, publicly available dataset was utilized. The dataset comprises 10,700 English tweets with hashtags relevant to COVID-19, categorized with real and fake labels. Previously used by Vasist and Sebastian ( 2022 ) and Suter et al. ( 2022 ), the manually annotated dataset was compiled by Patwa et al. ( 2021 ) in September 2020 and includes tweets posted in August and September 2020. According to their classification, the dataset is balanced, with 5600 real news stories and 5100 fake news stories. The dataset used for the study was generated by sourcing fake news data from public fact-checking websites and social media outlets, with manual verification against the original documents. Web-based resources, including social media posts and fact-checking websites such as PolitiFact and Snopes, played a key role in collecting and adjudicating details on the veracity of claims related to COVID-19. For real news, tweets from official and verified sources were gathered, and each tweet was assessed by human reviewers based on its contribution of relevant information about COVID-19 (Patwa et al., 2021 ; Table 2 on p. 4 of Suter et al., 2022 , which is excerpted from Patwa et al. ( 2021 ), also provides an illustrative overview).

Preprocessing is an essential step in any data analysis, especially when dealing with textual data. Appropriate preprocessing steps can significantly enhance the performance of the models. The following preprocessing steps were applied to the dataset: removing any non-alphabetic characters, converting the text to lowercase, deleting stop words such as “a,” “the,” “is,” and “are,” which carry very little useful information, and performing lemmatization. The text data were then transformed into quantitative data using the scikit-learn ordinal encoder class.
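As an illustration, these preprocessing steps can be sketched in a few lines of Python. The stop-word set below is a small illustrative subset (a full list, such as NLTK’s, would be used in practice), and lemmatization is noted but omitted for brevity.

```python
import re

# Small illustrative stop-word set; the study would use a full list (e.g. NLTK's).
STOP_WORDS = {"a", "an", "the", "is", "are", "was", "were", "and", "or", "of", "to", "in"}

def preprocess(tweet: str) -> list[str]:
    """Keep alphabetic characters, lowercase, and drop stop words."""
    text = re.sub(r"[^A-Za-z\s]", " ", tweet)   # remove non-alphabetic characters
    tokens = text.lower().split()               # lowercase and tokenize
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("COVID-19 is spreading: 5,000 new cases reported!"))
# → ['covid', 'spreading', 'new', 'cases', 'reported']
# A real pipeline would additionally lemmatize each token (e.g. "cases" -> "case").
```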

The stages involved in this research are depicted in a high-level schematic that is shown in Fig. 1 . First, the sentiments and emotions elicited by the tweets were extracted, and then, after studying the differences between fake and real news in terms of sentiments and emotions, these characteristics were utilized to construct fake news detection models.

figure 1

The figure depicts the stages involved in this research in a high-level schematic.

Sentiment analysis

Sentiment analysis is the process of deriving the sentiment of a piece of text from its content (Vinodhini and Chandrasekaran, 2012 ). Sentiment analysis, as a subfield of natural language processing, is widely used in analyzing the reviews of a product or service and social media posts related to different topics, events, products, or companies (Wankhade et al., 2022 ). One major application of sentiment analysis is in strategic marketing. Păvăloaia et al. ( 2019 ), in a comprehensive study on two companies, Coca-Cola and PepsiCo, confirmed that the activity of these two brands on social media has an emotional impact on existing or future customers and the emotional reactions of customers on social media can influence purchasing decisions. There are two methods for sentiment analysis: lexicon-based and machine-learning methods. Lexicon-based sentiment analysis uses a collection of known sentiments that can be divided into dictionary-based lexicons or corpus-based lexicons (Pawar et al., 2015 ). These lexicons help researchers derive the sentiments generated from a text document. Numerous dictionaries, such as Vader (Hutto and Gilbert, 2014 ), SentiWordNet (Esuli and Sebastiani, 2006 ), and TextBlob (Loria, 2018 ), can be used for scholarly research.

In this research, Vader, TextBlob, and SentiWordNet are the three lexicons used to extract the sentiments generated from tweets. The Vader lexicon is an open-source lexicon attuned specifically to social media (Hutto and Gilbert, 2014 ). TextBlob is a Python library that processes text specifically designed for natural language analysis (Loria, 2018 ), and SentiWordNet is an opinion lexicon adapted from the WordNet database (Esuli and Sebastiani, 2006 ). Figure 2 shows the steps for the sentiment analysis of tweets.

figure 2

The figure illustrates the steps for the sentiment analysis of tweets.
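Lexicon-based scoring of this kind can be illustrated with a toy polarity dictionary. This is only a sketch of the principle: real lexicons such as Vader contain thousands of scored entries plus rules for negation, capitalization, and emoji.

```python
# Tiny illustrative polarity lexicon; the entries and weights are invented
# for demonstration and are not taken from Vader, TextBlob, or SentiWordNet.
POLARITY = {
    "recovered": 1.0, "safe": 1.0, "effective": 0.8, "hope": 0.7,
    "deadly": -1.0, "fake": -0.8, "panic": -0.9, "crisis": -0.7,
}

def sentiment(tokens):
    """Sum word polarities and map the total to a three-class label."""
    score = sum(POLARITY.get(t, 0.0) for t in tokens)
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"

print(sentiment(["patients", "recovered", "hope"]))  # → positive
print(sentiment(["deadly", "crisis"]))               # → negative
```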

Different methods and steps were used to choose the best lexicon. First, a random partition of the dataset was manually labeled as positive, negative, or neutral. The results of every lexicon were compared with the manually labeled sentiments, and the performance metrics for every lexicon are reported in Table 1. Second, assuming that misclassifying negative or positive tweets as neutral is not as serious as misclassifying negative tweets as positive (or vice versa), the neutral tweets were ignored, and a comparison was made on only the positive and negative tweets. The three-class and two-class classification metrics are compared in Table 1.

Third, this study’s primary goal was to identify the precise distinctions between fake and real tweets to improve the detection algorithm. We therefore assessed how well fake news was detected with each of the three sentiment lexicons, as they produced different results: a fake news detection model was trained on the dataset using the outputs of Vader, TextBlob, and SentiWordNet in turn. As previously indicated, the dataset includes labels for fake and real news, which allows supervised machine learning detection models to be applied and their performance evaluated. The random forest algorithm is a supervised machine learning method that has achieved good performance in text classification. The dataset contains many tweets with numerical data reporting the numbers of hospitalized, deceased, and recovered individuals, which carry no sentiment. During this phase, tweets containing numerical data were excluded; these constituted 20% of the total. Table 2 reports the classification power of the three lexicons on the nonnumerical data. The models were more accurate when using sentiments drawn from Vader, suggesting that the Vader lexicon better separates fake and real news. Vader was therefore selected as the superior sentiment lexicon after evaluating all three procedures. The steps for choosing the best lexicon are presented in Fig. 3 (see also Appendix A in the Supplementary Information for further details). Based on the Vader results, tweets labeled as fake contain more negative sentiment than real tweets, while real tweets contain more positive sentiment.

figure 3

The figure exhibits the steps for choosing the best lexicon.
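The comparison against manually labeled tweets amounts to computing standard classification metrics per lexicon. A minimal pure-Python version might look like the following (in practice scikit-learn’s `classification_report` would do this); the `evaluate` helper and the example labels are illustrative, not the study’s data.

```python
def evaluate(predicted, gold):
    """Accuracy plus per-class (precision, recall), as compared in Table 1."""
    assert len(predicted) == len(gold)
    accuracy = sum(p == g for p, g in zip(predicted, gold)) / len(gold)
    metrics = {}
    for cls in set(gold):
        tp = sum(p == g == cls for p, g in zip(predicted, gold))
        pred_cls = sum(p == cls for p in predicted)
        gold_cls = sum(g == cls for g in gold)
        metrics[cls] = (tp / pred_cls if pred_cls else 0.0,   # precision
                        tp / gold_cls if gold_cls else 0.0)   # recall
    return accuracy, metrics

# Illustrative gold labels vs. one lexicon's output on four tweets.
gold = ["negative", "positive", "neutral", "negative"]
lexicon_out = ["negative", "positive", "positive", "negative"]
acc, per_class = evaluate(lexicon_out, gold)
print(acc)  # → 0.75
```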

Emotion extraction

Emotions elicited in tweets were extracted using the NRC emotion lexicon. This lexicon measures emotional affect in a body of text, contains ~27,000 words, and is based on the National Research Council Canada’s affect lexicon and the natural language toolkit (NLTK) library’s WordNet synonym sets (Mohammad and Turney, 2013). The lexicon provides scores for eight emotions based on Plutchik’s model of emotion (Plutchik, 1980): joy, trust, fear, surprise, sadness, anticipation, anger, and disgust. These emotions can be classified into four opposing pairs: joy–sadness, anger–fear, trust–disgust, and anticipation–surprise. The NRC lexicon assigns each text the emotion with the highest score. Emotion scores from the NRC lexicon were extracted for every tweet in the dataset and used as features for the fake news detection model. The features of the model include the text of the tweet, the sentiment, and the eight emotions. The model was trained with 80% of the data and tested with 20%. Fake news had a greater prevalence of negative emotions, such as fear, disgust, and anger, than did real news, while real news had a greater prevalence of positive emotions, such as anticipation, joy, and surprise.
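The word-association mechanism can be sketched with a tiny invented slice of an NRC-style lexicon; the real NRC lexicon maps ~27,000 words to Plutchik’s eight emotions, whereas the mapping below is purely illustrative.

```python
# Tiny invented slice of an NRC-style word-emotion association lexicon.
EMOTION_LEXICON = {
    "virus":   ["fear", "disgust"],
    "death":   ["fear", "sadness"],
    "vaccine": ["trust", "anticipation"],
    "cure":    ["joy", "trust", "anticipation"],
    "hoax":    ["anger", "disgust"],
}

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def emotion_scores(tokens):
    """Count lexicon hits per emotion; the dominant emotion labels the tweet."""
    scores = {e: 0 for e in EMOTIONS}
    for t in tokens:
        for e in EMOTION_LEXICON.get(t, []):
            scores[e] += 1
    return scores

scores = emotion_scores(["virus", "death", "deadly"])
dominant = max(scores, key=scores.get)
print(dominant)  # → fear
```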

Fake news detection

In the present study, the dataset was divided into a training set (80%) and a test set (20%). The dataset was analyzed using three machine learning models: random forest, SVM, and naïve Bayes. Appendices A and B provide information on how the results were obtained and how they correlate with the research corpus.

Random forest : An ensemble learning approach that fits several decision trees to random data subsets. This classifier is popular for text classification, high-dimensional data, and feature importance since it overfits less than decision trees. The Random Forest classifier in scikit-learn was used in this study (Breiman, 2001 ).

Naïve Bayes : This model uses Bayes’ theorem to solve classification problems, such as sorting documents into groups and blocking spam. This approach works well with text data and is easy to use, strong, and good for problems with more than one label. The Naïve Bayes classifier from scikit-learn was used in this study (Zhang, 2004 ).

Support vector machines (SVMs) : Supervised learning methods that are used to find outliers, classify data, and perform regression. These methods work well with data involving many dimensions. SVMs find the best hyperplanes for dividing classes. In this study, the SVM model from scikit-learn was used (Cortes and Vapnik, 1995 ).
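A minimal sketch of this modeling setup follows, using synthetic feature vectors in place of the real sentiment and emotion features. The paper does not state which scikit-learn naïve Bayes variant was used; `GaussianNB` is assumed here, and the data-generating parameters are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the real features: 9 columns (sentiment + 8 emotion
# scores) per tweet; fake tweets (label 1) skew toward negative values.
n = 400
X_real = rng.normal(loc=0.3, scale=1.0, size=(n, 9))
X_fake = rng.normal(loc=-0.3, scale=1.0, size=(n, 9))
X = np.vstack([X_real, X_fake])
y = np.array([0] * n + [1] * n)

# 80%/20% train/test split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "random forest": RandomForestClassifier(random_state=42),
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),  # the paper's exact NB variant is not stated
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.2f}")
```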

Deep learning models can learn how to automatically describe data in a hierarchical way, making them useful for tasks such as identifying fake news (Salakhutdinov et al., 2012 ). A language model named bidirectional encoder representations from transformers (BERT) was used in this study to help discover fake news more easily.

BERT : A cutting-edge NLP model that uses deep neural networks and bidirectional learning and can distinguish patterns on both sides of a word in a sentence, which helps it understand the context and meaning of text. BERT has been pretrained with large datasets and can be fine-tuned for specific applications to capture unique data patterns and contexts (Devlin et al., 2018 ).

In summary, we applied machine learning models (random forest, naïve Bayes, and SVM) and a deep learning model (BERT) to analyze text data for fake news detection. The impact of emotion features on detecting fake news was compared between models that include these features and models that do not include these features. We found that adding emotion scores as features to machine learning and deep learning models for fake news detection can improve the model’s accuracy. A more detailed analysis of the results is given in the section “Results and analysis”.

Results and analysis

In the sentiment analysis of tweets from the dataset, positive and negative sentiment tweets were categorized into two classes: fake and real. Figure 4 shows a visual representation of the differences, while the percentages of the included categories are presented in Table 3. In fake news, negative sentiment is more prevalent than positive sentiment (39.31% vs. 31.15%), confirming our initial hypothesis that fake news disseminators use extreme negative emotions to attract readers’ attention.

figure 4

The figure displays a visual representation of the differences in sentiment between the two classes.

Fake news disseminators aim to attack or satirize an idea, a person, or a brand using negative words and emotions. Baumeister et al. (2001) suggested that negative events are stronger than positive events and have a more significant impact on individuals. Accordingly, individuals sharing fake news tend to express more negativity for greater impact. Specific topics of the COVID-19 pandemic, such as the source of the virus, the cure for the illness, government strategies against the spread of the virus, and the spread of vaccines, are controversial, and these contested topics have become targets of fake news featuring negative sentiments (Frenkel et al., 2020; Pennycook et al., 2020). In real news, the pattern is reversed, and positive sentiments are much more frequent than negative sentiments (46.45% vs. 35.20%). Considering that real news is spread by reliable news channels, we can infer that such channels frame news with positive sentiment so as not to harm their audience psychologically.

The eight scores for the eight emotions of anger, anticipation, disgust, fear, joy, sadness, surprise, and trust were extracted from the NRC emotion lexicon for every tweet. Each text was assigned the emotion with the highest score. Table 4 and Fig. 5 include more detailed information about the emotion distribution.

figure 5

The figure depicts more detailed information about the emotion distribution.

The NRC lexicon provides scores for each emotion. Therefore, the intensities of emotions can also be compared. Table 5 shows the average score of each emotion for the two classes, fake and real news.

A two-sample t-test was performed using the pingouin (PyPI) statistical package in Python (Vallat, 2018) to determine whether the differences between the two groups were significant (Tables 6 and 7).
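The same kind of comparison can be reproduced with SciPy’s `ttest_ind` in place of pingouin, shown here on synthetic fear scores whose means and spreads are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic per-tweet fear scores: fake news is simulated with a slightly
# higher mean fear intensity than real news.
fear_fake = rng.normal(loc=0.30, scale=0.10, size=500)
fear_real = rng.normal(loc=0.25, scale=0.10, size=500)

# Two-sample t-test; equal_var=False gives Welch's test, which does not
# assume equal variances in the two groups.
t_stat, p_value = stats.ttest_ind(fear_fake, fear_real, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```

A p-value below the chosen significance level (e.g. 0.05) would indicate, as in Table 6, that the emotion differs significantly between the two classes.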

As shown in Table 6 , the P values indicate that the differences in fear, anger, trust, surprise, disgust, and anticipation were significant; however, for sadness and joy, the difference between the two groups of fake and real news was not significant. Considering the statistics provided in Tables 4 , 5 , and Fig. 5 , the following conclusions can be drawn:

Anger, disgust, and fear are more commonly elicited in fake news than in real news.

Anticipation and surprise are more commonly elicited in real news than in fake news.

Fear is the most commonly elicited emotion in both fake and real news.

Trust is the second most commonly elicited emotion in fake and real news.

The most significant differences were observed for trust, fear, and anticipation (5.92%, 5.33%, and 3.05%, respectively). The differences between fake and real news in terms of joy and sadness were not significant.

In terms of intensity, based on Table 5 ,

Fear is the most strongly elicited emotion in both fake and real news; however, fake news has a higher fear intensity score than real news.

Trust is the second most commonly elicited emotion in both categories, real and fake, but is stronger in real news.

Positive emotions, such as anticipation, surprise, and trust, are more strongly elicited in real news than in fake news.

Anger, disgust, and fear are among the stronger emotions elicited by fake news. Joy and sadness are elicited in both classes almost equally.

During the COVID-19 pandemic, fake news disseminators seized the opportunity to create fearful messages aligned with their objectives. The existence of fear in real news is also not surprising because of the extraordinary circumstances of the pandemic. The most crucial point of the analysis is the significant presence of negative emotions elicited by fake news. This observation confirms our hypothesis that fake news elicits extremely negative emotions. Positive emotions such as anticipation, joy, and surprise are elicited more often in real news than in fake news, which also aligns with our hypothesis. The largest differences in elicited emotions are as follows: trust, fear, and anticipation.

We used nine features for every tweet in the dataset: the sentiment label and the eight emotion scores. These features were used in supervised machine learning fake news detection models. A schematic of the models is given in Fig. 6. The dataset was divided into training and test sets with an 80%–20% split. The scikit-learn random forest, SVM, and naïve Bayes models with default hyperparameters were trained with the emotion features to detect fake news in the nonnumerical data, and their predictive power was then compared with that of the same models trained without these features. The performance metrics of the models (accuracy, precision, recall, and F1-score) are given in Table 7.

figure 6

The figure exhibits a schematic explanation of the model.
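The with-and-without-emotions comparison can be sketched on synthetic data as follows; the feature values and effect sizes are invented for illustration and are not the study’s.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)

n = 500
# Synthetic data: one weakly informative "text" feature and eight emotion
# scores whose means differ between real (0) and fake (1) tweets.
y = np.array([0] * n + [1] * n)
text_feature = rng.normal(loc=0.1 * y, scale=1.0).reshape(-1, 1)
emotions = rng.normal(loc=0.4 * y[:, None], scale=1.0, size=(2 * n, 8))

def eval_accuracy(X):
    """Fit a random forest on an 80/20 split and return test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)
    clf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

acc_without = eval_accuracy(text_feature)
acc_with = eval_accuracy(np.hstack([text_feature, emotions]))
print(f"without emotions: {acc_without:.2f}, with emotions: {acc_with:.2f}")
```

On data generated this way, adding the informative emotion columns raises test accuracy, mirroring the pattern the study reports for its real features.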

When joy and sadness were removed from the models, the accuracy decreased. Thus, the models performed better when all the features were included (see Table C.1. Feature correlation scores in Supplementary Information). The results confirmed that elicited emotions can help identify fake and real news. Adding emotion features to the detection models significantly increased the performance metrics. Figure 7 presents the importance of the emotion features used in the random forest model.

figure 7

The figure illustrates the importance of the emotion features used in the Random Forest model.

In the random forest classifier, the predominant attributes were anticipation, trust, and fear. The difference in the emotion distribution between the fake and real news classes was also greatest for anticipation, trust, and fear. It can therefore be claimed that the fear, trust, and anticipation emotions have good power to differentiate fake from real news.
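Feature importances of the kind plotted in Fig. 7 come directly from the fitted random forest. A sketch on synthetic data, in which only fear, trust, and anticipation are made informative by construction (all values here are invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

# Synthetic emotion scores in which only "anticipation", "fear", and "trust"
# actually differ between real (0) and fake (1) tweets.
n = 1000
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 8))
for col in (1, 3, 7):                  # anticipation, fear, trust
    X[:, col] += 0.8 * (2 * y - 1)     # shift by class

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Gini importances, as plotted in Fig. 7; the informative columns dominate.
for name, imp in sorted(zip(EMOTIONS, clf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```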

BERT was the other model employed for fake news detection using emotion features. The BERT pipeline includes several preprocessing stages: the text input is segmented with the BERT tokenizer, and sequences are truncated and padded so that their length does not exceed 128 tokens, reduced from the usual 512 because of constraints on computing resources. Optimization used the AdamW optimizer with a fixed learning rate of 0.00001. Five-fold cross-validation was applied to determine the best number of training cycles, which established that three epochs were optimal, so the training phase ran for three epochs. The model was implemented in Python and executed on Google Colab, then evaluated on the test set after training. Table 8 shows the performance of the BERT model with and without emotions as features.
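The truncation-and-padding step can be illustrated in a few lines of plain Python. In a real pipeline this is delegated to the BERT tokenizer; the pad id and token ids below are stand-ins.

```python
MAX_LEN = 128  # reduced from BERT's usual 512 to save compute
PAD_ID = 0     # stand-in pad token id

def pad_or_truncate(token_ids, max_len=MAX_LEN, pad_id=PAD_ID):
    """Clip sequences longer than max_len; right-pad shorter ones.

    Returns the fixed-length id list and an attention mask with 1 for
    real tokens and 0 for padding.
    """
    ids = token_ids[:max_len]
    mask = [1] * len(ids) + [0] * (max_len - len(ids))
    ids = ids + [pad_id] * (max_len - len(ids))
    return ids, mask

# A short 10-token sequence is padded out to 128 positions.
ids, mask = pad_or_truncate(list(range(10)))
```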

The results indicate that adding emotion features had a positive impact on the performance of the random forest, SVM, and BERT models; however, the naïve Bayes model achieved better performance without adding emotion features.

Discussion and limitations

This research makes a substantial contribution to the domain of fake news detection. The goal was to explore the range of sentiments and emotional responses linked to both real and fake news, in pursuit of the research aims and the posed questions. By identifying elicited emotions as key indicators of fake news, this study adds valuable insights to the existing body of related scholarly work.

Our research revealed that fake news triggers a higher incidence of negative emotions than real news. Sentiment analysis indicated that creators of fake news on social media platforms tend to invoke more negative sentiments than positive ones, whereas real news generally elicits more positive sentiments than negative ones. We extracted eight emotions (anger, anticipation, disgust, fear, joy, sadness, surprise, and trust) from each tweet analyzed. Negative and potent emotions such as fear, disgust, and anger were elicited more frequently by fake news, whereas real news was more likely to arouse lighter, positive emotions such as anticipation, joy, and surprise. The difference in emotional response extended beyond the range of emotions to their intensity, with negative feelings like fear, anger, and disgust being more pronounced in fake news. We suggest that incorporating emotional analysis into automated fake news detection algorithms could improve the effectiveness of machine learning and deep learning models such as those developed in this study.

Due to negativity bias (Baumeister et al., 2001), bad news, emotions, and feedback tend to have an outsized influence relative to positive experiences; humans are more likely to assign greater weight to negative events than to positive ones (Lewicka et al., 1992). Our findings indicate that similar effects appear in social media user behavior, such as sharing and retweeting. Furthermore, adding emotional features to the fake news detection models was found to improve their performance, providing an opportunity to investigate their moderating effects on fake news dissemination in future research.

The majority of the current research on identifying fake news involves analyzing the social environment and news content (Amer et al., 2022 ; Jarrahi and Safari, 2023 ; Raza and Ding, 2022 ). Despite its possible importance, the investigation of emotional data has not received sufficient attention in the past (Ajao et al., 2019 ). Although sentiment in fake news has been studied in the literature, earlier studies mostly neglected a detailed examination of certain emotions. Dey et al. ( 2018 ) contributed to this field by revealing a general tendency toward negativity in fake news. Their results support our research and offer evidence for the persistent predominance of negative emotions elicited by fake news. Dey et al. ( 2018 ) also found that trustworthy tweets, on the other hand, tended to be neutral or positive in sentiment, highlighting the significance of sentiment polarity in identifying trustworthy information.

Expanding upon this sentiment-focused perspective, Cui et al. ( 2019 ) observed a significant disparity in the sentiment polarity of comments on fake news as opposed to real news. Their research emphasized the clear emotional undertones in user reactions to false material, highlighting the importance of elicited emotions in the context of fake news. Similarly, Dai et al. ( 2020 ) analyzed false health news and revealed a tendency for social media replies to real news to be marked by a more upbeat tone. These comparative findings highlight how elicited emotions play a complex role in influencing how people engage with real and fake news.

Our analysis revealed that the emotions conveyed in fake tweets during the COVID-19 pandemic are in line with the more general trends found in other studies on fake news. However, our research extends beyond that of current studies by offering detailed insights into the precise distribution and strength of emotions elicited by fake tweets. This detailed research closes a significant gap in the body of literature by adding a fresh perspective on our knowledge of emotional dynamics in the context of disseminating false information. Our research contributes significantly to the current discussion on fake news identification by highlighting these comparative aspects and illuminating both recurring themes and previously undiscovered aspects of emotional data in the age of misleading information.

The present analysis was performed on a COVID-19 Twitter dataset that does not cover the whole period of the pandemic; although our study represents a new effort in the field, a complementary study on a dataset covering a wider time interval might yield more generalizable findings. In this research, the elicited emotions of fake and real news were compared and the emotion with the highest score was assigned to each tweet; an alternative method would be to compare the emotion score intervals of fake and real news. The performance of the detection models could be further improved by using pretrained emotion models and adding further emotion features. In a future study, our hypothesis that "fake news and real news are different in terms of elicited emotions, and fake news elicits more negative emotions" could be examined in an experimental field study. Additionally, the premises and suppositions underlying this study could be tested in emergency scenarios beyond the COVID-19 context to broaden crisis readiness.

The field of fake news research is interdisciplinary, drawing on the expertise of scholars from various domains who can contribute significantly by formulating pertinent research questions. Psychologists and social scientists have the opportunity to delve into the motivations and objectives behind the creators of fake news. Scholars in management can offer strategic insights for organizations to deploy in countering the spread of fake news. Legislators are in a position to draft laws that effectively stem the flow of fake news across social media channels. In addition, the combined efforts of researchers from other academic backgrounds can make substantial additions to the existing literature on fake news.

The aim of this research was to propose novel attributes for current fake news identification techniques and to explore the emotional and sentiment distinctions between fake news and real news. This study was designed to tackle the subsequent research questions: 1. How do the sentiments associated with real news and fake news differ? 2. How do the emotions elicited by fake news differ from those elicited by real news? 3. What particular elicited emotions are most prevalent in fake news? 4. How could these elicited emotions be used to recognize fake news on social media? To answer these research questions, we thoroughly examined tweets related to COVID-19. We employed a comprehensive strategy, integrating lexicons such as Vader, TextBlob, and SentiWordNet together with machine learning models, including random forest, naïve Bayes, and SVM, as well as a deep learning model named BERT. We first performed sentiment analysis using the lexicons. Fake news elicited more negative sentiments, supporting the idea that disseminators use extreme negativity to attract attention. Real news elicited more positive sentiments, as expected from trustworthy news channels. For fake news, there was a greater prevalence of negative emotions, including fear, disgust, and anger, while for real news, there was a greater frequency of positive emotions, such as anticipation, joy, and surprise. The intensity of these emotions further differentiated fake and real news, with fear being the most dominant emotion in both categories. We applied machine learning models (random forest, naïve Bayes, SVM) and a deep learning model (BERT) to detect fake news using sentiment and emotion features. The models demonstrated improved accuracy when incorporating emotion features. Anticipation, trust, and fear emerged as significant differentiators between fake and real news, according to the random forest feature importance analysis.
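The lexicon step summarized above can be sketched as follows. The tiny word list here is purely illustrative (the study used established resources such as VADER, TextBlob, and SentiWordNet, which score far larger vocabularies and handle negation and intensifiers).

```python
# Illustrative lexicon-based sentiment scoring; the lexicon and its
# polarity values are made up for this sketch, not a real resource.
LEXICON = {"death": -3, "fear": -2, "hoax": -2,
           "safe": 2, "recover": 2, "hope": 3}

def sentiment_score(text):
    """Average lexicon polarity of the matching words; 0.0 if none match."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def label(text, threshold=0.0):
    """Map a score to a coarse polarity label."""
    s = sentiment_score(text)
    if s > threshold:
        return "positive"
    if s < threshold:
        return "negative"
    return "neutral"
```

Per-tweet scores of this kind, computed with one lexicon per sentiment dimension, are what feed the downstream classifiers as features.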

The findings of this research could lead to reliable resources for communicators, managers, marketers, psychologists, sociologists, and crisis and social media researchers to further explain social media behavior and contribute to the existing fake news detection approaches. The main contribution of this study is the introduction of emotions as a role-playing feature in fake news detection and the explanation of how specific elicited emotions differ between fake and real news. The elicited emotions extracted from social media during a crisis such as the COVID-19 pandemic could not only be an important variable for detecting fake news but also provide a general overview of the dominant emotions among individuals and the mental health of society during such a crisis. Investigating and extracting further features of fake news has the potential to improve the identification of fake news and may allow for the implementation of preventive measures. Furthermore, the suggested methodology could be applied to detecting fake news in fields such as politics, sports, and advertising. We expect to observe a similar impact of emotions on other topics as well.

Data availability

The datasets analyzed during the current study are available in the Zenodo repository: https://doi.org/10.5281/zenodo.10951346 .

Agarwal S, Farid H, El-Gaaly T, Lim S-N (2020) Detecting Deep-Fake Videos from Appearance and Behavior. 2020 IEEE International Workshop on Information Forensics and Security (WIFS), 1–6. https://doi.org/10.1109/WIFS49906.2020.9360904

Ainapure BS, Pise RN, Reddy P, Appasani B, Srinivasulu A, Khan MS, Bizon N (2023) Sentiment analysis of COVID-19 tweets using deep learning and lexicon-based approaches. Sustainability 15(3):2573. https://doi.org/10.3390/su15032573

Ajao O, Bhowmik D, Zargari S (2019) Sentiment Aware Fake News Detection on Online Social Networks. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2507–2511. https://doi.org/10.1109/ICASSP.2019.8683170

Al-Rawi A, Groshek J, Zhang L (2019) What the fake? Assessing the extent of networked political spamming and bots in the propagation of# fakenews on Twitter. Online Inf Rev 43(1):53–71. https://doi.org/10.1108/OIR-02-2018-0065

Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect 31(2):211–236. https://doi.org/10.1257/jep.31.2.211

Amer E, Kwak K-S, El-Sappagh S (2022) Context-based fake news detection model relying on deep learning models. Electronics (Basel) 11(8):1255. https://doi.org/10.3390/electronics11081255

Apuke OD, Omar B (2020) User motivation in fake news sharing during the COVID-19 pandemic: an application of the uses and gratification theory. Online Inf Rev 45(1):220–239. https://doi.org/10.1108/OIR-03-2020-0116

Baccarella CV, Wagner TF, Kietzmann JH, McCarthy IP (2018) Social media? It’s serious! Understanding the dark side of social media. Eur Manag J 36(4):431–438. https://doi.org/10.1016/j.emj.2018.07.002

Baccarella CV, Wagner TF, Kietzmann JH, McCarthy IP (2020) Averting the rise of the dark side of social media: the role of sensitization and regulation. Eur Manag J 38(1):3–6. https://doi.org/10.1016/j.emj.2019.12.011

Baumeister RF, Bratslavsky E, Finkenauer C, Vohs KD (2001) Bad is stronger than good. Rev Gen Psychol 5(4):323–370. https://doi.org/10.1037/1089-2680.5.4.323

Berthon PR, Pitt LF (2018) Brands, truthiness and post-fact: managing brands in a post-rational world. J Macromark 38(2):218–227. https://doi.org/10.1177/0276146718755869

Breiman L (2001) Random forests. Mach Learn 45:5–32. https://doi.org/10.1023/A:1010933404324

Carlson M (2020) Fake news as an informational moral panic: the symbolic deviancy of social media during the 2016 US presidential election. Inf Commun Soc 23(3):374–388. https://doi.org/10.1080/1369118X.2018.1505934

Chua AYK, Banerjee S (2018) Intentions to trust and share online health rumors: an experiment with medical professionals. Comput Hum Behav 87:1–9. https://doi.org/10.1016/j.chb.2018.05.021

Cinelli M, De Francisci Morales G, Galeazzi A, Quattrociocchi W, Starnini M (2021) The echo chamber effect on social media. Proc Natl Acad Sci USA 118(9). https://doi.org/10.1073/pnas.2023301118

Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297. https://doi.org/10.1007/BF00994018

Cui L, Wang S, Lee D (2019) SAME: sentiment-aware multi-modal embedding for detecting fake news. 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), 41–48. https://doi.org/10.1145/3341161.3342894

Dai E, Sun Y, Wang S (2020) Ginger cannot cure cancer: Battling fake health news with a comprehensive data repository. In Proceedings of the 14th International AAAI Conference on Web and Social Media, ICWSM 2020 (pp. 853–862). (Proceedings of the 14th International AAAI Conference on Web and Social Media, ICWSM 2020). AAAI press

de Regt A, Montecchi M, Lord Ferguson S (2020) A false image of health: how fake news and pseudo-facts spread in the health and beauty industry. J Product Brand Manag 29(2):168–179. https://doi.org/10.1108/JPBM-12-2018-2180

Devlin J, Chang M-W, Lee K, Toutanova K (2018) Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint. https://doi.org/10.48550/arXiv.1810.04805

Dey A, Rafi RZ, Parash SH, Arko SK, Chakrabarty A (2018) Fake news pattern recognition using linguistic analysis. Paper presented at the 2018 joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR), Kitakyushu, Japan. pp. 305–309

Er MF, Yılmaz YB (2023) Which emotions of social media users lead to dissemination of fake news: sentiment analysis towards Covid-19 vaccine. J Adv Res Nat Appl Sci 9(1):107–126. https://doi.org/10.28979/jarnas.1087772

Esuli A, Sebastiani F (2006) Sentiwordnet: A publicly available lexical resource for opinion mining. Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Farhoudinia B (2023) Analyzing effects of emotions on fake news detection: a COVID-19 case study. PhD Thesis, Sabanci Graduate Business School, Sabanci University

Farhoudinia B, Ozturkcan S, Kasap N (2023) Fake news in business and management literature: a systematic review of definitions, theories, methods and implications. Aslib J Inf Manag https://doi.org/10.1108/AJIM-09-2022-0418

Faustini PHA, Covões TF (2020) Fake news detection in multiple platforms and languages. Expert Syst Appl 158:113503. https://doi.org/10.1016/j.eswa.2020.113503

Frenkel S, Davey A, Zhong R (2020) Surge of virus misinformation stumps Facebook and Twitter. N Y Times (Online) https://www.nytimes.com/2020/03/08/technology/coronavirus-misinformation-social-media.html

Giglietto F, Iannelli L, Valeriani A, Rossi L (2019) ‘Fake news’ is the invention of a liar: how false information circulates within the hybrid news system. Curr Sociol 67(4):625–642. https://doi.org/10.1177/0011392119837536

Hamed SK, Ab Aziz MJ, Yaakub MR (2023) Fake news detection model on social media by leveraging sentiment analysis of news content and emotion analysis of users’ comments. Sensors (Basel, Switzerland) 23(4):1748. https://doi.org/10.3390/s23041748

Hutto C, Gilbert E (2014) VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media, 8(1), 216–225. https://doi.org/10.1609/icwsm.v8i1.14550

Iwendi C, Mohan S, Khan S, Ibeke E, Ahmadian A, Ciano T (2022) Covid-19 fake news sentiment analysis. Comput Electr Eng 101:107967. https://doi.org/10.1016/j.compeleceng.2022.107967

Jarrahi A, Safari L (2023) Evaluating the effectiveness of publishers’ features in fake news detection on social media. Multimed Tools Appl 82(2):2913–2939. https://doi.org/10.1007/s11042-022-12668-8

Kahneman D (2011) Thinking, fast and slow, 1st edn. Farrar, Straus and Giroux

Kaliyar RK, Goswami A, Narang P (2021) FakeBERT: fake news detection in social media with a BERT-based deep learning approach. Multimed Tools Appl 80(8):11765–11788. https://doi.org/10.1007/s11042-020-10183-2

Kim A, Dennis AR (2019) Says who? The effects of presentation format and source rating on fake news in social media. MIS Q 43(3):1025–1039. https://doi.org/10.25300/MISQ/2019/15188

Kumar A, Bezawada R, Rishika R, Janakiraman R, Kannan PK (2016) From social to sale: the effects of firm-generated content in social media on customer behavior. J Mark 80(1):7–25. https://doi.org/10.1509/jm.14.0249

Lewicka M, Czapinski J, Peeters G (1992) Positive-negative asymmetry or when the heart needs a reason. Eur J Soc Psychol 22(5):425–434. https://doi.org/10.1002/ejsp.2420220502

Loria S (2018) TextBlob documentation, Release 0.15.2. https://readthedocs.org/projects/textblob/downloads/pdf/latest/

Meel P, Vishwakarma DK (2020) Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl 153:112986. https://doi.org/10.1016/j.eswa.2019.112986

Mercer J (2010) Emotional beliefs. Int Organ 64(1):1–31. https://www.jstor.org/stable/40607979

Mohammad SM, Turney PD (2013) Crowdsourcing a word–emotion association lexicon. Comput Intell 29(3):436–465. https://doi.org/10.1111/j.1467-8640.2012.00460.x

Moravec PL, Kim A, Dennis AR (2020) Appealing to sense and sensibility: system 1 and system 2 interventions for fake news on social media. Inf Syst Res 31(3):987–1006. https://doi.org/10.1287/isre.2020.0927

Mourad A, Srour A, Harmanai H, Jenainati C, Arafeh M (2020) Critical impact of social networks infodemic on defeating coronavirus COVID-19 pandemic: Twitter-based study and research directions. IEEE Trans Netw Serv Manag 17(4):2145–2155. https://doi.org/10.1109/TNSM.2020.3031034

Ongsulee P (2017) Artificial intelligence, machine learning and deep learning. Paper presented at the 2017 15th International Conference on ICT and Knowledge Engineering (ICT&KE)

Ozbay FA, Alatas B (2020) Fake news detection within online social media using supervised artificial intelligence algorithms. Physica A 540:123174. https://doi.org/10.1016/j.physa.2019.123174

Patwa P, Sharma S, Pykl S, Guptha V, Kumari G, Akhtar MS, Ekbal A, Das A, Chakraborty T (2021) Fighting an Infodemic: COVID-19 fake news dataset. In: Combating online hostile posts in regional languages during emergency situation. Cham, Springer International Publishing

Păvăloaia V-D, Teodor E-M, Fotache D, Danileţ M (2019) Opinion mining on social media data: sentiment analysis of user preferences. Sustainability 11(16):4459. https://doi.org/10.3390/su11164459

Pawar KK, Shrishrimal PP, Deshmukh RR (2015) Twitter sentiment analysis: a review. Int J Sci Eng Res 6(4):957–964

Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG (2020) Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci 31(7):770–780. https://doi.org/10.1177/0956797620939054

Pennycook G, Rand DG (2020) Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J Personal 88(2):185–200. https://doi.org/10.1111/jopy.12476

Peterson M (2019) A high-speed world with fake news: brand managers take warning. J Product Brand Manag 29(2):234–245. https://doi.org/10.1108/JPBM-12-2018-2163

Plutchik R (1980) A general psychoevolutionary theory of emotion. In: Plutchik R, Kellerman H (eds) Theories of emotion (3–33): Elsevier. https://doi.org/10.1016/B978-0-12-558701-3.50007-7

Rajamma RK, Paswan A, Spears N (2019) User-generated content (UGC) misclassification and its effects. J Consum Mark 37(2):125–138. https://doi.org/10.1108/JCM-08-2018-2819

Raza S, Ding C (2022) Fake news detection based on news content and social contexts: a transformer-based approach. Int J Data Sci Anal 13(4):335–362. https://doi.org/10.1007/s41060-021-00302-z

Salakhutdinov R, Tenenbaum JB, Torralba A (2012) Learning with hierarchical-deep models. IEEE Trans Pattern Anal Mach Intell 35(8):1958–1971. https://doi.org/10.1109/TPAMI.2012.269

Silverman C (2016) This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook. BuzzFeed News 16. https://www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook

Suter V, Shahrezaye M, Meckel M (2022) COVID-19 Induced misinformation on YouTube: an analysis of user commentary. Front Political Sci 4:849763. https://doi.org/10.3389/fpos.2022.849763

Talwar S, Dhir A, Kaur P, Zafar N, Alrasheedy M (2019) Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. J Retail Consum Serv 51:72–82. https://doi.org/10.1016/j.jretconser.2019.05.026

Vallat R (2018) Pingouin: statistics in Python. J Open Source Softw 3(31):1026. https://doi.org/10.21105/joss.01026

Vasist PN, Sebastian M (2022) Tackling the infodemic during a pandemic: A comparative study on algorithms to deal with thematically heterogeneous fake news. Int J Inf Manag Data Insights 2(2):100133. https://doi.org/10.1016/j.jjimei.2022.100133

Vinodhini G, Chandrasekaran R (2012) Sentiment analysis and opinion mining: a survey. Int J Adv Res Comput Sci Softw Eng 2(6):282–292

Vosoughi S, Roy D, Aral S (2018) The spread of true and false news online. Science 359(6380):1146–1151. https://doi.org/10.1126/science.aap9559

Wang Y, McKee M, Torbica A, Stuckler D (2019) Systematic literature review on the spread of health-related misinformation on social media. Soc Sci Med 240:112552. https://doi.org/10.1016/j.socscimed.2019.112552

Wankhade M, Rao ACS, Kulkarni C (2022) A survey on sentiment analysis methods, applications, and challenges. Artif Intell Rev 55(7):5731–5780. https://doi.org/10.1007/s10462-022-10144-1

Whiting A, Williams D (2013) Why people use social media: a uses and gratifications approach. Qual Mark Res 16(4):362–369. https://doi.org/10.1108/QMR-06-2013-0041

Zhang H (2004) The optimality of naive Bayes. In: Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference (FLAIRS 2004). AAAI Press

Zhou X, Zafarani R (2019) Network-based fake news detection: A pattern-driven approach. ACM SIGKDD Explor Newsl 21(2):48–60. https://doi.org/10.1145/3373464.3373473

Zhou X, Zafarani R, Shu K, Liu H (2019) Fake news: Fundamental theories, detection strategies and challenges. Paper presented at the Proceedings of the twelfth ACM international conference on web search and data mining. https://doi.org/10.1145/3289600.3291382

Open access funding provided by Linnaeus University.

Author information

Authors and Affiliations

Sabancı Business School, Sabancı University, Istanbul, Turkey

Bahareh Farhoudinia, Selcen Ozturkcan & Nihat Kasap

School of Business and Economics, Linnaeus University, Växjö, Sweden

Selcen Ozturkcan

Contributions

Bahareh Farhoudinia (first author) conducted the research, retrieved the open access data collected by other researchers, conducted the analysis, and drafted the manuscript as part of her PhD thesis successfully completed at Sabancı University in the year 2023. Selcen Ozturkcan (second author and PhD co-advisor) provided extensive guidance throughout the research process, co-wrote sections of the manuscript, and offered critical feedback on the manuscript. Nihat Kasap (third author and PhD main advisor) oversaw the overall project and provided valuable feedback on the manuscript.

Corresponding author

Correspondence to Selcen Ozturkcan .

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Informed consent was not required as the study did not involve a design that requires consent.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Farhoudinia, B., Ozturkcan, S. & Kasap, N. Emotions unveiled: detecting COVID-19 fake news on social media. Humanit Soc Sci Commun 11, 640 (2024). https://doi.org/10.1057/s41599-024-03083-5

Received: 02 June 2023

Accepted: 22 April 2024

Published: 18 May 2024

DOI: https://doi.org/10.1057/s41599-024-03083-5


The Correlation Among COVID-19 Vaccine Acceptance, the Ability to Detect Fake News, and e-Health Literacy

  • PMID: 37463291
  • PMCID: PMC10351963
  • DOI: 10.3928/24748307-20230621-01

Background: The coronavirus disease 2019 (COVID-19) pandemic has seen a rise in the spread of misleading and deceptive information, leading to a negative impact on the acceptance of the COVID-19 vaccine and public opinion. To address this issue, the importance of public e-Health literacy cannot be overstated. It empowers individuals to effectively utilize information technology and combat the dissemination of inaccurate narratives.

Objective: This study aimed to investigate the relationship between the ability to identify disingenuous news, electronic health literacy, and the inclination to receive the COVID-19 immunization.

Methods: In this descriptive-analytical cross-sectional study conducted during summer 2021 in Isfahan, Iran, 522 individuals older than age 18 years, seeking medical attention at health centers, were surveyed. The participants were selected through a meticulous multistage cluster sampling process from the pool of individuals referred to these health centers. Along with demographic information, data collection instruments included the standard e-Health literacy questionnaire and a researcher-developed questionnaire designed to identify misinformation. The collected questionnaires were entered into SPSS 24 for statistical analysis, which included the Kruskal-Wallis test, the Chi-square test, the Spearman test, and logistic regression models.

Key results: The study findings revealed a statistically significant relationship between acceptance of the COVID-19 vaccine and the ability to identify deceptive news. An increase of one unit in the score for recognizing misinformation led to a 24% and 32% reduction in vaccine hesitancy and the intention to remain unvaccinated, respectively. Furthermore, a significant correlation was found between the intention to receive the vaccine and e-Health literacy, where an increase of one unit in e-Health literacy score corresponded to a 6% decrease in the intention to remain unvaccinated. Additionally, the study found a notable association between the ability to detect false and misleading information and e-Health literacy. Each additional point in e-Health literacy was associated with a 0.33% increase in the capacity to identify fake news (Spearman's R ho = 0.333, p < .001).

Conclusion: The study outcomes demonstrate a positive correlation between the COVID-19 vaccine acceptance, the ability to identify counterfeit news, and proficiency in electronic health literacy. These findings provide a strong foundation for policymakers and health care practitioners to develop and implement strategies that counter the dissemination of spurious and deceitful information related to COVID-19 and COVID-19 immunization. [ HLRP: Health Literacy Research and Practice . 2023;7(3):e130-e138. ].


Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions

  • Original Research
  • Open access
  • Published: 01 November 2022
  • Volume 327, pages 633–657 (2023)


  • Pervaiz Akhtar (ORCID: orcid.org/0000-0002-7896-4438),
  • Arsalan Mujahid Ghouri,
  • Haseeb Ur Rehman Khan,
  • Mirza Amin ul Haq,
  • Usama Awan,
  • Nadia Zahoor,
  • Zaheer Khan &
  • Aniqa Ashraf


Fake news and disinformation (FNaD) are increasingly circulated through online and social networking platforms, causing widespread disruptions and influencing decision-making perceptions. Despite the growing importance of detecting fake news in politics, relatively limited research effort has been made to develop artificial intelligence (AI) and machine learning (ML) oriented FNaD detection models suited to minimizing supply chain disruptions (SCDs). Using a combination of AI and ML, and case studies based on data collected from Indonesia, Malaysia, and Pakistan, we developed an FNaD detection model aimed at preventing SCDs. This model, based on multiple data sources, has shown evidence of its effectiveness in managerial decision-making. Our study further contributes to the supply chain and AI-ML literature, provides practical insights, and points to future research directions.


1 Introduction

Increased scholarly focus has been directed to fake news detection, given its widespread impact on supply chain disruptions, as was the case with the COVID-19 vaccine. Fake news and misinformation are highly disruptive, creating uncertainty and disruption not only in society but also in business operations. Fake news and disinformation-related problems have been exacerbated by the rise of social media sites. In this context, using artificial intelligence (AI) to counteract the spread of false information is vital in acting against these disruptive effects (Gupta et al., 2021). It has been observed that fake news and disinformation (FNaD) harm supply chains and make their operation unsustainable (Churchill, 2018). According to research, fake news spans the two distinct concepts of misinformation and disinformation (Petratos, 2021). Allcott and Gentzkow (2017) defined fake news as “news articles that are intentionally and verifiably false, and could mislead readers” (p. 213). According to Wardle (2017), misinformation refers to “the inadvertent sharing of false information”, while disinformation can be defined as “the deliberate creation and sharing of information known to be false”. Among the negative consequences that fake news can have for companies are loss of sponsorships, reduced credibility, and loss of reputation, all of which can adversely affect performance (Di Domenico et al., 2021). In such a context, AI is shaping decision-making in an increasing range of sectors and could be used to improve the effectiveness of timely fake news detection and identification (Gupta et al., 2021). Whereas many new efforts to develop AI-based fake news detection systems have concentrated on the political process, the consequences of FNaD for supply chain operations have been relatively underexplored (Gupta et al., 2021).

Kaplan and Haenlein (2019) defined AI “as a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (p. 17). Although emerging technologies such as AI may sometimes have negative effects, they can be utilized to combat disinformation. As scholarship shows increasing interest in how AI can improve operational and supply chain efficiencies (Brock & von Wangenheim, 2019), researchers have recently called for more studies on how organizational strengths and the use of AI influence the outcomes of decision-making structures (Shrestha et al., 2019). Fake news has considerable negative effects on firms’ operations, such as repeated disruptions of supply chains (Churchill, 2018), and FNaD influence the use of a company’s products or services (Zhang et al., 2019). Sohrabpour et al. (2021) argued that leveraging AI to improve supply chain operations will likely improve firms’ planning, strategy, marketing, logistics, warehousing, and resource management in the presence of any environmental uncertainty, including that caused by FNaD.

Scholars have called for research to attain an in-depth understanding of AI and of how to tailor it to enhance business efficiencies and minimize supply chain disruptions (SCDs) (e.g., Grewal et al., 2021 ; Churchill, 2018 ). The extant literature has drawn mixed conclusions on whether AI-driven or hybrid AI decision-making benefits a firm’s supply chain (Shrestha et al., 2019 ). The question of why some firms are more effective than others in using AI to manage SCDs has largely been overlooked (Toorajipour et al., 2021 ). Increased research efforts are being made to identify and manage fake news risk in supply chain operations (Reisach, 2021 ). In today’s digital media landscape, the term ‘fake news’ has gained relevance following the 2016 US presidential elections (Allcott & Gentzkow, 2017 ). People have been observed to be unable to clearly distinguish between fake and real news and to tend to perceive ‘fake news’ as a more significant issue within the current information landscape (Tong et al., 2020 ). Therefore, decision-makers are often influenced by FNaD, thus ending up making erroneous decisions and drawing inaccurate conclusions regarding current scenarios (e.g., Lewandowsky et al., 2012 ; Di Domenico et al., 2021 ). From a supply chain perspective, researchers have highlighted how FNaD can lead to SCDs (e.g., Gupta et al., 2021 ; Kovács & Sigala, 2021 ; Sodhi & Tang, 2021 ), which can have a far-reaching impact on the functioning of global supply chains.

Additionally, the United Nations (2020) has suggested that, despite the measures put in place to build confidence among people, businesses, and supply chain operations, SCDs have remained a problematic area for businesses in recent years. Resilinc (2021) revealed that SCDs have been increasing by 67% year-over-year, with 83% of such disruptive events being caused by human activity rather than natural disasters. EverStream Analytics (2020) found that 40.5% of businesses get their information and intelligence on supply chain issues from their customers, and 33.4% from social media. The detection of fraudulent information is thus critical to avoid such consequences (Kim & Ko, 2021), and businesses need to set up specific processes or routines to filter incoming business-related information and mitigate any possible harm to their operations (Kim & Ko, 2021). Kim and Dennis (2019) emphasized research on emerging technologies, such as AI, suited to tackling FNaD. As FNaD have become increasingly relevant in the field of operations management, and given their effects on decision-making, there is a need to understand what business processes must be implemented to contain their spread and minimize SCDs.

However, there is still a limited understanding of how AI techniques can help in eliminating FNaD. We therefore sought to define an AI-oriented business process suited to removing the effects of FNaD on decision-making, and set our research question as: “How can firms integrate AI in their operations to reduce the impact of FNaD regarding SCDs?” In answering this question, our study makes three contributions to the literature. First, it develops a new theoretical framework suited to mitigating the impacts of FNaD on SCDs and analyses the relationship using a specific dataset and a support-vector machine. The resulting business process manages the dissemination of information, accurately mitigating FNaD and enabling correct decision-making when tackling complex issues (e.g., Jayawickrama et al., 2019). Second, by presenting key findings gleaned from interviews with senior managers with supply chain expertise from three different countries (Indonesia, Malaysia, and Pakistan), our study provides new theoretical evidence regarding how firms can avoid SCDs in emerging economies. To the best of our knowledge, our study is the first to focus on the implications and integration of AI in business processes for mitigating the effects of FNaD on SCDs. Our framework thus links the supply chain and AI literature and explicates their utility in mitigating SCDs against the backdrop of fake news and disinformation campaigns. We adopted a qualitative method that involved integrating the AI literature with research on fake news to reveal how the effectiveness of decision-making can be ensured within supply chain operations. Much previous research has advanced our understanding of fake news detection mechanisms using graphs and summarization techniques (Kim & Ko, 2021), and a recent study has proposed an AI-based, real-time fake news detection system grounded in a systematic literature review (Gupta et al., 2021).
Third, our study fills a gap in the literature by providing a practical solution aimed at eliminating or reducing FNaD in business scenarios, specifically acting to minimize SCDs. The extant literature is somewhat scattered and fragmented, which has not helped researchers to address many questions about FNaD (Di Domenico & Visentin, 2020). Our study proposes an AI-oriented business process that flags, reduces, or eliminates FNaD before it can reach decision-makers and allows authentic news and information to filter through, supporting supply chain resilience and preventing SCDs.
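The gatekeeping idea, an automated filter that screens incoming news before it reaches decision-makers, can be illustrated with a minimal sketch (the `keyword_stub` detector below is a hypothetical placeholder for the trained AI/ML model, invented purely for demonstration):

```python
from typing import Callable, Iterable, List

def filter_news(items: Iterable[str],
                is_fake: Callable[[str], bool]) -> List[str]:
    """Flag and drop suspected FNaD; pass only authentic items onward."""
    return [item for item in items if not is_fake(item)]

# Hypothetical stand-in for the trained detection model.
def keyword_stub(item: str) -> bool:
    return "miracle cure" in item.lower()

feed = ["Port strike delays shipments", "Miracle cure ends pandemic"]
print(filter_news(feed, keyword_stub))  # ['Port strike delays shipments']
```

In the proposed business process, the `is_fake` callable would be the AI/ML classifier rather than a keyword rule, but the filtering step it plugs into is the same.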

This paper is structured as follows. Section 2 presents a discussion of the related literature, which is followed by an illustration of our research methodology in Sect. 3. In Sect. 4, the implementation details, findings, and proposed model are provided. In Sect. 5, the implications of our model are discussed and, to conclude, future research directions are suggested.

2 Literature review

2.1 Theoretical background

Organizational Information Processing Theory (OIPT) offers a systematic account of how organizations process and exchange information to increase their capacities. OIPT reasons that firms need a stabilizing mechanism, in the form of resources and capacities in operations, to cope with uncertainties and manage unforeseen events that disturb normal business and supply chain operations (Wong et al., 2020). Scholarship suggests that SCDs can be caused by disinformation (e.g., Konstantakis et al., 2022; Xu et al., 2020). It is therefore ultimately inevitable that supply chains cultivate the capability and capacity to proactively filter information and news in order to improve supply chain operations. Firms can either rely on mechanistic organizational resources to reduce their reliance on information or enhance their information processing capabilities. The more environmental uncertainty firms face, the more information they need to gather and process to achieve better performance (Bode et al., 2011). OIPT proposes that the primary goal of organizational process design is to address uncertainty by acquiring, analyzing, and sharing information from the business environment (Swink & Schoenherr, 2015; Yu et al., 2019), and it addresses the development of organizational capabilities to meet information processing requirements (Wamba et al., 2020). SCDs can be avoided by filtering incoming information so that only accurate and timely information is received. Di Domenico et al. (2021) suggested that FNaD during disruptions, such as those affecting supply chains, may cause preventable loss of life and misguide business activities and innovation. Fact-checking measures such as “know why”, “know how”, “know what”, and “know when” can be supported by emerging technologies and information processing capabilities (Jayawickrama et al., 2016; Swanson & Wang, 2005).
From this perspective, AI and machine learning (ML) could manage the dissemination of real information by accurately detecting and mitigating false information and enabling correct decisions when tackling difficult issues (Endsley, 2018; Jayawickrama et al., 2019; Roozenbeek & van der Linden, 2019). OIPT thus focuses on linking uncertainty with information needs and information processing capacities, and prescribes organizational designs to reduce uncertainty. Our study accordingly seeks to provide a holistic theoretical framework, integrated with AI and ML and built on OIPT, to minimize the chances of SCDs.

2.2 Artificial intelligence and supply chain operations

In academia, the concept of AI was first established in the 1950s (Haenlein & Kaplan, 2019). However, McCulloch and Pitts’ (1943) ideas on logical expression represent a notable earlier landmark, as they led to the development of a neurocomputer design (Milner, 2003). While the exact year is debatable, the origins of AI can thus be dated to the 1940s; notably, to Isaac Asimov’s 1942 short story ‘Runaround’, published in ‘Science Fiction’ magazine, in which Asimov formalized his three laws of robotics: first, a robot cannot harm a human being; second, a robot must follow human commands; and third, a robot must defend itself (Haenlein & Kaplan, 2019). In 1955, a Dartmouth College research project on AI (McCarthy et al., 1955) defined it as “making a machine behave in ways that would be called intelligent if a human were so behaving” (p. 11). Since 1955, AI has evoked the idea of artificial machines that could simulate the human brain, draw on something akin to human intuition, and form abstractions of their environment to work on difficult problems. In 1966, Joseph Weizenbaum created the famous ELIZA computer program, a ‘natural language processing’ (NLP) tool that was capable of holding a conversation with a human being and maintaining the illusion of comprehension; this was labelled heuristic programming and AI (Weizenbaum, 1966). In the 1980s, research on backpropagation in neural networks saw rapid development (Zhang & Lu, 2021). Under Ernst Dickmanns, Mercedes-Benz developed a driverless vehicle fitted with cameras, sensors, and an onboard computer system controlling the steering (Delcker, 2018). With the continuous development of AI tools, the success of IBM’s ‘Deep Blue’ chess-playing supercomputer laid the foundations for research on and the application of expert systems (Haenlein & Kaplan, 2019).

AI is viewed as a game-changer able to facilitate both “abilities to self learn and a race to improve decision quality” (Vincent, 2021, p. 425). Kaplan and Haenlein (2019) defined AI “as a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (p. 17). In supply chain management and the manufacturing industry, there has been an upsurge in AI (Kumar et al., 2019) that has significantly impacted operations and human roles in firms (Vincent, 2021). Awan et al. (2021) suggested that AI initiatives in firm supply chain operations can improve knowledge of the processes used to generate business performance. AI is a complex and multifaceted construct with profound implications for firm operations management (Zeba et al., 2021). The supply chain literature has recently emphasized the link between the application of AI and process improvement (Toorajipour et al., 2021). Although several AI-based supply chain applications have appeared in recent years, little research has explored their use (Riahi et al., 2021). While the debate on the operational outcomes of AI is still ongoing, there is little evidence in the operations management literature of how the adoption of AI may improve supply chain operations (Raisch & Krakowski, 2020).

Recent advancements in material and production technologies hold great possibilities for a better understanding of how to improve other manufacturing and supply chain operations (Grewal et al., 2021). AI-based models provide near-optimal solutions to a wide variety of routing challenges, ensuring on-time deliveries and optimizing warehouse transport (Riahi et al., 2021). However, little attention has been devoted to how the use of AI techniques may affect reverse auctioning involving supply chain partners, or planning for vehicle routing and volume discount acquisition (Toorajipour et al., 2021). By affecting decision-making and increasing effective knowledge creation aimed at developing products customized for specific situations, AI technologies may have significant implications for a firm’s production capabilities (Awan et al., 2021). As a creative and frequently disruptive technology, AI facilitates the design of new products, services, industrial processes, and organizational structures that meet client needs; product, service, manufacturing, or organizational processes can all be designed using AI (Wamba-Taguimdje et al., 2020). For B2B companies, customer understanding is critical to improving products or services (Paschen et al., 2019). The integration of AI with the industrial Internet of Things holds significant potential for solving production-process problems and making better-informed decisions (Zeba et al., 2021). Early adopters of AI have created new and improved goods, which has enabled them to outperform the competition (Behl et al., 2021). By analyzing market intelligence, AI can uncover themes and patterns in data and may provide insights into how users creatively alter products and services (Paschen et al., 2019). A growing number of scholars are examining how to maximize the influence of AI on supply chain risk management and monitoring systems to avoid SCDs (Toorajipour et al., 2021).
However, little is known about its role in shaping, monitoring, and controlling supply chain operations (Pournader et al., 2021). Although research has found that AI is used to improve supply chain performance, only a few AI approaches and algorithms have been explored and used in supply chain processes (Riahi et al., 2021).

AI is linked to analytical, self-learning, and predictive machine learning approaches (Shrestha et al., 2019). These methods offer a variety of answers and prescriptive inputs to choose from when deciding how to proceed in complicated scenarios (Belhadi et al., 2021). Even though researchers have examined the use of AI in many fields of study, very few studies have looked at how AI can be used to enhance supply chain operations. However, the importance of AI in predicting and mitigating supply chain risk is well established in the literature (Riahi et al., 2021). AI can accurately and rapidly detect relevant supply chain information by using analytics produced through AI techniques and models, giving managers a greater understanding of how each system operates and helping them discover areas in which they can improve operations. The development of AI has made it possible to deploy predictive algorithms that allow for faster evaluations and more effective risk minimization across supply chains (Ni et al., 2020). The extant literature on AI argues that applying different machine learning approaches with AI can substantially decrease SCDs (Riahi et al., 2021). AI and ML enhance operations in many domains, including supply chain management, logistics, and inventory management (Belhadi et al., 2021). Ni et al. (2020) showed that supply chain managers can use AI to watch for and avoid incidents interrupting supply chain operations, from the most prevalent occurrences to unknown factors such as delivery delays and quality defects (Belhadi et al., 2021).

AI provides opportunities and promise for moving toward data-driven decision support systems. Despite the integration of AI in many firm processes, there are still challenges in designing a firm supply chain that depends heavily on human contributions (Kumar et al., 2019). It has been established in the operations management (OM) literature that AI has a positive impact on various supply chain management activities (Dubey et al., 2021), yet that literature rarely addresses how AI is applied in the OM field, such as in manufacturing, production, warehousing and logistics, and robot dynamics (Toorajipour et al., 2021). Even though the supply chain literature has acknowledged many AI applications, including production forecasting, supplier selection, material consumption forecasting, and customer segmentation (Toorajipour et al., 2021), the AI literature typically revolves around understanding effective ways to combine human intuition and decision-making (Vincent, 2021). The use of AI technologies gives marketers a competitive edge reflected in marketing tactics and customer behaviors (Jabbar et al., 2020). Customer order processing can be automated with AI, and chatbots can handle any follow-up chores (Paschen et al., 2019), which can increase supply chain effectiveness. It is possible to take proactive measures against supply chain risks by uncovering new trends in the data; this is expected to support adaptability and higher levels of supply chain maturity (Riahi et al., 2021). Multiple courses of action are open to firms weighing the risks of investing in AI against its positive impacts on supply chain activities. At the same time, the proliferation of evolving AI technology has led to premature and conflicting conclusions regarding specific outcomes.

Scholars increasingly recognize the importance of AI in lowering downtime costs, better utilizing real-time data, improving scheduling, and protecting firm operations from risks (Chen et al., 2021). Additionally, Chen et al. (2021) suggested a predictive maintenance framework for the management of assets under pandemic conditions, incorporating new technologies such as AI for pandemic preparedness and the avoidance of business disruptions. The implementation of AI-based systems influences supply chain inventory management, “for instance performance analysis, resilience analysis or demand forecasting” (Riahi et al., 2021, p. 13). This raises the question of whether the use of AI systems to determine short-order policies and mitigate any bullwhip effects has been adequately addressed in the literature (Preil & Krapp, 2021). A review found that the adoption of AI in supply chains improves performance, lowers costs, minimizes losses, and makes such chains more flexible, agile, and robust (Riahi et al., 2021). Recent advances in AI help supply chain firms enhance their analytics capabilities, leading to improved operational performance (Dubey et al., 2021). AI-enabled supply chain performance is becoming increasingly important for enhancing financial performance; yet no studies have hitherto been conducted to gain a better understanding of the critical antecedents of AI in driving supply chain analytics (Dubey et al., 2021). The direct and positive impact of AI-based relational decision-making on firm performance has been established (e.g., Bag et al., 2021; Behl et al., 2021).

2.3 Fake news, disinformation, and supply chain disruptions

Fake news (Oxford English Dictionary, 2021a) is defined as “false reports of events, written and read on websites”, while disinformation (Oxford English Dictionary, 2021b) is construed as “false information that is given deliberately”. The impact of FNaD is substantial, disrupting economic operations and societal activities. FNaD also threaten brand names and potentially affect the consumption of products and services, ultimately impacting supply chain operations and demand (Zhang et al., 2019; Petratos, 2021), which are affected by panic-driven or otherwise bad decisions based on disinformation (e.g., Ahmad et al., 2019; Matheus et al., 2020; Zheng et al., 2021). An SCD is defined as a disturbance in the flows of material, financial, and information resources between firms and their major stakeholders, such as suppliers, manufacturers, distributors, retailers, and customers; a disruption may affect supply chain operations for random periods (Mehrotra & Schmidt, 2021). Supply chains encompass the activities needed for firms to deliver products and services to their final consumers, and accurate information is an integral part of such chains, as it enables decision-makers to make decisions on future demand, supply, cash flows, and returns, among other supply chain operations. There are historical examples of how FNaD can affect supply chain and business operations. From the 1950s to 1990, the tobacco industry constantly spread disinformation on the adverse effects of active and secondhand smoke exposure by manipulating research, data, and the media (Bero, 2005; Dearlove et al., 2002). In September 2006, the Royal Society, Britain’s premier scientific academy, wrote to ExxonMobil urging it to stop funding the dozens of groups spreading disinformation on global warming and claiming that the global temperature rise was not related to increases in atmospheric carbon dioxide levels (Adam, 2006).
In 2013, the Associated Press official Twitter account was hacked and a tweet was posted about two explosions injuring President Barack Obama; within hours, this wiped US$130 billion from the stock market (Parsons, 2020; Tandoc et al., 2018), which in turn affected supply chain operations. In 2017, six UK Indian restaurants fell victim to fake news stories claiming that they were serving human flesh (Barns, 2017; Mccallum, 2017). One restaurant had to cut staff hours and saw its revenue fall by half (National Crime Agency, 2018). Such events can also have indirect effects on supply chain operations, ultimately being conducive to SCDs.

The persuasive power of fake news can continuously damage global supply chains, such as those for meat, vegetables, fresh food, and fruit, in different parts of the world (Xu et al., 2020). Businesses across the board have felt the rapid dissemination of false information and propaganda among suppliers and distributors. Many countries, such as France, Germany, India, and the US, imposed restrictions on products entering and leaving their borders due to disinformation and fake news about COVID-19 (Xu et al., 2020). Moreover, several fake social media posts spreading misinformation about fires at U.S. food plants caused food supply disruption; the USDA told Reuters via email that it is not true that these fires were started on purpose (Reuters, 2022). The widespread false information about COVID-19 also led to an epidemic of methanol poisoning: it is claimed that 796 Iranians lost their lives to alcohol intoxication after reading online claims that alcohol might treat their illness (Mahdavi et al., 2021). This echoed the rapid dissemination of false information regarding COVID-19 on social media at the outbreak’s onset, which disrupted the supply of many food items (Mahdavi et al., 2021). Some people falsely claimed that rinsing the mouth with alcohol prevents COVID-19 infection (Delirrad & Mohammadi, 2020; Soltaninejad, 2020). Global supply chains have been shaken by the widespread circulation of fake news, causing widespread disruptions and affecting firms’ reputations. The effects of fake news related to COVID-19 are still being felt by supply chains across many sectors, and they have irrevocably affected long-term supply chain strategies.

In more recent times, the COVID-19 pandemic has spawned high volumes of FNaD. One example of disinformation pertaining to a COVID-19 remedy involved a herb named ‘Senna Makki’ in Pakistan: someone started sharing it on social media as a cure for the virus, which caused escalating demand and an increase in price from USD 1.71 to USD 8.57–11.43 per kg within two months (The News, 2020). This kind of fake news could also affect vaccine supply chains. In the stock market, Clarke et al. (2020) revealed that a well-known crowd-sourced content service for financial market websites had been publishing fake news stories due to the editors’ inability to detect them; this use of fake news had widespread short-term implications for the financial markets. The Kroll global fraud and risk report of 2019/20 recounted an incident of fake news in the banking industry: when a rival institution purchased an African bank, the purchaser was confronted with a negative social media campaign, fabricated news and stories, and manipulated closed-circuit television footage (Booth et al., 2019). These examples demonstrate how FNaD can disrupt supply chains and business operations.

Our review of past studies yielded Table 1, which summarizes key studies interlinking AI, SCDs, and FNaD. These studies were selected from top-ranked journals (e.g., CABS 3 ranked and above) published in the last three years. The lack of research spanning all three aspects is clear, with no studies emphasizing their combination. Our study significantly contributes to bridging this gap.

3 Methodology

We used a mixed-methods design: an AI- and ML-driven method for data analysis, complemented by case study interviews to further validate our model. The AI- and ML-driven method followed this procedure: (1) dataset enrichment, based on two techniques, the Porter stemmer (PS) and Term Frequency-Inverse Document Frequency (TF-IDF); (2) query expansion, used in natural language processing (NLP) to precisely predict the accuracy of fake news; and (3) a Support Vector Machine (SVM) classifier, used to train the model and then evaluate fake and real news outcomes for effective decision-making on SCDs. Table 2 provides further justification for this approach. Prior studies also used measures such as precision, recall, and accuracy, and we integrated similar measures into our analysis.
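
The three-stage procedure above can be summarized as a simple pipeline. The sketch below is a hypothetical Python illustration of how the stages compose; the stage callables (`stem`, `tfidf_vec`, `expand_query`, `svm_predict`) are assumed stand-ins, not the authors' actual implementation.

```python
def detect_fnad(article, stem, tfidf_vec, expand_query, svm_predict):
    """Hypothetical glue code for the three-stage FNaD pipeline.

    Each callable is a stand-in for the corresponding stage in the
    paper: (1) Porter-stemmer cleanup and TF-IDF enrichment,
    (2) query expansion, (3) SVM classification.
    """
    tokens = stem(article)             # stage 1a: stem and clean the text
    features = tfidf_vec(tokens)       # stage 1b: TF-IDF feature weights
    features = expand_query(features)  # stage 2: relevancy via query expansion
    label = svm_predict(features)      # stage 3: +1 = real, -1 = fake
    return "real" if label == 1 else "fake"
```

Keeping the stages as separate callables mirrors the paper's modular procedure: each stage can be swapped (e.g., a different stemmer or classifier) without touching the rest.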

Second, the cases involved in the interviews were from Indonesia, Malaysia, and Pakistan. These emerging countries share similarities, including common economic and political ties and government-to-government contacts, and all are making strides toward digitization and AI. Numerous previous studies (e.g., Atkin et al., 2017; Ghazali et al., 2018; Rahi et al., 2019; Siew et al., 2020) also drew their target populations from these countries to study comparable issues. The case study method was best suited to achieve the objectives of our study, considering the explanatory nature of the research question (Eisenhardt & Graebner, 2007; Yin, 2014) and the fact that this is an emergent research area to be combined with modern methodological innovations such as ML (Gupta et al., 2021; Kovács & Sigala, 2021; Sodhi & Tang, 2021; Sheng et al., 2020). Further, multiple case studies are considered more reliable because they enable phenomena to be observed and studied in many contexts, thereby providing replication logic for cases that would otherwise be viewed as independent (Yin, 2014), and because they are useful for theory development (Eisenhardt, 1989).

A goal-directed sampling technique, i.e., an incremental selection method (Denzin & Lincoln, 2005), was employed to investigate the process of flagging, reducing, and eliminating FNaD. This technique is effective for collecting both qualitative and quantitative data because the sample is purposely chosen based on the project's unique requirements and the evaluator's judgment (Polit & Beck, 2012; Vos et al., 2011). Data acquired with this technique tend to be of a high standard only if the participants are willing and able to provide accurate information that enables the researcher to gain thorough knowledge of an experience. We adopted this sampling technique because our target key management functionaries were the participants best suited to furnish the essential data for our study (Creswell, 2014), providing useful insight into what works and what does not in terms of its theoretical and technical components. Any case variations were scrutinized by industry to better understand FNaD impacts.

To conduct the interviews, we approached 33 firms: 36.36% Malaysian, 36.36% Pakistani, and 27.28% Indonesian. Eventually, a total of 16 firm representatives participated in our study: six each from Malaysia and Pakistan, and four from Indonesia. According to Teddlie & Yu (2007), this sample size was sufficient to produce narrative records adequate to provide viewpoints directly relevant to the topic under study. The sample firms were small and medium-sized, with staff numbers ranging from 18 to 183. Small businesses have complex, dynamic learning systems geared to generate the efficiency needed to sustain them in the market (Zhang et al., 2006); therefore, an in-depth investigation was needed to understand the stances of small and medium-sized firms on FNaD. Fifteen semi-structured in-depth interviews were prepared, based on the interviewees' understanding of their respective firms and of information inflow and outflow features such as sources, routines, systems, and processes. The interviewees were owners, chief executive officers, directors, and associated top managers. The participants' varied perspectives reduced dependency on any single participant's perspective, enriching the data obtained. We also used project reports, operational policies, and other relevant documentation to identify and triangulate themes during the data analysis (Yin, 2014). Due to the COVID-19 pandemic, 90% of the interviews were conducted over the internet (e.g., Google Meet), and any observations were recorded and reviewed later.

We followed the interview protocol to integrate the philosophy, processes, and questions of the study and attain reliability (Frost et al., 2020; Ponterotto, 2005). To delve deep into the impacts of fake news and any remedial actions, we used relevant prompts in open-ended questions. The extensive conversations provided key findings regarding the solutions adopted to counter the effects of fake news on business and supply chain operations by reflecting on three industry and technology developments, gathering in-depth and valuable narratives in the process. All interviews were audio-recorded, transcribed into MS Word, and thematically analyzed with NVivo 12. To ensure validity and reliability, each researcher independently coded the responses to the open-ended questions to fully grasp any concepts not readily provided by existing theories or field research. The answers were discreetly coded to capture any new sentiments, knowledge, and opinions that may not be available in the literature on the selected industries and countries. This practice informed an AI-based solution to fake news issues in business. Furthermore, other consistency checks were carried out: the data and preliminary interpretations were presented to the interviewees from whom they had been sourced to determine their credibility and incorporate any necessary changes, and the scripts were then finalized following their approval (Merriam, 2009).

4 Implementation procedures, findings, and proposed model

4.1 AI and ML implementation

4.1.1 Obtaining the dataset

The dataset for the AI and ML approach was drawn from four major Pakistani online news sources: ‘Geo News’, ‘The Dawn’, ‘Express Tribune’, and ‘The News’. Approximately 500 pages from each source were scrutinized to extract the relevant affairs and topics from January to April 2021. SCD data were divided into natural, human-caused, maritime, and mass disruptions related to FNaD. Table 3 provides some examples.

The words were used in different contexts. For example, words such as health, vaccine, Covid-19, and pandemic were mostly used in the main text of articles related to health supply chains and their disruptions, while words like political, freedom, rights, democracy, and military appeared in political articles. We followed a step-by-step procedure for the implementation.

4.1.2 Pre-processing dataset

• PS (Porter Stemmer) was used to index each news page/article and filter out stop words and repeated or common words to avoid noise in the dataset. The algorithm was run over several rounds to remove non-relevant words from the datasets/textual scripts before applying all criteria or defined rules (Zhang et al., 2020). This algorithm has proven to be one of the best-performing stemming techniques (Joshi et al., 2016).

• TF-IDF (Term Frequency - Inverse Document Frequency) was utilized for the classical ML models. TF-IDF is a text classification technique used to organize textual documents from raw datasets into predefined categories to obtain useful information. This is done by representing textual documents as feature vectors of weights that indicate the contribution of each term to the classification (Deng et al., 2004; Dogan & Uysal, 2019). The effectiveness of TF-IDF in the weighting process has also been proven to be significant (Dogan & Uysal, 2019).
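
As a minimal, self-contained sketch of these two pre-processing steps, the following Python fragment combines a simplified suffix-stripping stemmer (a crude stand-in for the full Porter algorithm) with a plain TF-IDF weighting; it is an illustration under our own simplifying assumptions, not the authors' code.

```python
import math
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "and", "of", "in", "to"}  # tiny sample list

def simple_stem(word):
    # crude suffix stripping; a stand-in for the full Porter stemmer
    for suffix in ("ing", "edly", "ed", "ly", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def preprocess(text):
    # tokenize, drop stop words, stem: the dataset-enrichment cleanup step
    tokens = re.findall(r"[a-z]+", text.lower())
    return [simple_stem(t) for t in tokens if t not in STOP_WORDS]

def tfidf(docs):
    # docs: list of token lists; returns one {term: weight} dict per document
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency per term
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

Note that a term occurring in every document gets weight 0 (since log(n/n) = 0), which is what pushes uninformative words out of the feature vectors.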

4.1.3 Dimension reduction and features engineering

Query Expansion

In natural language processing (NLP) and information retrieval, metrics of text semantic similarity are among the most widely used techniques (Zhu et al., 2018, 2020; Gao et al., 2015). The query expansion approach was utilized to precisely compute the relevancy of keywords related to FNaD by calculating the semantic distance between keywords related to SCDs. In this phase, the keywords used in SCDs trained the classifier to predict the appropriateness of SCDs in the news: as a feature, we analyzed the significance of each SCD keyword to a news category. The degree of appropriateness was based on the feature's relevancy score, ranging from +1 (perfectly appropriate) to -1 (utterly inappropriate), and each keyword pair of news category and SCD was considered appropriate if its relevancy score reached +0.65. The fake or real outputs were decided based on these scores. We trained various frequently used classifiers and report SVM results, as SVM outperformed the others.

Specifically, the semantic distance was calculated between keywords related to SCDs, queries, and news articles. For instance, for a news article in the category “Business”, we evaluate whether it is appropriate for an SCD, using example keywords such as “natural disasters”, “man-made disasters”, “marine incidents”, and “mass trauma incidents”. The degree of appropriateness of the news category with each SCD keyword is based on the feature's relevancy score: a score of -1 indicates that a news item in the category “sports” is utterly inappropriate to be examined with the supply chain disruption keyword “offshore oil rig mishaps”, whereas a score of +1 suggests that it is perfectly appropriate. Table 4 shows examples of the relevancy scores of news categories against the supply chain disruption keywords present in our database.
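
Operationally, this scoring scheme reduces to thresholding the relevancy score of each (news category, SCD keyword) pair; per the procedure described here, a pair counts as appropriate once its score reaches +0.65. A minimal sketch, with invented example scores (not taken from Table 4):

```python
def appropriate_pairs(scores, threshold=0.65):
    # scores maps (news_category, scd_keyword) -> relevancy in [-1, +1];
    # a pair is kept once its score reaches the +0.65 cut-off
    return {pair for pair, s in scores.items() if s >= threshold}

# invented example scores, for illustration only
example = {
    ("business", "floods"): 0.82,
    ("sports", "offshore oil rig mishaps"): -1.0,
    ("world", "wildfires"): 0.65,
}
```

The threshold is a tunable trade-off: raising it keeps only strongly related pairs and shrinks the training signal, lowering it admits noisier pairs.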

The WordNet ontology (Leão et al., 2019) was utilized here to calculate the semantic differences between the multiple keywords. Semantic similarity analysis determines the degree of semantic similarity between texts. WordNet is among the most widely used linguistic sources for such text semantic similarity metrics in NLP, natural language understanding (NLU), and information retrieval because of its wide vocabulary and explicit, definite semantic hierarchy (Zhu et al., 2018, 2020; Gao et al., 2015).
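
WordNet-style similarity is typically computed from path distance in the hypernym hierarchy. The toy sketch below uses a small hand-built, hypothetical hypernym table rather than the real WordNet database, and a simple inverse-path-length score, purely to show the idea:

```python
def ancestors(word, hypernyms):
    # chain of hypernyms from the word up to the root of the toy taxonomy
    chain = [word]
    while word in hypernyms:
        word = hypernyms[word]
        chain.append(word)
    return chain

def path_similarity(a, b, hypernyms):
    # 1 / (1 + path length through the lowest common ancestor);
    # mirrors the shape of WordNet path similarity, not its exact values
    chain_a, chain_b = ancestors(a, hypernyms), ancestors(b, hypernyms)
    for dist_a, node in enumerate(chain_a):
        if node in chain_b:
            return 1.0 / (1 + dist_a + chain_b.index(node))
    return 0.0  # no common ancestor

# toy, hand-built hypernym table (illustrative only)
TOY_HYPERNYMS = {
    "wildfire": "disaster",
    "flood": "disaster",
    "disaster": "event",
    "election": "event",
}
```

Sibling terms like "wildfire" and "flood" score higher than terms related only through a distant common ancestor, which is exactly the property the relevancy scoring relies on.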

4.1.4 Determine the ML task and position the dataset

Classifier Training: The problem of SCD appropriateness to a news article was formulated as a binary classification with two possible outcomes, “fake” or “real”. The training set was created from randomly selected news articles from the dataset. To label the training dataset, a supervised learning task was performed with human assistance, teaching the machine whether a news category such as “world” is appropriate for supply chain disruption keywords such as “Wildfires”, “Political Crises”, and “Mass migration”. Each keyword pair of news category and SCD was considered appropriate if its relevancy score reached +0.65. We trained the SCD keyword appropriateness on the selected articles in the training set to predict appropriateness scores in the test set. We trained numerous frequently used classifiers; however, because the Support Vector Machine (SVM) produced the best results, we only report the findings of the SVM classifier in this study.

4.1.5 Machine learning technique and interpreting results

Support Vector Machine: We developed a model for each news item by training a binary classifier with two possible outcomes, positive (real) and negative (fake) news, with respect to the appropriateness of SCDs. We chose binary classification because a responsible user is assumed to want to verify news before spreading it: to determine whether the news is real or fake, the user would most likely check online sources (databases/publishers) and, in the process, may verify or refute the information depending on the source. Accordingly, if the model is trained using a supervised learning technique with only two possible outcomes, it is forced to make a binary decision, which increases its accuracy significantly. We therefore used the Support Vector Machine (SVM) (Cortes & Vapnik, 1995) classifier to train the model. SVM is a binary classification model that divides the training samples into two classes using support vector hyperplanes in a vector space (Melki et al., 2017; Tharwat, 2019). Supervised learning approaches such as SVM are widely used ML methods that use training examples or datasets to build models for classification and regression problems (Melki et al., 2018a, b).
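
The paper relies on an off-the-shelf SVM implementation; as a self-contained illustration of what such a linear classifier optimizes, here is a tiny hinge-loss sub-gradient trainer (a Pegasos-style sketch under our own assumptions, not the authors' setup):

```python
def train_linear_svm(X, y, epochs=200, lam=0.01, lr=0.1):
    """Sub-gradient descent on the regularized hinge loss.

    X: list of feature vectors; y: labels in {-1, +1} (fake/real).
    Returns the weights and bias of a separating hyperplane.
    """
    dim = len(X[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge loss is active
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # correct side: only the regularizer shrinks w
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    # sign of the decision function: +1 = real, -1 = fake
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

The margin-based update is what makes SVMs maximize the separation between the two classes rather than merely fitting the labels.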

The metrics we utilized for evaluation were Mean Reciprocal Rank (MRR) (Ghanbari & Shakery, 2019), Precision at 5 (P@5) (Sharma et al., 2020), and Normalized Discounted Cumulative Gain at 5 (NDCG@5) (Alqahtani et al., 2020). A threshold was used for the 5 best matches, and the top match was given as the output. These evaluation measures account for the testing accuracy of the constructed model. The comparison between the training set and test set values for MRR, P@5, and NDCG@5 showed only slight differences, validating the accuracy of the model on the test set (0.647, 0.656, and 0.511, respectively).
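
For completeness, the three reported metrics can be computed as follows; this is a generic, textbook implementation over binary relevance flags in rank order, not the authors' evaluation script:

```python
import math

def mrr(ranked_runs):
    # ranked_runs: list of runs, each a list of 0/1 relevance flags in rank order
    total = 0.0
    for flags in ranked_runs:
        for rank, rel in enumerate(flags, start=1):
            if rel:
                total += 1.0 / rank  # reciprocal rank of the first hit
                break
    return total / len(ranked_runs)

def precision_at_k(flags, k=5):
    # fraction of relevant items among the top k
    return sum(flags[:k]) / k

def ndcg_at_k(flags, k=5):
    # discounted gain of the ranking, normalized by the ideal ordering
    dcg = sum(rel / math.log2(rank + 1)
              for rank, rel in enumerate(flags[:k], start=1))
    ideal = sorted(flags, reverse=True)
    idcg = sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(ideal[:k], start=1))
    return dcg / idcg if idcg else 0.0
```

MRR rewards ranking a relevant item first, P@5 counts hits in the top five, and NDCG@5 additionally discounts hits that appear lower in the list.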

4.2 Interview-based validation and proposed model

The impact of fake news on supply chain operations emerged as the first theme from the analysis. When asked about their knowledge and understanding of FNaD, the respondents gave detailed replies. Some argued that it is one of the most harmful aspects of the internet, with the potential to create SCDs. Respondent 4 said that “Internet information makes us more attentive.” Respondents 1, 5, 9, 11, and 13 shared the negative impacts of FNaD on quick, routine, and time-consuming decision-making: “Quick decisions based on the inclusion of the FNaD could be a transcendent disaster for any firm’s supply chain” (Respondent 5); “if FNaD is included in routine or prolonged decision making, it definitely agonizes the future result” (Respondent 2); and “If any decision is based on lies, how can someone expect positive consequences?” (Respondent 11). Two respondents shared that the purpose of spreading FNaD is to create a specific mindset and narrative in the economy/market in order to manipulate it: “Misleading information supposed to build a specific narrative and sentiment in the market, enemies and indirect competitors usually involve in it” (Respondent 1). The respondents indicated that fake news directly affects operations and indirectly influences supply chain operations, contributing to SCDs.

Dealing with FNaD emerged as the second theme of the analysis. The respondents provided numerous options in this regard, suggesting that global communities, social media sites, government, technology, and top management can play key roles in countering FNaD. FNaD should be dealt with smoothly and promptly; otherwise, it will negatively affect supply chain performance: “As a business community, together we should deal with misleading information and news. Otherwise it could become a scar in business performance” (Respondent 16). Respondents 12, 15, and 16 mentioned that FNaD should be dealt with as a pandemic like the COVID-19 one. Some respondents named the entities they considered primarily responsible for keeping FNaD under control, with Respondents 8, 2, and 11 pointing at the global internet community, social media sites, and the government. On the other hand, Respondents 1, 3, 9, 10, 12, and 13 believed that the responsibility of curbing the effects of FNaD on supply chain performance and decision-making should fall on specific industries and businesses. Respondent 13 further explained that “as a business entity, we need to find a mechanism which guides us that specific news or information is legit or not”, while Respondent 6 opined that “in today’s world, if your business isn’t data-driven, then you are definitely living in the jungle.”

Suggestions and preparations for an FNaD filtering and counter-modeling process emerged as the third theme of the analysis. This enriched session contributed many insights into, and inputs about, countermeasures to FNaD in business and supply chain operations. When we asked Respondent 7 about this, he shared the following Bill Gates quote, “The world won’t care about your self-esteem. The world will expect you to accomplish something before you feel good about yourself”, and further added, “As a business caretaker, this is my responsibility is to shelter and protect my company from fake news, so that, at the end of the day, I will have no regrets.” Respondents 3, 5, 7, 10, 14, 15, and 16 suggested that AI will provide solutions suited to control and counter FNaD. Respondent 10 advised that “Data crawler integration with AI could provide a solution to FNaD”. Respondent 11 shared a similar thought: “Each government should prepare AI-based processes according to specific society and economy to rectify the impact of fake news, and that process in the form of software should be provided free of cost to businesses”. The participants also highlighted the importance of using multiple sources to determine whether the news is fake or real, as a single source could be biased or politically driven. Based on the procedure applied for AI, the SVM, and the interview-based validation, we proposed the FNaD detection model shown in Fig. 1, which encapsulates the key findings.

figure 1

A fake news and disinformation detection model that uses AI and ML

As depicted in Fig. 1, practical decision-making for SCDs is characterized by the predominant use of experiences, judgments, and multiple media resources, which can carry both real and fake news. Our data demonstrate that the severity of fake news impacts is prompting businesses to invest in more robust, collaborative, and networked supply chains, and that firms should prepare AI-based processes tailored to specific societies and economies to rectify those impacts. Datasets from multiple sources help decision-makers determine whether a particular piece of news or information is legitimate. Data from multiple sources also allow decision-makers to apply machine learning approaches and use artificial intelligence, and therefore to better select the appropriate mechanisms to distinguish fake from real news.

5 Contributions, implications, conclusion, and future research directions

5.1 Contributions and theoretical implications

Our study fills the knowledge gap on SCDs by utilizing AI and ML to act against FNaD affecting supply chain operations. Loureiro et al. (2020) suggested that AI has diverse applications across several industrial domains. Dolgui & Ivanov (2021) hinted that AI could assist in improving resilience against, and mitigation of, SCDs. We combined a qualitative case method, AI, and SVM in order to reveal how effective decisions could be made within supply chain operations. Extant research has advanced our understanding of fake news detection mechanisms using graph and summarization techniques (Kim & Ko, 2021). Furthermore, a recent study proposed an AI-based real-time fake news detection system by conducting a systematic literature review (Gupta et al., 2021). Our study is novel and distinct from previous ones in that it developed an effective decision-making model for supply chain firms to avoid disruptions caused by FNaD. As such, it contributes to the SCD literature in ways that will interest scholars and practitioners.

Additionally, the study bridges a gap in the literature by providing a practical solution suited to eliminate FNaD in business scenarios affected by SCDs. The scattered and fragmented extant literature had left many questions about FNaD unanswered (Di Domenico & Visentin, 2020 ). Therefore, the main contribution of our study is to propose an AI- and ML-oriented process capable of flagging/reducing/eliminating FNaD before it reaches decision-makers and of identifying any authentic news and information, thus counteracting SCD-aimed news.

The United Nations (2020) has urged the implementation of actions against misinformation and cybercrime. Edwards et al. (2021) concluded that such ‘digital wildfire’ spreads faster than original, legitimate news. We propose an AI-integrated process, named FNaD, that initiates when news or information is fed into it. It first verifies the item against defined sources (e.g., major newspapers’ websites) and then seeks similarities between news or information keywords. Once the AI process reaches a decision, it provides an output by classifying the news item as FNaD (rejection) or as real/authentic news or information (acceptance).
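
The verification flow described here can be summarized in a few lines. In this sketch, `verify_source` and `keyword_similarity` are hypothetical stand-ins for the source check and keyword-matching steps; the 0.65 cut-off mirrors the relevancy threshold used in the methodology.

```python
def fnad_process(item, verify_source, keyword_similarity, threshold=0.65):
    """Hypothetical sketch of the proposed FNaD verification flow."""
    if not verify_source(item):               # step 1: check defined sources
        return "FNaD (rejected)"
    if keyword_similarity(item) < threshold:  # step 2: keyword similarity
        return "FNaD (rejected)"
    return "authentic (accepted)"             # step 3: classification output
```

Ordering the cheap source check before the similarity computation means obviously unverifiable items are rejected without further processing.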

FNaD can be a significant determinant of SCDs, as highlighted in research (Kovács & Sigala, 2021). It adversely influences firms’ operations, imports, and exports, and alters purchasing behaviors (e.g., Di Domenico et al., 2021; Petit et al., 2019; Wang et al., 2021). FNaD creates unnatural phenomena that interrupt supply chain operations and widen demand-supply loopholes (e.g., De Chenecey, 2018; Dwivedi et al., 2020). Our study contributes to the management and detection of FNaD in firms’ supply chain operations by proposing and testing an FNaD detection model that uses AI and ML. The proposed model shows the ability to control the inclusion of FNaD in firms’ activities and could help contain a potential digital wildfire before it damages firms’ operations.

5.2 Managerial and policy implications

Our model detects FNaD early, before it can affect firms or managerial decision-making. The current pandemic scenario has turned the attention of managers and governments toward FNaD and its impacts on supply chain operations, the economy, and society. With AI and ML becoming an integral part of firms and operations, managers should consider adopting them to deal with FNaD, given their potential to detect and filter it out. Our model is executed and managed based on major local databases and news outlets to support supply chain operations; should managers wish to add further international data and news outlets, they could do so based on their requirements. Implementation of our model depends on a suitable and reliable IT infrastructure, with even small and medium enterprises being able to invest in its application. The proposed process protects firms from the impacts of FNaD, enabling managers to engage in decision-making based on legitimate and valid news and information.

From the perspective of specific industries, newsrooms could utilize the FNaD detection model to confirm a news item from different sources. In other words, the FNaD detection model can help in the timely development of a counter-strategy by detecting any fake news before it spreads and causes SCDs. The phenomenon has recently been seen in the context of the COVID-19 pandemic, with people sharing unverified news items on the virus and the side effects of vaccines over social media, thus causing SCDs in vaccine distribution. Moreover, pre-emptive fake news detection can be equally beneficial in avoiding financial market crashes. For government policymakers, the FNaD detection model can be a comprehensive tool to be used during pandemics or similar situations. Governments have been seen to regularly change their decisions, rules, and regulations. Therefore, at the government level, the FNaD detection model can ensure that accurate and on-time legitimate information is received to deal with any economic, social, and health conditions. Another implication for governments pertains to the provision of this process—for free or at a discount—to all business-related entities, especially micro, small, and medium firms. Such a decision would create trust between the government and those entities.

5.3 Conclusion and future research directions

SCDs are problematic for business operations, and disinformation can create or aggravate them. We therefore proposed the FNaD model, which filters FNaD by utilizing AI and ML. The model draws on different internet sources to verify received information and then decides and notifies whether the received news is authentic. Using a mixed-methods approach, we proposed a way to tackle SCD-creating FNaD with AI- and ML-based techniques. In this regard, future research could, first, focus on more specific FNaD and supply chain operation case studies, such as the detection of FNaD in humanitarian operations using AI and ML approaches. Additionally, it could integrate specific operational performance measures into these approaches, combining them with advanced visual methods. Also, given the fast pace of scientific development, any new and effective algorithm or technique could be incorporated into the proposed model in the future. Further, testing the model in longitudinal studies aimed at exploring and understanding the developments in SCDs linked with FNaD would make it more reliable and refined.

Adam, D. (2006). Royal Society tells Exxon: stop funding climate change denial.The Guardian, https://www.theguardian.com/environment/2006/sep/20/oilandpetrol.business

Ahmad, A., Webb, J., Desouza, K. C., & Boorman, J. (2019). Strategically-motivated advanced persistent threat: Definition, process, tactics and a disinformation model of counterattack. Computers & Security , 86 , 402–418

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives , 31 (2), 211–236

Alnaied, A., Elbendak, M., & Bulbul, A. (2020). An intelligent use of stemmer and morphology analysis for Arabic information retrieval. Egyptian Informatics Journal , 21 (4), 209–217

Alqahtani, A., Alnefaie, M., Alamri, N., & Khorsi, A. (2020). Enhancing the capabilities of solr information retrieval system: Arabic language. In 2020 3rd International Conference on Computer Applications & Information Security (ICCAIS ) (pp.1–5). IEEE

Awan, U., Kanwal, N., Alawi, S., Huiskonen, J., & Dahanayake, A. (2021). Artificial intelligence for supply chain success in the era of data analytics. Studies in Computational Intelligence , 935 , 3–21

Atkin, D., Chaudhry, A., Chaudry, S., Khandelwal, A. K., & Verhoogen, E. (2017). Organizational barriers to technology adoption: Evidence from soccer-ball producers in Pakistan. The Quarterly Journal of Economics , 132 (3), 1101–1164

Bag, S., Gupta, S., Kumar, A., & Sivarajah, U. (2021). An integrated artificial intelligence framework for knowledge creation and B2B marketing rational decision making for improving firm performance. Industrial Marketing Management , 92 , 178–189

Barns, S. (2017). Trolls show people how to create fake news stories and spread them on Facebook… as curry houses fall victim to false ‘human meat’ claims. The Scottish Sun. https://www.thescottishsun.co.uk/living/1077871/trolls-show-people-how-to-create-fake-news-stories-and-spread-them-on-facebook-as-curry-houses-fall-victim-to-false-human-meat-claims/

Behl, A., Dutta, P., Luo, Z., & Sheorey, P. (2021). Enabling artificial intelligence on a donation-based crowdfunding platform: a theoretical approach. Annals of Operations Research, 1–29

Belhadi, A., Mani, V., Kamble, S. S., Khan, S. A. R., & Verma, S. (2021). Artificial intelligence-driven innovation for enhancing supply chain resilience and performance under the effect of supply chain dynamism: an empirical investigation . Annals of Operations Research

Bero, L. A. (2005). Tobacco industry manipulation of research. Public Health Reports , 120 (2), 200–208

Bode, C., Wagner, S. M., Petersen, K. J., & Ellram, L. M. (2011). Understanding responses to supply chain disruptions: Insights from information processing and resource dependence perspectives. Academy of Management Journal , 54 (4), 833–856

Booth, A., Hamilton, B., & Vintiadis, M. (2019). Fake news, real problems: combating social media disinformation. Global Fraud and Risk Report 2019/20, 11th annual edition. https://www.kroll.com/-/media/kroll/pdfs/publications/global-fraud-and-risk-report-2019-20.pdf

Brock, J. K. U., & von Wangenheim, F. (2019). Demystifying AI: What digital transformation leaders can teach you about realistic artificial intelligence. California Management Review , 61 (4), 110–134

Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers’ attitudes and behavioural intentions towards using artificial intelligence for organizational decision-making. Technovation , 106 , 102312

Chen, J., Lim, C. P., Tan, K. H., Govindan, K., & Kumar, A. (2021). Artificial intelligence-based human-centric decision support framework: an application to predictive maintenance in asset management under pandemic environments. Annals of Operations Research, 1–24

Churchill, F. (2018). Unilever says fake news makes digital supply chain unsustainable. https://www.cips.org/supply-management/news/2018/february/unilever-says-fake-news-makes-digital-supply-chain-unsustainable/ . Accessed 29 November 2021

Clarke, J., Chen, H., Du, D., & Hu, Y. J. (2020). Fake news, investor attention, and market reaction. Information Systems Research , 32 (1), 35–52

Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning , 20 (3), 273–297

Creswell, J. W. (2014). Qualitative, quantitative and mixed methods approaches . Thousand Oaks, CA: Sage

Cui, L., Wu, H., Wu, L., Kumar, A., & Tan, K. H. (2022). Investigating the relationship between digital technologies, supply chain integration and firm resilience in the context of COVID-19. Annals of Operations Research, 1–29

De Chenecey, S. P. (2018). The post-truth business: How to rebuild brand authenticity in a distrusting world . Kogan Page Publishers

Dearlove, J. V., Bialous, S. A., & Glantz, S. A. (2002). Tobacco industry manipulation of the hospitality industry to maintain smoking in public places. Tobacco Control , 11 (2), 94–104

Delcker, J. (2018). The man who invented the self-driving car (in 1986). https://www.politico.eu/article/delf-driving-car-born-1986-ernst-dickmanns-mercedes/ . Accessed 26 November 2021

Deng, Z. H., Tang, S. W., Yang, D. Q., Li, M. Z. L. Y., & Xie, K. Q. (2004). A comparative study on feature weight in text categorization. In Asia-Pacific Web Conference (pp.588–597). Springer, Berlin, Heidelberg

Denzin, N. K., & Lincoln, Y. S. (2005). The SAGE Handbook of qualitative research . Thousand Oaks, CA: Sage

Di Domenico, G., & Visentin, M. (2020). Fake news or true lies? Reflections about problematic contents in marketing. International Journal of Market Research , 62 (4), 409–417

Di Domenico, G., Sit, J., Ishizaka, A., & Nunan, D. (2021). Fake news, social media and marketing: A systematic review. Journal of Business Research , 124 , 329–341

Dogan, T., & Uysal, A. K. (2019). On term frequency factor in supervised term weighting schemes for text classification. Arabian Journal for Science and Engineering , 44 (11), 9545–9560

Dolgui, A., & Ivanov, D. (2021). Ripple effect and supply chain disruption management: new trends and research directions. International Journal of Production Research , 59 (1), 102–109

Dubey, R., Bryde, D. J., Blome, C., Roubaud, D., & Giannakis, M. (2021). Facilitating artificial intelligence-powered supply chain analytics through alliance management during the pandemic crises in the B2B context. Industrial Marketing Management , 96 , 135–146

Dwivedi, Y. K., Kelly, G., Janssen, M., Rana, N. P., Slade, E. L., & Clement, M. (2018). Social media: The good, the bad, and the ugly. Information Systems Frontiers , 20 (3), 419–423

Dwivedi, Y. K., Ismagilova, E., Hughes, D. L., Carlson, J., Filieri, R., Jacobson, J., Jain, V., Karjaluoto, H., Kefi, H., Krishen, A. S., Kumar, V., Rahman, M. M., Raman, R., Rauschnabel, P. A., Rowley, J., Salo, J., Tran, G. A., & Wang, Y. (2020). Setting the future of digital and social media marketing research: Perspectives and research propositions. International Journal of Information Management, 102168

Edwards, A., Webb, H., Housley, W., Beneito-Montagut, R., Procter, R., & Jirotka, M. (2021). Forecasting the governance of harmful social media communications: Findings from the digital wildfire policy Delphi. Policing and Society , 31 (1), 1–19

Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review , 14 (4), 532–550

Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal , 50 (1), 25–32

Endsley, M. R. (2018). Combating information attacks in the age of the Internet: new challenges for cognitive engineering. Human factors , 60 (8), 1081–1094

EverStream (2020). COVID-19: The future of supply chain. Retrieved from https://www.everstream.ai/risk-center/special-reports/covid-19-the-future-of-supply-chain/

Farrokhi, A., Shirazi, F., Hajli, N., & Tajvidi, M. (2020). Using artificial intelligence to detect crisis related to events: Decision making in B2B by artificial intelligence. Industrial Marketing Management , 91 , 257–273

Frost, D. M., Hammack, P. L., Wilson, B. D., Russell, S. T., Lightfoot, M., & Meyer, I. H. (2020). The qualitative interview in psychology and the study of social change: sexual identity development, minority stress, and health in the generations study. Qualitative Psychology , 7 (3), 245–266

Gadri, S., & Moussaoui, A. (2015, May). Information retrieval: A new multilingual stemmer based on a statistical approach. In 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT) (pp.1–6). IEEE

Gao, J. B., Zhang, B. W., & Chen, X. H. (2015). A WordNet-based semantic similarity measurement combining edge-counting and information content theory. Engineering Applications of Artificial Intelligence , 39 , 80–88

Ghanbari, E., & Shakery, A. (2019). ERR.Rank: An algorithm based on learning to rank for direct optimization of Expected Reciprocal Rank. Applied Intelligence, 49(3), 1185–1199

Ghazali, E. M., Mutum, D. S., Chong, J. H., & Nguyen, B. (2018). Do consumers want mobile commerce? A closer look at M-shopping and technology adoption in Malaysia. Asia Pacific Journal of Marketing and Logistics , 30 (4), 1064–1086

Grewal, D., Guha, A., Satornino, C. B., & Schweiger, E. B. (2021). Artificial intelligence: The light and the darkness. Journal of Business Research , 136 , 229–236

Grover, P., Kar, A. K., & Dwivedi, Y. K. (2020). Understanding artificial intelligence adoption in operations management: insights from the review of academic literature and social media discussions. Annals of Operations Research

Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances , 5 (1), eaau4586

Gupta, A., Li, H., Farnoush, A., & Jiang, W. (2021). Understanding patterns of COVID infodemic: A systematic and pragmatic approach to curb fake news. Journal of Business Research

Haenlein, M., & Kaplan, A. (2019). A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review , 61 (4), 5–14

Hopp, T., Ferrucci, P., & Vargo, C. J. (2020). Why do people share ideologically extreme, false, and misleading content on social media? A self-report and trace data–based analysis of countermedia content dissemination on Facebook and Twitter. Human Communication Research , 46 (4), 357–384

Ibrishimova, M. D., & Li, K. F. (2019). A machine learning approach to fake news detection using knowledge verification and natural language processing. In International Conference on Intelligent Networking and Collaborative Systems (pp.223–234). Springer, Cham

Ibrishimova, M. D., & Li, K. F. (2018). Automating incident classification using sentiment analysis and machine learning. In International Conference on Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments (pp.50–62). Springer, Cham

Jabbar, A., Akhtar, P., & Dani, S. (2020). Real-time big data processing for instantaneous marketing decisions: A problematization approach. Industrial Marketing Management , 90 , 558–569

Jayawickrama, U., Liu, S., Hudson Smith, M., Akhtar, P., & Bashir, A. M. (2019). Knowledge retention in ERP implementations: the context of UK SMEs. Production Planning & Control, 30(10–12), 1032–1047

Jayawickrama, U., Liu, S., & Smith, M. H. (2016). Empirical evidence of an integrative knowledge competence framework for ERP systems implementation in UK industries. Computers in Industry , 82 , 205–223

Jiang, T., Li, J. P., Haq, A. U., Saboor, A., & Ali, A. (2021). A novel stacking approach for accurate detection of fake news. IEEE Access, 9, 22626–22639

Joshi, A., Thomas, N., & Dabhade, M. (2016). Modified porter stemming algorithm. International Journal of Computer Science and Information Technologies , 7 (1), 266–269

Kampakis, S., & Adamides, A. (2014). Using Twitter to predict football outcomes. arXiv preprint arXiv:1411.1243

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons , 62 (1), 15–25

Kareem, I., & Awan, S. M. (2019). Pakistani Media Fake News Classification using Machine Learning Classifiers. In 2019 International Conference on Innovative Computing (ICIC) (pp.1–6). IEEE

Katsaros, D., Stavropoulos, G., & Papakostas, D. (2019). Which machine learning paradigm for fake news detection?. In 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI) (pp.383–387). IEEE

Konstantakis, K. N., Cheilas, P. T., Melissaropoulos, I. G., Xidonas, P., & Michaelides, P. G. (2022). Supply chains and fake news: a novel input–output neural network approach for the US food sector. Annals of Operations Research, 1–16

Kim, A., & Dennis, A. R. (2019). Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly: Management Information Systems , 43 (3), 1025–1039

Kim, G., & Ko, Y. (2021). Effective fake news detection using graph and summarization techniques. Pattern Recognition Letters , 151 , 135–139

Kovács, G., & Sigala, I. F. (2021). Lessons learned from humanitarian logistics to manage supply chain disruptions. Journal of Supply Chain Management , 57 (1), 41–49

Kumar, V., Rajan, B., Venkatesan, R., & Lecinski, J. (2019). Understanding the role of artificial intelligence in personalized engagement marketing. California Management Review , 61 (4), 135–155

Leão, F., Revoredo, K., & Baião, F. (2019). Extending WordNet with UFO foundational ontology. Journal of Web Semantics , 57 , 100499

Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest , 13 (3), 106–131

Li, L., Zhang, Q., Wang, X., Zhang, J., Wang, T., Gao, T., Duan, W., Tsoi, K. K., & Wang, F. (2020). Characterizing the propagation of situational information in social media during COVID-19 epidemic: A case study on Weibo. IEEE Transactions on Computational Social Systems , 7 (2), 556–562

Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2020). Artificial intelligence in business: State of the art and future research agenda. Journal of Business Research , 129 , 911–926

Mahdavi, S. A., Kolahi, A. A., Akhgari, M., Gheshlaghi, F., Gholami, N., Moshiri, M., Mohtasham, N., Ebrahimi, S., Ziaeefar, P., McDonald, R., Tas, B., Kazemifar, A. M., Amirabadizadeh, A., Ghadirzadeh, M., Jamshidi, F., Dadpour, B., Mirtorabi, S. D., Farnaghi, F., Zamani, N., & Hassanian-Moghaddam, H. (2021). COVID-19 pandemic and methanol poisoning outbreak in Iranian children and adolescents: A data linkage study. Alcoholism: Clinical and Experimental Research , 45 (9), 1853–1863

Matheus, R., Janssen, M., & Maheshwari, D. (2020). Data science empowering the public: Data-driven dashboards for transparent and accountable decision-making in smart cities. Government Information Quarterly , 37 (3), 101284

McCallum, S. (2017). Restaurant hit by ‘human meat’ fake news claims. BBC. https://www.bbc.com/news/newsbeat-39966215

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, august 31, 1955. AI Magazine , 27 (4), 12

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics , 5 (4), 115–133

Mehrotra, M., & Schmidt, W. (2021). The value of supply chain disruption duration information. Production and Operations Management , 30 (9), 3015–3035

Melki, G., Cano, A., & Ventura, S. (2018a). MIRSVM: multi-instance support vector machine with bag representatives. Pattern Recognition , 79 , 228–241

Melki, G., Kecman, V., Ventura, S., & Cano, A. (2018b). OLLAWV: online learning algorithm using worst-violators. Applied Soft Computing , 66 , 384–393

Melki, G., Cano, A., Kecman, V., & Ventura, S. (2017). Multi-target support vector regression via correlation regressor chains. Information Sciences , 415 , 53–69

Merriam, S. B. (2009). Qualitative research: A guide to design and implementation . San Francisco, CA: Jossey-Bass

Mikalef, P., Conboy, K., & Krogstie, J. (2021). Artificial intelligence as an enabler of B2B marketing: A dynamic capabilities micro-foundations approach. Industrial Marketing Management , 98 , 80–92

Milner, P. (2003). A brief history of the Hebbian learning rule. Canadian Psychology , 44 (1), 5–9

National Crime Agency (2018). UK national cyber security centre, the cyber threat to UK business, 2017–2018 Report, April 10, 2018. Unclassified, National Security Archive. https://nsarchive.gwu.edu/media/17676/ocr

Ni, D., Xiao, Z., & Lim, M. K. (2020). A systematic review of the research trends of machine learning in supply chain management. International Journal of Machine Learning and Cybernetics , 11 (7), 1463–1482

Niessner, M. (2018). Does fake news sway financial markets? Yale Insights. https://insights.som.yale.edu/insights/does-fake-news-sway-financial-markets

Oxford English Dictionary (2020a). Oxford, UK: Oxford University Press. https://www.oxfordlearnersdictionaries.com/definition/english/fake-news

Oxford English Dictionary (2020b). Oxford, UK: Oxford University Press. https://www.oxfordlearnersdictionaries.com/definition/english/disinformation

Parsons, D. D. (2020). The impact of fake news on company value: evidence from tesla and galena biopharma. Chancellor’s Honors Program Projects. https://trace.tennessee.edu/utk_chanhonoproj/2328

Paschen, J., Kietzmann, J., & Kietzmann, T. C. (2019). Artificial intelligence (AI) and its implications for market knowledge in B2B marketing. Journal of Business and Industrial Marketing , 34 (7), 1410–1419

Petratos, P. N. (2021). Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons , 64 (6), 763–774

Petit, T. J., Croxton, K. L., & Fiksel, J. (2019). The evolution of resilience in supply chain management: A retrospective on ensuring supply chain resilience. Journal of Business Logistics , 40 (1), 56–65

Poddar, K., & Umadevi, K. S. (2019). Comparison of various machine learning models for accurate detection of fake news. In 2019 Innovations in Power and Advanced Computing Technologies (i-PACT) (Vol. 1, pp. 1–5). IEEE

Polit, D. F., & Beck, C. T. (2012). Gender bias undermines evidence on gender and health. Qualitative Health Research , 22 (9), 1298

Ponterotto, J. G. (2005). Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology , 52 (2), 126–136

Pournader, M., Ghaderi, H., Hassanzadegan, A., & Fahimnia, B. (2021). Artificial intelligence applications in supply chain management. International Journal of Production Economics , 241 , 108250

Preil, D., & Krapp, M. (2021). Artificial intelligence-based inventory management: a Monte Carlo tree search approach. Annals of Operations Research, 1–25

Rahi, S., Ghani, M. A., & Ngah, A. H. (2019). Integration of unified theory of acceptance and use of technology in internet banking adoption setting: Evidence from Pakistan. Technology in Society , 58 , 101120

Raisch, S., & Krakowski, S. (2020). Artificial Intelligence and Management: The Automation-Augmentation Paradox. Academy of Management Review , 46 (1), 1–48

Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review , 59 (1). Retrieved from https://www.proquest.com/docview/1950374030?pq-origsite=gscholar&fromopenview=true

Reisach, U. (2021). The responsibility of social media in times of societal and political manipulation. European Journal of Operational Research , 291 (3), 906–917

Resilinc (2021). Supply chain disruptions up 67% in 2020 with factory fires taking top spot for second year in a row. Retrieved from https://www.resilinc.com/press-release/supply-chain-disruptions-up-67-in-2020-with-factory-fires-taking-top-spot-for-second-year-in-a-row/

Reuters (2022). Fact check-Food processing plant fires in 2022 are not part of a conspiracy to trigger U.S. food shortages. Reuters . Retrieved from https://www.reuters.com/article/factcheck-processing-fire-idUSL2N2WW2CY

Riahi, Y., Saikouk, T., Gunasekaran, A., & Badraoui, I. (2021). Artificial intelligence applications in the supply chain: A descriptive bibliometric analysis and future research directions. Expert Systems with Applications , 173 , 114702

Roozenbeek, J., & Van Der Linden, S. (2019). The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research , 22 (5), 570–580

Roscoe, R. D., Grebitus, C., O’Brian, J., Johnson, A. C., & Kula, I. (2016). Online information search and decision making: Effects of web search stance. Computers in Human Behavior , 56 , 103–118

Sabeeh, V., Zohdy, M., & Al Bashaireh, R. (2019). Enhancing the Fake News Detection by Applying Effective Feature Selection Based on Semantic Sources. In 2019 International Conference on Computational Science and Computational Intelligence (CSCI) (pp.1365–1370). IEEE

Sharma, M., Luthra, S., Joshi, S., & Kumar, A. (2021). Implementing challenges of artificial intelligence: Evidence from the public manufacturing sector of an emerging economy. Government Information Quarterly, 101624

Sharma, V. K., Mittal, N., & Vidyarthi, A. (2020). Context-based translation for the out of vocabulary words applied to Hindi-English cross-lingual information retrieval. IETE Technical Review, 1–10

Sheng, J., Amankwah-Amoah, J., Khan, Z., & Wang, X. (2020). COVID-19 Pandemic in the New Era of Big Data Analytics: Methodological Innovations and Future Research Directions. British Journal of Management , 32 (4), 1164–1183

Shrestha, Y. R., Ben-Menahem, S. M., & von Krogh, G. (2019). Organizational Decision-Making Structures in the age of artificial intelligence. California Management Review , 61 (4), 66–83

Siew, E. G., Rosli, K., & Yeow, P. H. (2020). Organizational and environmental influences in the adoption of computer-assisted audit tools and techniques (CAATTs) by audit firms in Malaysia. International Journal of Accounting Information Systems , 36 , 100445

Sodhi, M., & Tang, C. (2021). Supply chain management for extreme conditions: Research opportunities. Journal of Supply Chain Management , 57 (1), 7–16

Sohrabpour, V., Oghazi, P., Toorajipour, R., & Nazarpour, A. (2021). Export sales forecasting using artificial intelligence. Technological Forecasting and Social Change , 163 , 120480

Swanson, E. B., & Wang, P. (2005). Knowing why and how to innovate with packaged business software. Journal of Information Technology , 20 (1), 20–31

Swink, M., & Schoenherr, T. (2015). The effects of cross-functional integration on profitability, process efficiency, and asset productivity. Journal of Business Logistics , 36 (1), 69–87

Talamo, A., Marocco, S., & Tricol, C. (2021). “The Flow in the funnel”: Modeling organizational and individual decision-making for designing financial AI-based systems. Frontiers in Psychology , 12 , 697101

Tandoc, E. C. Jr., Lim, Z. W., & Ling, R. (2018). Defining “fake news” A typology of scholarly definitions. Digital Journalism , 6 (2), 137–153

Teddlie, C., & Yu, F. (2007). Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research , 1 (1), 77–100

The News (2020). Growing demand drives herb prices up, The News . https://www.thenews.com.pk/print/669097-growing-demand-drives-herb-prices-up

Tharwat, A. (2019). Parameter investigation of support vector machine classifier with kernel functions. Knowledge and Information Systems , 61 (3), 1269–1302

Tong, C., Gill, H., Li, J., Valenzuela, S., & Rojas, H. (2020). “Fake news is anything they say!”—Conceptualization and weaponization of fake news among the American public. Mass Communication and Society, 23(5), 755–778

Toorajipour, R., Sohrabpour, V., Nazarpour, A., Oghazi, P., & Fischl, M. (2021). Artificial intelligence in supply chain management: A systematic literature review. Journal of Business Research , 122 , 502–517

United Nations (2020). UN tackles ‘infodemic’ of misinformation and cybercrime in COVID-19 crisis. Retrieved from https://www.un.org/en/un-coronavirus-communications-team/un-tackling-%E2%80%98infodemic%E2%80%99-misinformation-and-cybercrime-covid-19

Vincent, V. U. (2021). Integrating intuition and artificial intelligence in organizational decision-making. Business Horizons , 64 (4), 425–438

Vos, A. D., Strydom, H., Fouche, C. B., & Delport, C. S. L. (2005). Research at grassroots. For the social sciences and human service professions . Pretoria: Van Schaik Publishers

Wamba, S. F., Dubey, R., Gunasekaran, A., & Akter, S. (2020). The performance effects of big data analytics and supply chain ambidexterity: The moderating effect of environmental dynamism. International Journal of Production Economics , 222 , 107498

Wamba-Taguimdje, S. L., Fosso Wamba, S., Kala Kamdjoug, J. R., & Tchatchouang Wanko, C. E. (2020). Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects. Business Process Management Journal, 26(7), 1893–1924

Wang, X., Reger, R. K., & Pfarrer, M. D. (2021). Faster, hotter, and more linked in: managing social disapproval in the social media era. Academy of Management Review , 46 (2), 275–298

Wang, Y., Qian, S., Hu, J., Fang, Q., & Xu, C. (2020). Fake news detection via knowledge-driven multimodal graph convolutional networks. In Proceedings of the 2020 International Conference on Multimedia Retrieval (pp.540–547)

Wardle, C. (2017). Fake news. It’s complicated. https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 29 November 2021

Weizenbaum, J. (1966). ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine. Communications of the ACM , 9 (1), 36–45. https://doi.org/10.1145/357980.357991


Wong, C. W., Lirn, T. C., Yang, C. C., & Shang, K. C. (2020). Supply chain and external conditions under which supply chain resilience pays: An organizational information processing theorization. International Journal of Production Economics , 226 , 107610

Xu, Z., Elomri, A., Kerbache, L., & El Omri, A. (2020). Impacts of COVID-19 on global supply chains: Facts and perspectives. IEEE Engineering Management Review, 48(3), 153–166

Yin, R. K. (2014). Case study research: Design and methods (5th ed.). Thousand Oaks, CA: Sage Publications

Yu, W., Chavez, R., Jacobs, M., Wong, C. Y., & Yuan, C. (2019). Environmental scanning, supply chain integration, responsiveness, and operational performance: an integrative framework from an organizational information processing theory perspective. International Journal of Operations & Production Management , 39 (5), 787–814

Zeba, G., Dabić, M., Čičak, M., Daim, T., & Yalcin, H. (2021). Technology mining: Artificial intelligence in manufacturing . Technological Forecasting and Social Change , 171 , 120971

Zhang, C., Gupta, A., Kauten, C., Deokar, A. V., & Qin, X. (2019). Detecting fake news for reducing misinformation risks using analytics approaches. European Journal of Operational Research , 279 (3), 1036–1052

Zhang, C., & Lu, Y. (2021). Study on artificial intelligence: The state of the art and future prospects. Journal of Industrial Information Integration , 23 , 100224

Zhang, M., Li, X., Yue, S., & Yang, L. (2020). An empirical study of TextRank for keyword extraction. IEEE Access, 8, 178849–178858

Zhang, M., Macpherson, A., & Jones, O. (2006). Conceptualizing the learning process in SMEs: improving innovation through external orientation. International Small Business Journal , 24 (3), 299–323

Zheng, K., Zhang, Z., Chen, Y., & Wu, J. (2021). Blockchain adoption for information sharing: risk decision-making in spacecraft supply chain. Enterprise Information Systems , 15 (8), 1070–1091

Zhou, X., Jain, A., Phoha, V. V., & Zafarani, R. (2020). Fake news early detection: A theory-driven model. Digital Threats: Research and Practice , 1 (2), 1–25

Zhu, X., Li, F., Chen, H., & Peng, Q. (2018). An efficient path computing model for measuring semantic similarity using edge and density. Knowledge and Information Systems , 55 (1), 79–111

Zhu, X., Yang, X., Huang, Y., Guo, Q., & Zhang, B. (2020). Measuring similarity and relatedness using multiple semantic relations in WordNet. Knowledge and Information Systems , 62 (4), 1539–1569


Author information

Authors and affiliations.

University of Aberdeen Business School, University of Aberdeen, King’s College, AB24 5UA, Aberdeen, UK

Pervaiz Akhtar & Zaheer Khan

Imperial College London, SW7 2BU, London, UK

Pervaiz Akhtar

Faculty of Management and Economics, Universiti Pendidikan Sultan Idris, Tanjong Malim, Malaysia

Arsalan Mujahid Ghouri

Faculty of Art, Computing, and Creative Industry, Universiti Pendidikan Sultan Idris, Tanjong Malim, Malaysia

Haseeb Ur Rehman Khan

Department of Business Administration, Iqra University, Karachi, Pakistan

Mirza Amin ul Haq

Department of Business Administration, Inland School of Business and Social Sciences, Inland Norway University of Applied Sciences, Hamar, Norway

School of Business and Management, Queen Mary University of London, London, UK

Nadia Zahoor

CAS-Key Laboratory of Crust-Mantle Materials and the Environments, School of Earth and Space Sciences, University of Science and Technology of China, 230026, Hefei, PR China

Aniqa Ashraf

Innolab, University of Vaasa, Vaasa, Finland

Zaheer Khan


Corresponding author

Correspondence to Pervaiz Akhtar .

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Akhtar, P., Ghouri, A. M., Khan, H. U. R. et al. Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions. Ann Oper Res 327, 633–657 (2023). https://doi.org/10.1007/s10479-022-05015-5


Accepted: 27 September 2022

Published: 01 November 2022

Issue Date: August 2023

DOI: https://doi.org/10.1007/s10479-022-05015-5


  • Disinformation
  • Misinformation
  • Artificial intelligence
  • Machine learning
  • Supply chain disruptions
  • Effective decision making


Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions


Fake news and disinformation (FNaD) are increasingly being circulated through various online and social networking platforms, causing widespread disruptions and influencing decision-making perceptions. Despite the growing importance of detecting fake news in politics, relatively limited research efforts have been made to develop artificial intelligence (AI) and machine learning (ML) oriented FNaD detection models suited to minimizing supply chain disruptions (SCDs). Using a combination of AI and ML, and case studies based on data collected from Indonesia, Malaysia, and Pakistan, we developed an FNaD detection model aimed at preventing SCDs. This model, based on multiple data sources, has shown evidence of its effectiveness in managerial decision-making. Our study further contributes to the supply chain and AI-ML literature, provides practical insights, and points to future research directions.

Introduction

Increased scholarly focus has been directed to fake news detection, given its widespread impact on supply chain disruptions, as was the case with the COVID-19 vaccine. Fake news and misinformation are highly disruptive, creating uncertainty and disruptions not only in society but also in business operations. Fake news and disinformation-related problems have been exacerbated by the rise of social media sites. In this regard, using artificial intelligence (AI) to counteract the spread of false information is vital in acting against its disruptive effects (Gupta et al., 2021). It has been observed that fake news and disinformation (FNaD) harm supply chains and make their operation unsustainable (Churchill, 2018). According to research, fake news can be classified into the two distinct concepts of misinformation and disinformation (Petratos, 2021). Allcott and Gentzkow (2017) defined fake news as “ news articles that are intentionally and verifiably false, and could mislead readers ” (p. 213). According to Wardle (2017), misinformation refers to “ the inadvertent sharing of false information ”, while disinformation can be defined as “ the deliberate creation and sharing of information known to be false ”. Among the negative consequences that fake news can have for companies are loss of sponsorships, reduced credibility, and loss of reputation, all of which can adversely affect performance (Di Domenico et al., 2021). In such a context, AI is shaping decision-making in an increasing range of sectors and could be used to improve the timely detection and identification of fake news (Gupta et al., 2021). Whereas many new efforts to develop AI-based fake news detection systems have concentrated on the political process, the consequences of FNaD for supply chain operations remain relatively underexplored (Gupta et al., 2021).

Kaplan and Haenlein (2019) defined AI “ as a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation ” (p. 17). Although emerging technologies such as AI may sometimes have negative effects, they can also be utilized to combat disinformation. As scholarship shows increasing interest in how AI can improve operational and supply chain efficiencies (Brock & von Wangenheim, 2019), researchers have recently called for more studies on how organizational strengths and the use of AI influence the outcomes of decision-making structures (Shrestha et al., 2019). Fake news has considerable negative effects on firms’ operations, such as repeated disruptions of supply chains (Churchill, 2018), and FNaD influence the use of a company’s products or services (Zhang et al., 2019). Sohrabpour et al. (2021) argued that leveraging AI to improve supply chain operations will likely improve firms’ planning, strategy, marketing, logistics, warehousing, and resource management in the presence of any environmental uncertainty, including that caused by FNaD.

Scholars have called for research to attain an in-depth understanding of AI and of how to tailor it to enhance business efficiencies and minimize supply chain disruptions (SCDs) (e.g., Grewal et al., 2021; Churchill, 2018). The extant literature has drawn mixed conclusions on whether AI-driven or hybrid AI decision-making benefits a firm’s supply chain (Shrestha et al., 2019). The question of why some firms are more effective than others in using AI to manage SCDs has largely been overlooked (Toorajipour et al., 2021). Increased research efforts are being made to identify and manage fake news risk in supply chain operations (Reisach, 2021). In today’s digital media landscape, the term ‘fake news’ has gained prominence since the 2016 US presidential elections (Allcott & Gentzkow, 2017). People have been observed to be unable to clearly distinguish between fake and real news, and they tend to perceive ‘fake news’ as a significant issue within the current information landscape (Tong et al., 2020). Decision-makers influenced by FNaD therefore often end up making erroneous decisions and drawing inaccurate conclusions regarding current scenarios (e.g., Lewandowsky et al., 2012; Di Domenico et al., 2021). From a supply chain perspective, researchers have highlighted how FNaD can lead to SCDs (e.g., Gupta et al., 2021; Kovács & Sigala, 2021; Sodhi & Tang, 2021), which can have a far-reaching impact on the functioning of global supply chains.

Additionally, the United Nations (2020) has suggested that, despite the measures put in place to build confidence among people, businesses, and supply chain operations, SCDs have remained a problematic area for businesses in recent years. Resilinc (2021) revealed that SCDs have been increasing by 67% year-over-year, with 83% of such disruptive events being caused by human activity rather than natural disasters. EverStream Analytics (2020) found that 40.5% and 33.4% of businesses respectively get their information and intelligence relating to supply chain issues from their customers and from social media. The detection of fraudulent information is thus critical to avoid such consequences (Kim & Ko, 2021), and businesses need to set up specific processes or routines to filter incoming business-related information and mitigate any possible harm to their operations (Kim & Ko, 2021). Kim and Dennis (2019) emphasized research underpinning emerging technologies, such as AI, suited to tackling FNaD. As FNaD have become increasingly relevant in the field of operations management, and given their effects on decision-making, there is a need to understand what business processes need to be implemented to contain their spread and minimize SCDs.

However, there is still a limited understanding of how AI techniques can help in eliminating FNaD. We therefore sought to define an AI-oriented business process suited to removing the effects of FNaD on decision-making, and set our research question as: “How can firms integrate AI in their operations to reduce the impact of FNaD regarding SCDs?” In answering this question, our study makes three contributions to the literature. First, it develops a new theoretical framework suited to mitigating the impacts of FNaD on SCDs and analyses the relationship using a specific dataset and a support vector machine. The resulting business process manages the dissemination of information, accurately mitigating FNaD and enabling correct decision-making in regard to tackling complex issues (e.g., Jayawickrama et al., 2019). Second, by presenting key findings gleaned by interviewing senior managers with supply chain expertise from three different countries (Indonesia, Malaysia, and Pakistan), our study provides new theoretical evidence regarding how firms can avoid SCDs in emerging economies. To the best of our knowledge, our study is the first to focus on the implications and integration of AI in business processes for the purpose of mitigating the effects of FNaD on SCDs. Our framework thus links the supply chain and AI literatures and explicates their utility in mitigating SCDs against the backdrop of fake news and disinformation campaigns. We adopted a qualitative method that involved integrating the AI literature with research on fake news to reveal how the effectiveness of decision-making can be ensured within supply chain operations. Much previous research has advanced our understanding of fake news detection mechanisms using graphs and summarization techniques (Kim & Ko, 2021), and a recent study has proposed an AI-based, real-time fake news detection system on the basis of a systematic literature review (Gupta et al., 2021).
Third, our study fills a gap in the literature by providing a practical solution aimed at eliminating or reducing FNaD in business scenarios, specifically acting to minimize SCDs. The extant literature is somewhat scattered and fragmented, which has not helped researchers to address many questions about FNaD (Di Domenico & Visentin, 2020). Our study proposes an AI-oriented business process that flags, reduces, or eliminates FNaD before it can reach decision-makers and allows authentic news and information to filter through, supporting supply chain operational resilience and preventing SCDs.

This paper is structured as follows. Section 2 presents a discussion of the related literature, which is followed by an illustration of our research methodology in Sect. 3. In Sect. 4, the implementation details, findings, and proposed model are provided. In Sect. 5, the implications of our model are discussed and, to conclude, future research directions are suggested.

Literature review

Theoretical background

Organizational Information Processing Theory (OIPT) proposes a systematic understanding of how information is processed and exchanged to increase organizational capacities. OIPT reasons that firms need a stabilizing mechanism, in the form of resources and capacities in operations, to cope with uncertainties and manage unforeseen events that disturb normal business and supply chain operations (Wong et al., 2020). Scholarship suggests that SCDs can be caused by disinformation (e.g., Konstantakis et al., 2022; Xu et al., 2020). It is therefore essential for supply chains to cultivate the capability and capacity to proactively filter information and news to improve supply chain operations. Firms can either rely on mechanistic organizational resources to reduce their reliance on information or enhance their information processing capabilities. The more environmental uncertainty firms face, the more information they need to gather and process to achieve better performance (Bode et al., 2011). OIPT proposes that the primary goal of organizational process design is to address uncertainty by acquiring, analyzing, and sharing information from the business environment (Swink & Schoenherr, 2015; Yu et al., 2019), and it addresses the development of organizational capabilities to meet information processing requirements (Wamba et al., 2020). SCDs can be avoided by filtering incoming information so that only accurate and timely information is received. Di Domenico et al. (2021) suggested that, during disruptions such as those affecting supply chains, FNaD may cause the preventable loss of lives and misguide information on business activities and innovation. Fact-checking measures such as “know why”, “know how”, “know what”, and “know when” can be supported by emerging technologies and information processing capabilities (Jayawickrama et al., 2016; Swanson & Wang, 2005).
In this perspective, AI and machine learning (ML) could manage the dissemination of real information by accurately detecting and mitigating false information, enabling correct decisions when tackling difficult issues (Endsley, 2018; Jayawickrama et al., 2019; Roozenbeek & van der Linden, 2019). OIPT thus focuses on linking uncertainty with information needs and information processing capacities, and prescribes organizational designs to reduce uncertainty. Our study accordingly seeks to provide a holistic theoretical framework (integrated with AI and ML), built on OIPT, to minimize the chances of SCDs.

Artificial intelligence and supply chain operations

In academia, the concept of AI was first established in the 1950s (Haenlein & Kaplan, 2019). However, McCulloch and Pitts’ (1943) ideas on logical expression represent a notable earlier landmark, as they led to the development of a neurocomputer design (Milner, 2003). The origins of AI can thus be dated to the 1940s; notably, to Isaac Asimov’s 1942 short story ‘Runaround’, published in Astounding Science Fiction magazine. In it, Asimov formalized his three laws of robotics: first, a robot cannot harm a human being; second, a robot must follow human commands; and third, a robot must defend itself (Haenlein & Kaplan, 2019). In 1955, a research project on AI at Dartmouth College (McCarthy et al., 1955) defined AI as “making a machine behave in ways that would be called intelligent if a human were so behaving” (p. 11). Since 1955, AI has evoked the idea of artificial machines that could simulate the human brain and build abstractions of the environment to work on difficult problems. In 1966, Joseph Weizenbaum created the famous ELIZA computer program, a natural language processing (NLP) tool capable of holding a conversation with a human being and maintaining the illusion of comprehension; this was labelled heuristic programming and AI (Weizenbaum, 1966). In the 1980s, research on backpropagation in neural networks developed rapidly (Zhang & Lu, 2021). Under Ernst Dickmanns, Mercedes-Benz developed a driverless vehicle fitted with cameras, sensors, and an onboard computer system controlling the steering (Delcker, 2018). With the continuous development of AI tools, the success of IBM’s ‘Deep Blue’ chess-playing supercomputer laid the foundations for research on and the application of expert systems (Haenlein & Kaplan, 2019).

AI is viewed as a game-changer able to facilitate both the “abilities to self learn and a race to improve decision quality” (Vincent, 2021, p. 425). Kaplan and Haenlein (2019) defined AI “as a system’s ability to interpret external data correctly, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation” (p. 17). In supply chain management and the manufacturing industry, there has been an upsurge in AI (Kumar et al., 2019) that has significantly impacted operations and human roles in firms (Vincent, 2021). Awan et al. (2021) suggested that AI initiatives in firms’ supply chain operations can improve knowledge of the processes used to generate business performance. AI is a complex and multifaceted construct with profound implications for firms’ operations management (Zeba et al., 2021). The supply chain literature has recently emphasized the link between the application of AI and process improvement (Toorajipour et al., 2021). Although several AI-based supply chain applications have appeared in recent years, little research has explored their use (Riahi et al., 2021). While the debate on the operational outcomes of AI is still ongoing, there is little evidence in the operations management literature of how the adoption of AI may improve supply chain operations (Raisch & Krakowski, 2020).

Recent advancements in material and production technologies hold great possibilities for a better understanding of how to improve other manufacturing and supply chain operations (Grewal et al., 2021). AI-based models provide near-optimal solutions to a wide variety of routing challenges, ensuring on-time deliveries and optimizing warehouse transport (Riahi et al., 2021). However, little attention has been devoted to how the use of AI techniques may affect reverse auctioning involving supply chain partners, or planning for vehicle routing and volume discount acquisition (Toorajipour et al., 2021). By affecting decision-making and increasing effective knowledge creation aimed at developing products customized for specific situations, AI technologies may have significant implications for a firm’s production capabilities (Awan et al., 2021). As a creative and frequently disruptive technology, AI facilitates the design of new products, services, industrial processes, and organizational structures that meet client needs (Wamba-Taguimdje et al., 2020). For B2B companies, customer understanding is critical to improving products and services (Paschen et al., 2019). The integration of AI with the industrial Internet of Things holds significant potential for solving production-process problems and making better-informed decisions (Zeba et al., 2021). Early adopters of AI have created new and improved goods, which has enabled them to outperform the competition (Behl et al., 2021). By analyzing market intelligence, AI can uncover themes and patterns in data and may provide insights into how users creatively alter products and services (Paschen et al., 2019). A growing number of scholars are exploring the influence of AI on supply chain risk management and monitoring systems to avoid SCDs (Toorajipour et al., 2021).
However, little is known about its role in shaping, monitoring, and controlling supply chain operations (Pournader et al., 2021). Although research has found that AI can improve supply chain performance, only a few AI approaches and algorithms have been explored and used in supply chain processes (Riahi et al., 2021).

AI is linked to analytical, self-learning, and predictive machine learning approaches (Shrestha et al., 2019). These methods offer a variety of answers and prescriptive inputs to choose from when deciding how to proceed with complicated scenarios (Belhadi et al., 2021). Even though researchers have applied AI across many fields of study, very few studies have looked at how AI can be used to enhance supply chain operations. Nevertheless, the importance of AI in predicting and mitigating supply chain risk has been well established in the literature (Riahi et al., 2021). AI can accurately and rapidly detect relevant supply chain information by using analytics produced through AI techniques and models, giving managers a greater understanding of how each system operates and helping them discover areas in which those operations can be improved. The development of AI has made it possible to deploy predictive algorithms that allow for faster evaluations and more effective risk minimization across supply chains (Ni et al., 2020). The extant literature on AI argues that applying different machine learning approaches with AI can substantially decrease SCDs (Riahi et al., 2021), and AI and ML enhance operations in many domains, including supply chain management, logistics, and inventory management (Belhadi et al., 2021). Ni et al. (2020) showed that supply chain managers can use AI to watch for and avoid incidents interrupting supply chain operations, from the most prevalent occurrences to unknown factors such as delivery delays and quality defects (Belhadi et al., 2021).

AI provides opportunities and promises a move toward data-driven decision support systems. Despite the integration of AI in many firm processes, there are still challenges regarding the design of firm supply chains that depend heavily on human contributions (Kumar et al., 2019). It has been established in the operations management (OM) literature that AI has a positive impact on various supply chain management activities (Dubey et al., 2021); still, that literature rarely addresses how AI is applied in the OM field, such as in manufacturing, production, warehousing and logistics, and robot dynamics (Toorajipour et al., 2021). Even though the supply chain literature has acknowledged many AI applications, including production forecasting, supplier selection, material consumption forecasting, and customer segmentation (Toorajipour et al., 2021), the AI literature typically revolves around understanding effective ways to combine human intuition and AI-based decision-making (Vincent, 2021). The use of AI technologies gives marketers a competitive edge reflected in marketing tactics and customer behaviors (Jabbar et al., 2020). Customer order processing can be automated with AI, and chatbots can handle any follow-up chores (Paschen et al., 2019), which can increase supply chain effectiveness. It is possible to take proactive measures against supply chain risks by uncovering new trends in the data; this is expected to assist in achieving adaptability and higher levels of supply chain maturity (Riahi et al., 2021). Multiple courses of action are open to firms confronted with the risks linked to investing in AI and its positive impacts on supply chain activities. The proliferation of evolving AI technology has led to premature and conflicting conclusions regarding specific outcomes.

Scholars increasingly recognize the importance of AI in lowering downtime costs, better utilizing real-time data, improving scheduling, and protecting firm operations from risks (Chen et al., 2021). Additionally, Chen et al. (2021) suggested a predictive maintenance framework for the management of assets under pandemic conditions, including new technologies such as AI, for pandemic preparedness and the avoidance of business disruptions. The implementation of AI-based systems influences supply chain inventory management, “for instance performance analysis, resilience analysis or demand forecasting” (Riahi et al., 2021, p. 13). This raises the question of whether the use of AI systems to determine short-order policies and mitigate any bullwhip effects has been adequately addressed in the literature (Preil & Krapp, 2021). A review found that the adoption of AI in supply chains improves performance, lowers costs, minimizes losses, and makes such chains more flexible, agile, and robust (Riahi et al., 2021). Recent advances in AI help supply chain firms to enhance their analytics capabilities, leading to improved operational performance (Dubey et al., 2021). AI-enabled supply chain performance is becoming increasingly important for enhancing financial performance; yet, no studies have hitherto been conducted to gain a better understanding of the critical antecedents of AI in driving supply chain analytics (Dubey et al., 2021). The direct and positive impact of AI-based relational decision-making on firm performance has also been established (e.g., Bag et al., 2021; Behl et al., 2021).

Fake news, disinformation, and supply chain disruptions

Fake news (Oxford English Dictionary, 2021a) is defined as “false reports of events, written and read on websites”, while disinformation (Oxford English Dictionary, 2021b) is construed as “false information that is given deliberately”. The impact of FNaD is substantial, disrupting economic operations and societal activities. FNaD also threaten brand names and potentially affect the consumption of products and services, ultimately impacting supply chain operations and demand (Zhang et al., 2019; Petratos, 2021), which are harmed by panic-driven or poor decisions based on disinformation (e.g., Ahmad et al., 2019; Matheus et al., 2020; Zheng et al., 2021). An SCD is defined as a disturbance in the flows of material, financial, and information resources between firms and their major stakeholders, e.g., suppliers, manufacturers, distributors, retailers, and customers; a disruption may affect supply chain operations for random periods (Mehrotra & Schmidt, 2021). Supply chains encompass the activities needed for firms to deliver products and services to their final consumers, and accurate information is an integral part of such chains, as it enables decision-makers to make decisions on future demand, supply, cash flows, and returns, among other supply chain operations. There are historical examples of how FNaD can affect supply chain and business operations. From the 1950s to 1990, the tobacco industry constantly shared disinformation on the adverse effects of active and secondhand smoke exposure by manipulating research, data, and the media (Bero, 2005; Dearlove et al., 2002). In September 2006, the Royal Society, Britain’s premier scientific academy, wrote to ExxonMobil urging it to stop funding the dozens of groups spreading disinformation on global warming and claiming that the global temperature rise was not related to increases in carbon dioxide levels in the atmosphere (Adam, 2006).
In 2013, the Associated Press official Twitter account was hacked and a tweet was posted about two explosions injuring President Barack Obama; within hours, this wiped US$130 billion from the stock market (Parsons, 2020; Tandoc et al., 2018), affecting stock market operations. In 2017, six UK Indian restaurants fell victim to fake news stories claiming that they were serving human flesh (Barns, 2017; Mccallum, 2017). One restaurant had to cut staff hours and saw its revenue fall by half (National Crime Agency, 2018). Such events can also have indirect effects on supply chain operations, ultimately being conducive to SCDs.

The persuasive power of fake news can continuously damage global supply chains, such as those for meat, vegetables, fresh food, and fruit, in different parts of the world (Xu et al., 2020). Businesses across the board have felt the rapid dissemination of false information and propaganda among suppliers and distributors. Many countries, such as France, Germany, India, and the US, imposed restrictions on products entering and leaving their territories due to disinformation and fake news about COVID-19 (Xu et al., 2020). Moreover, several fake social media posts and items of misinformation about a U.S. food plant fire caused a food supply disruption; the USDA told Reuters via email that it is not true that these fires were started on purpose (Reuters, 2022). The widespread false information about COVID-19 also caused an epidemic of methanol poisoning: it is claimed that 796 Iranians lost their lives to alcohol intoxication after reading online claims that alcohol could treat their illnesses (Mahdavi et al., 2021). This echoed the rapid dissemination of false information regarding COVID-19 on social media at the outbreak’s onset, which disrupted the supply of many food items (Mahdavi et al., 2021). Some people falsely claimed that rinsing the mouth with alcohol would prevent COVID-19 infection (Delirrad & Mohammadi, 2020; Soltaninejad, 2020). Global supply chains have been shaken by the spread of fake news, which has caused widespread disruptions and affected firms’ reputations. The effects of fake news about COVID-19 are still being felt by supply chains across many sectors and have irrevocably affected long-term supply chain strategies.

In more recent times, the COVID-19 pandemic has spawned high volumes of FNaD. One example of disinformation pertaining to a COVID-19 remedy involved a herb named ‘Senna Makki’ in Pakistan: someone began sharing it on social media as a cure for the virus, which caused escalating demand and an increase in price from USD 1.71 to USD 8.57–11.43 per kg within two months (The News, 2020). This kind of fake news could also affect vaccine supply chains. In the stock market context, Clarke et al. (2020) revealed that a well-known crowd-sourced content service for financial market websites had been publishing fake news stories due to its editors’ inability to detect them; this use of fake news had widespread short-term implications for the financial markets. The Kroll global fraud and risk report of 2019/20 described an incident of fake news in the banking industry: after a rival institution purchased an African bank, the purchaser was confronted with a negative social media campaign, fabricated news stories, and manipulated closed-circuit television footage (Booth et al., 2019). These examples demonstrate how FNaD can disrupt supply chains and business operations.

Our review of past studies yielded Table 1, which summarizes key studies interlinking AI, SCDs, and FNaD. These studies were selected from top-ranked journals (e.g., CABS 3-ranked and above) published in the last three years. The lack of research combining all three aspects is clear, with no studies emphasizing their combination. Our study contributes significantly to bridging this gap.

Table 1: Selective studies on AI and supply chain operations

(Behl et al., ), AOR
  Key points: AI, operational efficiency, trust, and transparency
  Research gap: How does a firm establish links between AI, SCM, and risk management?
  AI + SCDs + FNaD: Enabling artificial intelligence by limiting erroneous information.

(Chen et al., ), AOR
  Key points: Proactive maintenance and a human-centered AI decision system.
  Research gap: What is the impact of trust in AI on human-centric decision support?
  AI + SCDs + FNaD: AI’s application for predictive maintenance to reduce uncertainty.

(Bag et al., ), IMM
  Key points: AI, relationship management, and firm performance
  Research gap: The use of AI is a prerequisite for market knowledge creation.
  AI + SCDs + FNaD: Artificial intelligence, fake news, and relationship management.

(Dubey et al., ), IMM
  Key points: Alliance management, AI, and pandemic crises
  Research gap: The examination of the key antecedents of AI, as driven by supply chain analytics
  AI + SCDs + FNaD: AI-driven supply chain firms’ internal dynamic capabilities lower SCDs.

(Grewal et al., ), JBR
  Key points: The dark side of AI; does it make procurement more efficient?
  Research gap: How can AI improve fraud detection in SC?
  AI + SCDs + FNaD: AI is expected to minimize the SCDs caused by uncertain external events.

(Mikalef et al., ), IMM
  Key points: AI capability, uncertain environment, and firm performance.
  Research gap: The examination of the antecedents of developing resources inside one’s own company
  AI + SCDs + FNaD: AI can improve the decision-making process; lowering risk facilitates high-value analysis.

(Pournader et al., ), IJPE
  Key points: AI, operational performance, and information management
  Research gap: The examination of the relationship between MIS and human behavior in SCM
  AI + SCDs + FNaD: AI can improve SC operational performance in the face of information uncertainty.

(Zeba et al., ), TFSC
  Key points: AI technologies and production systems.
  Research gap: AI technologies, production systems, and leveraging knowledge on firm sustainability performance.
  AI + SCDs + FNaD: AI supports knowledge protection and minimizes the risk of disinformation.

(Farrokhi et al., ), IMM
  Key points: AI, crisis management, and decision-making.
  Research gap: The use of AI to explain business situations is still evolving.
  AI + SCDs + FNaD: AI can be used to detect disinformation and misinformation.

(Shrestha et al., ), CMR
  Key points: AI trust and developing internal capabilities.
  Research gap: How is the hybrid human-AI decision-making process beneficial?
  AI + SCDs + FNaD: AI could be useful for accurate decision-making regarding disinformation and the reduction of risk.

Note: (AOR): “Annals of Operations Research”, (CMR): “California Management Review”, (IMM): “Industrial Marketing Management”, (IJPE): “International Journal of Production Economics”, (JBR): “Journal of Business Research”, (TFSC): “Technological Forecasting & Social Change”

Methodology

We used mixed methods, combining an AI and ML-driven method with case study interviews to further validate our model. The following procedure was used to execute the AI and ML-driven method for data analysis. (1) Dataset enrichment was based on two techniques, i.e., the Porter stemmer (PS) and Term Frequency-Inverse Document Frequency (TF-IDF). (2) Query expansion was utilized for natural language processing (NLP) to precisely predict the accuracy of fake news. (3) A Support Vector Machine (SVM) classifier was used to train the model and then to classify fake and real news outcomes, supporting effective decision-making regarding SCDs. Table 2 provides further justification for this approach. Previous studies have also used measures such as precision, recall, and accuracy, and we integrated similar measures in our analysis.
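The stemming, TF-IDF, and SVM steps above can be sketched in a few lines of scikit-learn. This is a minimal illustrative sketch, not the study's actual implementation: the news items and labels are invented, the query-expansion step (2) is omitted, and a crude suffix-stripping function stands in for the full Porter stemmer (a real pipeline would use a proper PS implementation, e.g., NLTK's).

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def simple_stem(token: str) -> str:
    """Crude stand-in for the Porter stemmer: strip a few common suffixes."""
    for suffix in ("ing", "ed", "es", "s", "ly"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: -len(suffix)]
    return token

def preprocess(text: str) -> str:
    """Lowercase, tokenize, and stem a news item."""
    return " ".join(simple_stem(t) for t in re.findall(r"[a-z]+", text.lower()))

# Toy labeled news items (hypothetical; 1 = fake, 0 = real).
items = [
    "miracle herb cures the virus and doubles shipping capacity",
    "port fires were started deliberately to destroy food supplies",
    "shipping line announces a new weekly service between two ports",
    "factory reports quarterly output broadly in line with forecasts",
]
labels = [1, 1, 0, 0]

# Step (1): dataset enrichment via stemming, then TF-IDF features.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(preprocess(i) for i in items)

# Step (3): train an SVM classifier on the TF-IDF vectors.
clf = LinearSVC(C=10.0)
clf.fit(X, labels)

# Score on the training set; a real study would hold out a test set
# and report precision and recall alongside accuracy.
train_accuracy = clf.score(X, labels)
```

In the actual analysis, query expansion would enrich each item with related terms before vectorization, and the classifier would be evaluated on held-out news items rather than its own training data.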

Table 2: Algorithms and approaches used by previous studies for FNaD detection

Jiang et al. ( )
  Application: Fake news detection
  Approach/classifiers/algorithms: SVM; Logistic Regression; Decision Tree and others; Random Forest; TFIDF

Wang et al. ( )
  Application: Fake news detection
  Approach/classifiers/algorithms: SVM; binary classifier

Zhou et al. ( )
  Application: Early fake news detection
  Approach/classifiers/algorithms: SVM; Logistic Regression; Random Forests and others

Kareem & Awan ( )
  Application: Fake news classification
  Approach/classifiers/algorithms: SVM; K-Nearest Neighbor; Random Forest and others; PS

Ibrishimova & Li ( ; )
  Application: Fake news detection
  Approach/classifiers/algorithms: Binary classifier; TensorFlow’s linear classifier and others; semantic similarity measures and WordNet ontology

Poddar & Umadevi ( )
  Application: Accurate fake news detection
  Approach/classifiers/algorithms: Support Vector Machine (SVM); Logistic Regression; Decision Tree; Artificial Neural Networks

Sabeeh et al. ( )
  Application: Enhanced fake news detection
  Approach/classifiers/algorithms: SVM; Decision Tree

Second, the cases involved in the interviews were from Indonesia, Malaysia, and Pakistan. These countries share similarities: they are emerging economies with common economic and political ties as well as government-to-government contacts, and all are making strides toward digitization and AI. Numerous previous studies (e.g., Atkin et al., 2017; Ghazali et al., 2018; Rahi et al., 2019; Siew et al., 2020) have also used targeted populations from these countries to study comparable issues. The case study method was best suited to achieving the objectives of our study, considering the explanatory nature of the research question (Eisenhardt & Graebner, 2007; Yin, 2014) and the fact that this is an emergent research area to be combined with modern methodological innovations such as ML (Gupta et al., 2021; Kovács & Sigala, 2021; Sodhi & Tang, 2021; Sheng et al., 2020). Further, multiple case studies are assumed to be more reliable because they enable phenomena to be observed and studied in many contexts, thereby helping to provide replication logic for particular cases that would otherwise be viewed as independent (Yin, 2014), and because they are useful for theory development (Eisenhardt, 1989).

A goal-directed sampling technique, i.e., an incremental selection method (Denzin & Lincoln, 2005), was employed to investigate the flagging/reducing/eliminating of FNaD. This technique is effective for the collection of both qualitative and quantitative data because the sample is purposely chosen based on the project’s unique requirements and the evaluator’s judgment (Polit & Beck, 2012; Vos et al., 2011). The data acquired using this technique tend to be of a high standard only if the participants are willing and able to provide accurate information that enables the researcher to gain a thorough knowledge of an experience. We adopted this sampling technique because our target key management functionaries would be the participants best suited to furnish the essential data for our study (Creswell, 2014), providing useful insight into what works and what does not in terms of its theoretical and technical components. Any case variations were scrutinized by industry to achieve a better understanding of FNaD impacts.

To conduct the interviews, we approached 33 firms: 36.36% Malaysian, 36.36% Pakistani, and 27.28% Indonesian. Eventually, a total of 16 firm representatives participated in our study: six each from Malaysia and Pakistan, and four from Indonesia. According to Teddlie and Yu (2007), this sample size was sufficient to produce narrative records adequate to provide viewpoints directly relevant to the topic under study. The sample firms were small and medium-sized, with staff numbers ranging from 18 to 183. Small businesses have learning systems that are complex and dynamic and geared to generate the efficiency needed to sustain such firms in the market (Zhang et al., 2006); an in-depth investigation was therefore needed to understand the stances of small and medium-sized firms on FNaD. Fifteen semi-structured in-depth interviews were prepared, based on the interviewees’ understanding of their respective firms and of inflow and outflow information features such as sources, routines, systems, and processes. The interviewees were owners, chief executive officers, directors, and associated top managers. The participants’ varied perspectives reduced dependency on any single participant’s perspective, enriching the data obtained. We also used project reports, operational policies, and other relevant documentation to identify and triangulate themes during the data analysis (Yin, 2014). Due to the COVID-19 pandemic, 90% of the interviews were conducted over the internet (e.g., Google Meet) and any observations were recorded and reviewed later.

We followed the interview protocol to integrate the philosophy, processes, and questions of the study to attain reliability (Frost et al., 2020; Ponterotto, 2005). To delve deep into the impacts of fake news and any remedial actions, we used relevant prompts in open-ended questions. The extensive conversations yielded key findings regarding the solutions adopted to counter the effects of fake news on business and supply chain operations by reflecting on three industry and technology developments, gathering in-depth and valuable narratives in the process. All interviews were audio-recorded, transcribed into MS Word, and thematically analyzed via NVivo 12. To ensure validity and reliability, each researcher independently coded the responses given to the open-ended questions to fully grasp any concepts not readily provided by existing theories or field research. The answers were discreetly coded to fully capture any new sentiments, knowledge, and opinions that may not be available in the literature on the selected industries and countries. This practice informed the AI-based solution to fake news issues in business. Furthermore, other consistency checks were carried out, whereby the data and preliminary interpretations were presented to the interviewees from whom they had been sourced to determine their credibility and incorporate any necessary changes; the scripts were then finalized following their approval (Merriam, 2009).

Implementation procedures, findings, and proposed model

AI and ML implementation: obtaining the dataset

The dataset for the AI and ML approach was drawn from four major Pakistani online news sources—i.e., 'Geo News', 'The Dawn', 'Express Tribune', and 'The News'. Approximately 500 pages from each source were scrutinized to extract the relevant affairs and topics published from January to April 2021. The SCD data were divided into natural, human-caused, maritime, and mass disruptions related to FNaD. Table 3 provides some examples.

Table 3 Examples of SCD-related words/data

Natural: "Tornadoes and Severe Storms", "Hurricanes", "Tropical Storms", "Floods", "Wildfires", "Earthquakes", "Drought"

Human-caused: "Industrial accidents", "Shootings", "Acts of terrorism", "Mass Labor strikes", "Incidents of mass violence", "Nuclear Facilities failure", "Political Crises", "Wars"

Maritime: "Offshore Oil Rig Mishaps", "Cruise Vessel Mishaps", "Commercial Fishing Mishaps", "Accidents on Tugboats", "Accidents on Crude Oil Tankers and Cargo Ships", "Grounding of Ships", "Crane Mishaps", "Accidents in Shipyards", "Cargo Hauling Accidents", "Port Delays"

Mass: "Infectious disease outbreaks", "Incidents of community unrest", "Mass migration and refugees", "Covid-19", "Pandemic"

The words were used in different contexts. For example, words such as health, vaccine, Covid-19, and pandemic appeared mostly in the main text of articles related to health supply chains and their disruptions, while words such as political, freedom, rights, democracy, and military appeared in political articles. We followed a step-by-step procedure for the implementation.
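
The keyword-tagging step described above can be sketched in a few lines of Python. The category names and example keywords mirror Table 3, but the function name and the count-based scoring rule are illustrative assumptions, not the authors' implementation.

```python
from typing import Optional

# Keyword lists per SCD category (abridged from Table 3).
SCD_KEYWORDS = {
    "natural": ["tornado", "hurricane", "tropical storm", "flood",
                "wildfire", "earthquake", "drought"],
    "human-caused": ["industrial accident", "shooting", "terrorism",
                     "labor strike", "mass violence", "nuclear", "war"],
    "maritime": ["oil rig", "cargo ship", "tanker", "shipyard", "port delay"],
    "mass": ["infectious disease", "community unrest", "migration",
             "covid-19", "pandemic"],
}

def tag_scd_category(text: str) -> Optional[str]:
    """Return the SCD category whose keywords occur most often, or None."""
    text = text.lower()
    counts = {cat: sum(text.count(kw) for kw in kws)
              for cat, kws in SCD_KEYWORDS.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None
```

For example, `tag_scd_category("Port delays after a tanker accident")` yields `"maritime"`, while text containing no SCD keyword yields `None`.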

Pre-processing dataset

  • PS (Porter Stemmer) was used to index each news page/article and to filter out stop, repeated, and common words, avoiding noise in the dataset. The algorithm was run over several rounds to remove non-relevant words from the datasets/textual scripts before applying the defined rules (Zhang et al., 2020). It has proven to be one of the best-performing stemming techniques (Joshi et al., 2016).
  • TF-IDF (Term Frequency–Inverse Document Frequency) was utilized for the classical ML models. TF-IDF is a text classification technique used to organize textual documents from raw datasets into predefined categories by representing each document as a feature vector whose weights indicate the contribution of each term to the classification (Deng et al., 2004; Dogan & Uysal, 2019). Its effectiveness in the weighting process is well established (Dogan & Uysal, 2019).

Dimension reduction and features engineering

Query expansion


Query Expansion: The query expansion approach was utilized for natural language processing (NLP) to precisely compute the relevancy of FNaD-related keywords by calculating the semantic distance between SCD-related keywords, queries, and news articles. This phase explains how the keywords used in SCDs train a classifier to predict the appropriateness of SCDs in the news. We analyze the significance of each SCD keyword to a news category. For example, given news articles in the category "Business", we wish to evaluate whether the category is appropriate for SCD keywords such as "natural disasters", "man-made disasters", "marine incidents", and "mass trauma incidents". The degree of appropriateness of the news category with each SCD keyword is based on the feature's relevancy score: -1 indicates that a news item in the category "Sports" is utterly inappropriate to be examined against the supply chain disruption keyword "Offshore Oil Rig Mishaps", whereas +1 indicates that it is perfectly appropriate. Table 4 shows examples of the relevancy scores of news categories against supply chain disruption keywords present in our database.

Table 4 Example of supply chain disruption features

Category (News) | Supply Chain Disruption | Relevancy Score | Appropriate (Output)
World | Wildfires | +0.72 | Yes
Politics | Political Crises | +1.00 | Yes
Opinion | Mass migration | +0.54 | No
Tech | Natural Disasters | -0.87 | No
Science | Nuclear Facilities failure | +0.21 | No
Health | Infectious disease outbreaks | +1.00 | Yes
Business | Port Delays | +0.68 | Yes
Sports | Offshore Oil Rig Mishaps | 0.00 | No

The WordNet ontology (Leão et al., 2019) was utilized here to calculate the semantic differences between the multiple keywords. Semantic similarity analysis determines the degree of semantic similarity between texts. In the fields of NLP, natural language understanding (NLU), and information retrieval, such text semantic similarity metrics are among the most widely used, with WordNet serving as the linguistic source because of its wide vocabulary and explicit, definite semantic hierarchy (Zhu et al., 2018, 2020; Gao et al., 2015).
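
The WordNet-based similarity step can be illustrated with a Wu-Palmer-style measure over a toy is-a hierarchy; the tiny taxonomy below is a hypothetical stand-in for WordNet's explicit semantic hierarchy, and the helper names are assumptions.

```python
# Toy "is-a" edges (child -> parent), standing in for WordNet hypernyms.
TOY_HYPERNYMS = {
    "hurricane": "storm", "tornado": "storm", "storm": "disaster",
    "flood": "disaster", "disaster": "event", "strike": "event",
}

def path_to_root(word: str) -> list:
    """Hypernym chain from the word up to the hierarchy's root."""
    path = [word]
    while path[-1] in TOY_HYPERNYMS:
        path.append(TOY_HYPERNYMS[path[-1]])
    return path

def wup_similarity(a: str, b: str) -> float:
    """Wu-Palmer similarity: 2*depth(LCS) / (depth(a) + depth(b))."""
    pa, pb = path_to_root(a), path_to_root(b)
    lcs_depth = 0
    for ancestor in pa:                    # first shared ancestor is the LCS
        if ancestor in pb:
            lcs_depth = len(pa) - pa.index(ancestor)  # root has depth 1
            break
    return 2 * lcs_depth / (len(pa) + len(pb))
```

Under this toy hierarchy, "hurricane" is closer to "tornado" (shared parent "storm") than to "strike" (shared ancestor only at "event"), which is the kind of semantic distance the relevancy scores build on.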

Determine the ML task and partition the dataset

Classifier Training: The problem of SCD appropriateness to a news article is formulated as a binary classification with two possible outcomes, "fake" or "real". The training set was created from randomly selected news articles from the dataset. To label the training dataset, a supervised learning task was performed with human assistance to teach the machine whether a news category such as "World" is appropriate for supply chain disruption keywords such as "Wildfires", "Political Crises", and "Mass migration". Each pair of news category and SCD keyword was considered appropriate if its relevancy score was at least +0.65. We trained on the SCD keyword appropriateness of the selected articles in the training set to predict the appropriateness scores in the test set. We trained numerous frequently used classifiers; however, because Support Vector Machines (SVM) produced the best results, we only report the findings of the SVM classifier in this study.
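
The labelling rule stated above (a category/keyword pair is appropriate when its relevancy score reaches +0.65) reduces to a one-line threshold function. The pair data mirror Table 4; the helper names are illustrative.

```python
THRESHOLD = 0.65  # relevancy score at which a pair counts as appropriate

pairs = [("World", "Wildfires", 0.72),
         ("Opinion", "Mass migration", 0.54),
         ("Politics", "Political Crises", 1.00),
         ("Sports", "Offshore Oil Rig Mishaps", 0.00)]

def label(score: float) -> int:
    """1 = appropriate (positive training example), 0 = not appropriate."""
    return int(score >= THRESHOLD)

y = [label(score) for (_category, _keyword, score) in pairs]
```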

Machine learning technique and interpreting results

Support Vector Machine: We develop a model for each news item by training a binary classifier with two possible outcomes, positive (real) and negative (fake) news, given the appropriateness of SCDs. We chose a binary classification because it is assumed that a responsible user will want to verify the news before spreading it: to determine whether the news is real or fake, the user would most likely check online sources (databases/publishers) and, in the process, may verify or refute the information depending on the source. If the model is trained using a supervised learning technique with only two possible outcomes, it is forced to make a binary decision, which significantly increases its accuracy. Therefore, the Support Vector Machine (SVM) (Cortes & Vapnik, 1995) classifier was utilized to train the model. SVM is a binary classification model that separates the training samples into two classes using support vector hyperplanes in a vector space (Melki et al., 2017; Tharwat, 2019). Supervised learning approaches such as SVM are widely used machine learning methods that use training examples or datasets to build models for classification and regression problems (Melki et al., 2018a, b).
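
A minimal, self-contained stand-in for this SVM step uses scikit-learn's LinearSVC over TF-IDF features. The four-document corpus and its labels are invented for illustration, not the paper's dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["port delays disrupt cargo shipping lanes",
         "flood warning issued for coastal ports",
         "celebrity wins televised talent contest",
         "film festival announces award winners"]
labels = [1, 1, 0, 0]  # 1 = SCD-relevant signal, 0 = not

# TF-IDF features feeding a linear support vector classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)
pred = clf.predict(["storm delays cargo at the harbour"])[0]
```

An unseen sentence sharing vocabulary with the positive class (here "delays", "cargo") falls on the positive side of the learned hyperplane.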

The metrics we utilized for evaluation were Mean Reciprocal Rank (MRR) (Ghanbari & Shakery, 2019), Precision at 5 (P@5) (Sharma et al., 2020), and Normalized Discounted Cumulative Gain at 5 (NDCG@5) (Alqahtani et al., 2020). A threshold was used for the 5 best matches, and the top match was given as the output. These evaluation measures account for the testing accuracy of the constructed model. The comparison between the training-set and test-set values for MRR, P@5, and NDCG@5 showed only slight differences, which validates the accuracy of the model on the test set with scores of 0.647, 0.656, and 0.511, respectively.
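
The three metrics can be written out from their standard definitions; since the paper's exact implementation is not published, the functions below are sketches of those textbook formulas.

```python
import math

def mrr(first_hit_ranks):
    """Mean Reciprocal Rank over queries; ranks are 1-based positions."""
    return sum(1.0 / r for r in first_hit_ranks) / len(first_hit_ranks)

def precision_at_k(relevant, retrieved, k=5):
    """Fraction of the top-k retrieved items that are relevant (P@5)."""
    return sum(1 for item in retrieved[:k] if item in relevant) / k

def ndcg_at_k(gains, k=5):
    """NDCG@5: DCG of the ranking divided by the ideal (sorted) DCG."""
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))
    ideal = sum(g / math.log2(i + 2)
                for i, g in enumerate(sorted(gains, reverse=True)[:k]))
    return dcg / ideal if ideal else 0.0
```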

Model results

Model | MRR | P@5 | NDCG@5
Training Set | 0.663 | 0.681 | 0.594
Test Set | 0.647 | 0.656 | 0.511

Note: MRR = Mean Reciprocal Rank; P@5 = Precision at 5; NDCG@5 = Normalized Discounted Cumulative Gain at 5

Interview-based validation and proposed model

The impact of fake news on supply chain operations emerged as the first theme from the analysis. When asked about their knowledge and understanding of FNaD, the respondents gave detailed replies. Some argued that it is one of the most harmful aspects of the internet, with the potential to create SCDs. Respondent 4 said that “Internet information makes us more attentive.” Respondents 1, 5, 9, 11, and 13 shared the negative impacts of FNaD on quick, routine, and time-consuming decision making: “Quick decisions based on the inclusion of the FNaD could be a transcendent disaster for any firm’s supply chain” (Respondent 5), “if FNaD is included in routine or prolonged decision making, it definitely agonizes the future result” (Respondent 2), and “If any decision is based on lies, how can someone expect positive consequences?” (Respondent 11). Two respondents shared that the purpose of spreading FNaD is to create a specific mindset and narrative in the economy/market to manipulate it: “Misleading information supposed to build a specific narrative and sentiment in the market, enemies and indirect competitors usually involve in it” (Respondent 1). The respondents indicated that fake news directly affects operations and has an indirect influence on supply chain operations, contributing to SCDs.

Dealing with FNaD emerged as the second theme of the analysis. The respondents provided numerous options in this regard, suggesting that global communities, social media sites, government, technology, and top management can play key roles in countering FNaD. FNaD should be dealt with smoothly and promptly; otherwise, it will negatively affect supply chain performance: “As a business community, together we should deal with misleading information and news. Otherwise it could become a scar in business performance” (Respondent 16). Respondents 12, 15, and 16 mentioned that FNaD should be dealt with as a pandemic like the COVID-19 one. Some respondents named the entities they considered primarily responsible for keeping FNaD under control, with Respondents 8, 2, and 11 pointing at the global internet community, social media sites, and the government. On the other hand, Respondents 1, 3, 9, 10, 12, and 13 believed that the responsibility for curbing the effects of FNaD on supply chain performance and decision-making should fall on specific industries and businesses. Respondent 13 further explained that “as a business entity, we need to find a mechanism which guides us that specific news or information is legit or not”, while Respondent 6 opined that “in today’s world, if your business isn’t data-driven, then you are definitely living in the jungle.”

FNaD filtering and counter-modeling process suggestions and preparations formed the third theme of the analysis. This enriched session contributed many insights into, and inputs about, countermeasures to FNaD in business and supply chain operations. When we asked Respondent 7 about this, he shared the following Bill Gates quote: “The world won’t care about your self-esteem. The world will expect you to accomplish something before you feel good about yourself”, and further added, “As a business caretaker, this is my responsibility is to shelter and protect my company from fake news, so that, at the end of the day, I will have no regrets.” Respondents 3, 5, 7, 10, 14, 15, and 16 suggested that AI will provide solutions suited to control and counter FNaD. Respondent 10 advised that “Data crawler integration with AI could provide a solution to FNaD”. Respondent 11 shared a similar thought: “Each government should prepare AI-based processes according to specific society and economy to rectify the impact of fake news, and that process in the form of software should be provided free of cost to businesses”. The participants also highlighted the importance of using multiple sources to determine whether a news item is fake or real, as a single source could be biased or politically driven. Based on the procedure applied for AI, the SVM, and the interview-based validation, we propose the FNaD detection model shown in Fig. 1, which encapsulates the key findings.

Fig. 1 A fake news and disinformation detection model that uses AI and ML

As depicted in Fig. 1, practical decision-making for SCDs is characterized by the predominant use of experiences, judgments, and multiple media resources, which can carry both real and fake news. The data demonstrate that the severity of the fake news impact is prompting businesses to invest in more robust, collaborative, and networked supply chains, and that they should prepare AI-based processes tailored to specific societies and economies to rectify the impact of fake news. Datasets from multiple sources help decision-makers determine whether a particular news item or piece of information is legitimate. Drawing on data from multiple sources allows decision-makers to apply machine learning approaches and artificial intelligence, and therefore to better select the appropriate mechanisms to distinguish fake from real news.

Contributions, implications, conclusion, and future research directions

Contributions and theoretical implications

Our study fills the knowledge gap about SCDs by utilizing AI and ML to help act against FNaD affecting supply chain operations. Loureiro et al. (2020) suggested that AI has diverse applications in several industrial domains, and Dolgui & Ivanov (2021) hinted that AI could assist in improving resilience against, and mitigation of, SCDs. We combined a qualitative case method, AI, and SVM to reveal how effective decisions can be made within supply chain operations. Extant research has advanced our understanding of fake news detection mechanisms using graph and summarization techniques (Kim & Ko, 2021), and a recent study proposed an AI-based real-time fake news detection system based on a systematic literature review (Gupta et al., 2021). Our study is novel and distinct from previous ones in that it develops an effective decision-making model for supply chain firms to avoid disruptions caused by FNaD. As such, it contributes to the SCD literature in ways that will interest scholars and practitioners.

Additionally, the study bridges a gap in the literature by providing a practical solution suited to eliminate FNaD in business scenarios affected by SCDs. The scattered and fragmented extant literature had left many questions about FNaD unanswered (Di Domenico & Visentin, 2020). Therefore, the main contribution of our study is to propose an AI- and ML-oriented process capable of flagging/reducing/eliminating FNaD before it reaches decision-makers and of identifying any authentic news and information, thus counteracting SCD-aimed news.

The United Nations (2020) has urged the implementation of actions against misinformation and cybercrime. Edwards et al. (2021) concluded that such ‘digital wildfire’ spreads faster than original, legitimate news. We propose an AI-integrated FNaD detection process that initiates when a news item or piece of information is fed into it. It then begins verification within defined sources (e.g., major newspapers’ websites) and, in the next step, seeks similarities between news or information keywords. Once the AI process reaches a decision, it provides an output by classifying the news item as FNaD (rejection) or as real/authentic news or information (acceptance).
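
The steps of this process can be sketched as a single function: take a news item, verify it within defined sources, then check keyword similarity before accepting or rejecting it. The overlap proxies and thresholds below are assumptions standing in for the real source lookups and semantic-distance computations.

```python
def fnad_check(item, source_texts, scd_keywords):
    """Return 'accept' (authentic) or 'reject' (FNaD) for a news item."""
    words = set(item.lower().split())
    # Step 1: verification within defined sources (word overlap is a crude
    # proxy for a lookup against major newspapers' websites).
    corroborated = any(len(words & set(src.lower().split())) >= 3
                       for src in source_texts)
    # Step 2: similarity between the item's keywords and SCD keywords.
    relevant = bool(words & scd_keywords)
    return "accept" if corroborated and relevant else "reject"

# Toy inputs: two 'source' texts and a small SCD keyword set.
sources = ["heavy flood delays cargo at the port", "elections were held today"]
keywords = {"flood", "port", "strike"}
```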

FNaD can be significant determinants of SCDs, as is highlighted in research (Kovács & Sigala, 2021). They adversely influence firms’ operations, imports, and exports, and alter purchasing behaviors (e.g., Di Domenico et al., 2021; Petit et al., 2019; Wang et al., 2021), creating unnatural phenomena that interrupt supply chain operations and widen demand-supply loopholes (e.g., De Chenecey, 2018; Dwivedi et al., 2020). The FNaD model shows the ability to control the inclusion of FNaD in firms’ activities. By proposing and testing a FNaD detection model that uses AI and ML, our study contributes to the management and detection of FNaD in firms’ supply chain operations; this model could help control a potential digital wildfire before it damages firms’ operations.

Managerial and policy implications

Our model detects FNaD early, before they can affect firms or managerial decision-making. The current pandemic scenario has turned the attention of managers and governments toward FNaD and their impacts on supply chain operations, the economy, and society. With AI and ML becoming an integral part of firms and operations, managers should consider their adoption to deal with FNaD, given their potential to detect and filter them out. Our model is executed and managed based on major local databases and news outlets to support supply chain operations; should managers wish to integrate further international data and news outlets, they could do so based on their requirements. The implementation of our model depends on willing management and an adequate IT infrastructure, with even small and medium enterprises being able to invest in its application. The proposed process, capable of detecting and filtering out FNaD, protects firms from their impacts, enabling managers to engage in decision-making based on legitimate and valid news and information.

From the perspective of specific industries, newsrooms could utilize the FNaD detection model to confirm a news item from different sources. In other words, the FNaD detection model can help in the timely development of a counter-strategy by detecting any fake news before it spreads and causes SCDs. The phenomenon has recently been seen in the context of the COVID-19 pandemic, with people sharing unverified news items on the virus and the side effects of vaccines over social media, thus causing SCDs in vaccine distribution. Moreover, pre-emptive fake news detection can be equally beneficial in avoiding financial market crashes. For government policymakers, the FNaD detection model can be a comprehensive tool to be used during pandemics or similar situations. Governments have been seen to regularly change their decisions, rules, and regulations. Therefore, at the government level, the FNaD detection model can ensure that accurate and on-time legitimate information is received to deal with any economic, social, and health conditions. Another implication for governments pertains to the provision of this process—for free or at a discount—to all business-related entities, especially micro, small, and medium firms. Such a decision would create trust between the government and those entities.

Conclusion and future research directions

SCDs are problematic for business operations, and disinformation is believed to be one of their causes. We therefore proposed the FNaD model, which filters FNaD by utilizing AI and ML. The model draws on different internet sources to verify received information, and then decides and notifies whether the received news is authentic. Using a mixed-method approach, we proposed a way to tackle SCD-creating FNaD with AI- and ML-based techniques. In this regard, future research could, first, focus on more specific FNaD and supply chain operation case studies, such as the detection of FNaD in humanitarian operations using AI and ML approaches. Additionally, it could integrate specific operational performance measures into these approaches, combining them with advanced visual methods. Also, given the fast pace of scientific development, any new and effective algorithm or technique could be incorporated into the proposed model in the future. Further, testing the model in longitudinal studies aimed at exploring and understanding developments in SCDs linked with FNaD would make it more reliable and refined.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Pervaiz Akhtar, Email: [email protected] .

Arsalan Mujahid Ghouri, Email: [email protected] .

Haseeb Ur Rehman Khan, Email: haseebrkhan6@gmail.com.

Mirza Amin ul Haq, Email: [email protected] .

Usama Awan, Email: [email protected] .

Nadia Zahoor, Email: [email protected] .

Zaheer Khan, Email: [email protected] .

Aniqa Ashraf, Email: Aniqa@mail.ustc.edu.cn.

  • Adam, D. (2006). Royal Society tells Exxon: Stop funding climate change denial. The Guardian. https://www.theguardian.com/environment/2006/sep/20/oilandpetrol.business
  • Ahmad, A., Webb, J., Desouza, K. C., & Boorman, J. (2019). Strategically-motivated advanced persistent threat: Definition, process, tactics and a disinformation model of counterattack. Computers & Security, 86, 402–418. doi: 10.1016/j.cose.2019.07.001
  • Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236. doi: 10.1257/jep.31.2.211
  • Alnaied, A., Elbendak, M., & Bulbul, A. (2020). An intelligent use of stemmer and morphology analysis for Arabic information retrieval. Egyptian Informatics Journal, 21(4), 209–217. doi: 10.1016/j.eij.2020.02.004
  • Alqahtani, A., Alnefaie, M., Alamri, N., & Khorsi, A. (2020). Enhancing the capabilities of solr information retrieval system: Arabic language. In 2020 3rd International Conference on Computer Applications & Information Security (ICCAIS) (pp. 1–5). IEEE.
  • Awan, U., Kanwal, N., Alawi, S., Huiskonen, J., & Dahanayake, A. (2021). Artificial intelligence for supply chain success in the era of data analytics. Studies in Computational Intelligence, 935, 3–21.
  • Atkin, D., Chaudhry, A., Chaudry, S., Khandelwal, A. K., & Verhoogen, E. (2017). Organizational barriers to technology adoption: Evidence from soccer-ball producers in Pakistan. The Quarterly Journal of Economics, 132(3), 1101–1164. doi: 10.1093/qje/qjx010
  • Bag, S., Gupta, S., Kumar, A., & Sivarajah, U. (2021). An integrated artificial intelligence framework for knowledge creation and B2B marketing rational decision making for improving firm performance. Industrial Marketing Management, 92, 178–189. doi: 10.1016/j.indmarman.2020.12.001
  • Barns, S. (2017). Trolls show people how to create fake news stories and spread them on Facebook… as curry houses fall victim to false ‘human meat’ claims. The Scottish Sun. https://www.thescottishsun.co.uk/living/1077871/trolls-show-people-how-to-create-fake-news-stories-and-spread-them-on-facebook-as-curry-houses-fall-victim-to-false-human-meat-claims/
  • Behl, A., Dutta, P., Luo, Z., & Sheorey, P. (2021). Enabling artificial intelligence on a donation-based crowdfunding platform: A theoretical approach. Annals of Operations Research, 1–29.
  • Belhadi, A., Mani, V., Kamble, S. S., Khan, S. A. R., & Verma, S. (2021). Artificial intelligence-driven innovation for enhancing supply chain resilience and performance under the effect of supply chain dynamism: An empirical investigation. Annals of Operations Research.
  • Bero, L. A. (2005). Tobacco industry manipulation of research. Public Health Reports, 120(2), 200–208. doi: 10.1177/003335490512000215
  • Bode, C., Wagner, S. M., Petersen, K. J., & Ellram, L. M. (2011). Understanding responses to supply chain disruptions: Insights from information processing and resource dependence perspectives. Academy of Management Journal, 54(4), 833–856. doi: 10.5465/amj.2011.64870145
  • Booth, A., Hamilton, B., & Vintiadis, M. (2019). Fake news, real problems: Combating social media disinformation. Global Fraud and Risk Report 2019/20, 11th annual edition. https://www.kroll.com/-/media/kroll/pdfs/publications/global-fraud-and-risk-report-2019-20.pdf
  • Brock, J. K. U., & von Wangenheim, F. (2019). Demystifying AI: What digital transformation leaders can teach you about realistic artificial intelligence. California Management Review, 61(4), 110–134. doi: 10.1177/1536504219865226
  • Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers’ attitudes and behavioural intentions towards using artificial intelligence for organizational decision-making. Technovation, 106, 102312. doi: 10.1016/j.technovation.2021.102312
  • Chen, J., Lim, C. P., Tan, K. H., Govindan, K., & Kumar, A. (2021). Artificial intelligence-based human-centric decision support framework: An application to predictive maintenance in asset management under pandemic environments. Annals of Operations Research, 1–24.
  • Churchill, F. (2018). Unilever says fake news makes digital supply chain unsustainable. https://www.cips.org/supply-management/news/2018/february/unilever-says-fake-news-makes-digital-supply-chain-unsustainable/. Accessed 29 November 2021.
  • Clarke, J., Chen, H., Du, D., & Hu, Y. J. (2020). Fake news, investor attention, and market reaction. Information Systems Research, 32(1), 35–52. doi: 10.1287/isre.2019.0910
  • Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273–297. doi: 10.1007/BF00994018
  • Creswell, J. W. (2014). Qualitative, quantitative and mixed methods approaches. Thousand Oaks, CA: Sage.
  • Cui, L., Wu, H., Wu, L., Kumar, A., & Tan, K. H. (2022). Investigating the relationship between digital technologies, supply chain integration and firm resilience in the context of COVID-19. Annals of Operations Research, 1–29.
  • De Chenecey, S. P. (2018). The post-truth business: How to rebuild brand authenticity in a distrusting world. Kogan Page Publishers.
  • Dearlove, J. V., Bialous, S. A., & Glantz, S. A. (2002). Tobacco industry manipulation of the hospitality industry to maintain smoking in public places. Tobacco Control, 11(2), 94–104. doi: 10.1136/tc.11.2.94
  • Delcker, J. (2018). The man who invented the self-driving car (in 1986). https://www.politico.eu/article/delf-driving-car-born-1986-ernst-dickmanns-mercedes/. Accessed 26 November 2021.
  • Deng, Z. H., Tang, S. W., Yang, D. Q., Li, M. Z. L. Y., & Xie, K. Q. (2004). A comparative study on feature weight in text categorization. In Asia-Pacific Web Conference (pp. 588–597). Springer, Berlin, Heidelberg.
  • Denzin, N. K., & Lincoln, Y. S. (2005). The SAGE handbook of qualitative research. Thousand Oaks, CA: Sage.
  • Di Domenico, G., & Visentin, M. (2020). Fake news or true lies? Reflections about problematic contents in marketing. International Journal of Market Research, 62(4), 409–417. doi: 10.1177/1470785320934719
  • Di Domenico, G., Sit, J., Ishizaka, A., & Nunan, D. (2021). Fake news, social media and marketing: A systematic review. Journal of Business Research, 124, 329–341. doi: 10.1016/j.jbusres.2020.11.037
  • Dogan, T., & Uysal, A. K. (2019). On term frequency factor in supervised term weighting schemes for text classification. Arabian Journal for Science and Engineering, 44(11), 9545–9560. doi: 10.1007/s13369-019-03920-9
  • Dolgui, A., & Ivanov, D. (2021). Ripple effect and supply chain disruption management: New trends and research directions. International Journal of Production Research, 59(1), 102–109. doi: 10.1080/00207543.2021.1840148
  • Dubey, R., Bryde, D. J., Blome, C., Roubaud, D., & Giannakis, M. (2021). Facilitating artificial intelligence-powered supply chain analytics through alliance management during the pandemic crises in the B2B context. Industrial Marketing Management, 96, 135–146. doi: 10.1016/j.indmarman.2021.05.003
  • Dwivedi, Y. K., Kelly, G., Janssen, M., Rana, N. P., Slade, E. L., & Clement, M. (2018). Social media: The good, the bad, and the ugly. Information Systems Frontiers, 20(3), 419–423. doi: 10.1007/s10796-018-9848-5
  • Dwivedi, Y. K., Ismagilova, E., Hughes, D. L., Carlson, J., Filieri, R., Jacobson, J., Jain, V., Karjaluoto, H., Kefi, H., Krishen, A. S., Kumar, V., Rahman, M. M., Raman, R., Rauschnabel, P. A., Rowley, J., Salo, J., Tran, G. A., & Wang, Y. (2020). Setting the future of digital and social media marketing research: Perspectives and research propositions. International Journal of Information Management, 102168.
  • Edwards, A., Webb, H., Housley, W., Beneito-Montagut, R., Procter, R., & Jirotka, M. (2021). Forecasting the governance of harmful social media communications: Findings from the digital wildfire policy Delphi. Policing and Society, 31(1), 1–19. doi: 10.1080/10439463.2020.1839073
  • Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. doi: 10.2307/258557
  • Eisenhardt, K. M., & Graebner, M. E. (2007). Theory building from cases: Opportunities and challenges. Academy of Management Journal, 50(1), 25–32. doi: 10.5465/amj.2007.24160888
  • Endsley, M. R. (2018). Combating information attacks in the age of the Internet: New challenges for cognitive engineering. Human Factors, 60(8), 1081–1094. doi: 10.1177/0018720818807357
  • EverStream (2020). COVID-19: The future of supply chain. https://www.everstream.ai/risk-center/special-reports/covid-19-the-future-of-supply-chain/
  • Farrokhi, A., Shirazi, F., Hajli, N., & Tajvidi, M. (2020). Using artificial intelligence to detect crisis related to events: Decision making in B2B by artificial intelligence. Industrial Marketing Management, 91, 257–273. doi: 10.1016/j.indmarman.2020.09.015
  • Frost, D. M., Hammack, P. L., Wilson, B. D., Russell, S. T., Lightfoot, M., & Meyer, I. H. (2020). The qualitative interview in psychology and the study of social change: Sexual identity development, minority stress, and health in the generations study. Qualitative Psychology, 7(3), 245–266. doi: 10.1037/qup0000148
  • Gadri, S., & Moussaoui, A. (2015). Information retrieval: A new multilingual stemmer based on a statistical approach. In 2015 3rd International Conference on Control, Engineering & Information Technology (CEIT) (pp. 1–6). IEEE.
  • Gao, J. B., Zhang, B. W., & Chen, X. H. (2015). A WordNet-based semantic similarity measurement combining edge-counting and information content theory. Engineering Applications of Artificial Intelligence, 39, 80–88. doi: 10.1016/j.engappai.2014.11.009
  • Ghanbari, E., & Shakery, A. (2019). ERR.Rank: An algorithm based on learning to rank for direct optimization of Expected Reciprocal Rank. Applied Intelligence, 49(3), 1185–1199. doi: 10.1007/s10489-018-1330-z
  • Ghazali, E. M., Mutum, D. S., Chong, J. H., & Nguyen, B. (2018). Do consumers want mobile commerce? A closer look at M-shopping and technology adoption in Malaysia. Asia Pacific Journal of Marketing and Logistics, 30(4), 1064–1086. doi: 10.1108/APJML-05-2017-0093
  • Grewal D, Guha A, Satornino CB, Schweiger EB. Artificial intelligence: The light and the darkness. Journal of Business Research. 2021; 136 :229–236. doi: 10.1016/j.jbusres.2021.07.043. [ CrossRef ] [ Google Scholar ]
  • Grover, P., Kar, A. K., & Dwivedi, Y. K. (2020). Understanding artificial intelligence adoption in operations management: insights from the review of academic literature and social media discussions. Annals of Operations Research . Springer US
  • Guess A, Nagler J, Tucker J. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances. 2019; 5 (1):eaau4586. doi: 10.1126/sciadv.aau4586. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Gupta, A., Li, H., Farnoush, A., & Jiang, K. (2021). W. Understanding Patterns of COVID Infodemic: A Systematic and Pragmatic Approach to Curb Fake News.Journal of Business Research [ PMC free article ] [ PubMed ]
  • Haenlein M, Kaplan A. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review. 2019; 61 (4):5–14. doi: 10.1177/0008125619864925. [ CrossRef ] [ Google Scholar ]
  • Hopp T, Ferrucci P, Vargo CJ. Why do people share ideologically extreme, false, and misleading content on social media? A self-report and trace data–based analysis of countermedia content dissemination on Facebook and Twitter. Human Communication Research. 2020; 46 (4):357–384. doi: 10.1093/hcr/hqz022. [ CrossRef ] [ Google Scholar ]
  • Ibrishimova, M. D., & Li, K. F. (2019). A machine learning approach to fake news detection using knowledge verification and natural language processing. In International Conference on Intelligent Networking and Collaborative Systems (pp.223–234). Springer, Cham
  • Ibrishimova, M. D., & Li, K. F. (2018). Automating incident classification using sentiment analysis and machine learning. In International Conference on Intelligent, Secure, and Dependable Systems in Distributed and Cloud Environments (pp.50–62). Springer, Cham
  • Jabbar A, Akhtar P, Dani S. Real-time big data processing for instantaneous marketing decisions: A problematization approach. Industrial Marketing Management. 2020; 90 :558–569. doi: 10.1016/j.indmarman.2019.09.001. [ CrossRef ] [ Google Scholar ]
  • Jayawickrama U, Liu S, Hudson Smith M, Akhtar P, Bashir A. Knowledge retention in ERP implementations: the context of UK SMEs. Production Planning & Control. 2019; 30 (10–12):1032–1047. doi: 10.1080/09537287.2019.1582107. [ CrossRef ] [ Google Scholar ]
  • Jayawickrama U, Liu S, Smith MH. Empirical evidence of an integrative knowledge competence framework for ERP systems implementation in UK industries. Computers in Industry. 2016; 82 :205–223. doi: 10.1016/j.compind.2016.07.005. [ CrossRef ] [ Google Scholar ]
  • Jiang T, Li JP, Haq AU, Saboor A, Ali A. A Novel Stacking Approach for Accurate Detection of Fake News. Ieee Access : Practical Innovations, Open Solutions. 2021; 9 :22626–22639. doi: 10.1109/ACCESS.2021.3056079. [ CrossRef ] [ Google Scholar ]
  • Joshi A, Thomas N, Dabhade M. Modified porter stemming algorithm. International Journal of Computer Science and Information Technologies. 2016; 7 (1):266–269. [ Google Scholar ]
  • Kampakis, S., & Adamides, A. (2014). Using Twitter to predict football outcomes.arXiv preprint arXiv:1411.1243
  • Kaplan A, Haenlein M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons. 2019; 62 (1):15–25. doi: 10.1016/j.bushor.2018.08.004. [ CrossRef ] [ Google Scholar ]
  • Kareem, I., & Awan, S. M. (2019). Pakistani Media Fake News Classification using Machine Learning Classifiers. In 2019 International Conference on Innovative Computing (ICIC) (pp.1–6). IEEE
  • Katsaros, D., Stavropoulos, G., & Papakostas, D. (2019). Which machine learning paradigm for fake news detection?. In 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI) (pp.383–387). IEEE
  • Konstantakis, K. N., Cheilas, P. T., Melissaropoulos, I. G., Xidonas, P., & Michaelides, P. G. (2022). Supply chains and fake news: a novel input–output neural network approach for the US food sector.Annals of Operations Research,1–16
  • Kim A, Dennis AR. Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly: Management Information Systems. 2019; 43 (3):1025–1039. doi: 10.25300/MISQ/2019/15188. [ CrossRef ] [ Google Scholar ]
  • Kim G, Ko Y. Effective fake news detection using graph and summarization techniques. Pattern Recognition Letters. 2021; 151 :135–139. doi: 10.1016/j.patrec.2021.07.020. [ CrossRef ] [ Google Scholar ]
  • Kovács G, Sigala IF. Lessons learned from humanitarian logistics to manage supply chain disruptions. Journal of Supply Chain Management. 2021; 57 (1):41–49. doi: 10.1111/jscm.12253. [ CrossRef ] [ Google Scholar ]
  • Kumar V, Rajan B, Venkatesan R, Lecinski J. Understanding the role of artificial intelligence in personalized engagement marketing. California Management Review. 2019; 61 (4):135–155. doi: 10.1177/0008125619859317. [ CrossRef ] [ Google Scholar ]
  • Leão F, Revoredo K, Baião F. Extending WordNet with UFO foundational ontology. Journal of Web Semantics. 2019; 57 :100499. doi: 10.1016/j.websem.2019.02.002. [ CrossRef ] [ Google Scholar ]
  • Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest. 2012; 13 (3):106–131. doi: 10.1177/1529100612451018. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Li L, Zhang Q, Wang X, Zhang J, Wang T, Gao T, Duan W, Tsoi KK, Wang F. Characterizing the propagation of situational information in social media during COVID-19 epidemic: A case study on Weibo. IEEE Transactions on Computational Social Systems. 2020; 7 (2):556–562. doi: 10.1109/TCSS.2020.2980007. [ CrossRef ] [ Google Scholar ]
  • Loureiro SMC, Guerreiro J, Tussyadiah I. Artificial intelligence in business: State of the art and future research agenda. Journal of Business Research. 2020; 129 :911–926. doi: 10.1016/j.jbusres.2020.11.001. [ CrossRef ] [ Google Scholar ]
  • Mahdavi SA, Kolahi AA, Akhgari M, Gheshlaghi F, Gholami N, Moshiri M, Mohtasham N, Ebrahimi S, Ziaeefar P, McDonald R, Tas B, Kazemifar AM, Amirabadizadeh A, Ghadirzadeh M, Jamshidi F, Dadpour B, Mirtorabi SD, Farnaghi F, Zamani N, Hassanian-Moghaddam H. COVID-19 pandemic and methanol poisoning outbreak in Iranian children and adolescents: A data linkage study. Alcoholism: Clinical and Experimental Research. 2021; 45 (9):1853–1863. doi: 10.1111/acer.14680. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Matheus R, Janssen M, Maheshwari D. Data science empowering the public: Data-driven dashboards for transparent and accountable decision-making in smart cities. Government Information Quarterly. 2020; 37 (3):101284. doi: 10.1016/j.giq.2018.01.006. [ CrossRef ] [ Google Scholar ]
  • Mccallum, S. (2017). Restaurant hit by ‘human meat’ fake news claims, BBC . https://www.bbc.com/news/newsbeat-39966215
  • McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence, august 31, 1955. AI Magazine. 2006; 27 (4):12. [ Google Scholar ]
  • McCulloch WS, Pitts W. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics. 1943; 5 (4):115–133. doi: 10.1007/BF02478259. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Mehrotra M, Schmidt W. The value of supply chain disruption duration information. Production and Operations Management. 2021; 30 (9):3015–3035. doi: 10.1111/poms.13415. [ CrossRef ] [ Google Scholar ]
  • Melki G, Cano A, Ventura S. MIRSVM: multi-instance support vector machine with bag representatives. Pattern Recognition. 2018; 79 :228–241. doi: 10.1016/j.patcog.2018.02.007. [ CrossRef ] [ Google Scholar ]
  • Melki G, Kecman V, Ventura S, Cano A. OLLAWV: online learning algorithm using worst-violators. Applied Soft Computing. 2018; 66 :384–393. doi: 10.1016/j.asoc.2018.02.040. [ CrossRef ] [ Google Scholar ]
  • Melki G, Cano A, Kecman V, Ventura S. Multi-target support vector regression via correlation regressor chains. Information Sciences. 2017; 415 :53–69. doi: 10.1016/j.ins.2017.06.017. [ CrossRef ] [ Google Scholar ]
  • Merriam SB. Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass; 2009. [ Google Scholar ]
  • Mikalef P, Conboy K, Krogstie J. Artificial intelligence as an enabler of B2B marketing: A dynamic capabilities micro-foundations approach. Industrial Marketing Management. 2021; 98 :80–92. doi: 10.1016/j.indmarman.2021.08.003. [ CrossRef ] [ Google Scholar ]
  • Milner P. A brief history of the Hebbian learning rule. Canadian Psychology. 2003; 44 (1):5–9. doi: 10.1037/h0085817. [ CrossRef ] [ Google Scholar ]
  • National Crime Agency (2018). UK national cyber security centre, the cyber threat to UK business, 2017–2018 Report, April 10, 2018. Unclassified, National Security Archive. https://nsarchive.gwu.edu/media/17676/ocr
  • Ni D, Xiao Z, Lim MK. A systematic review of the research trends of machine learning in supply chain management. International Journal of Machine Learning and Cybernetics. 2020; 11 (7):1463–1482. doi: 10.1007/s13042-019-01050-0. [ CrossRef ] [ Google Scholar ]
  • Niessner, M. (2018). Does fake news sway financial markets?”Yale Insights. https://insights.som.yale.edu/insights/does-fake-news-sway-financial-markets
  • Oxford English Dictionary (2020a). Oxford, UK:Oxford University Press. https://www.oxfordlearnersdictionaries.com/definition/english/fake-news
  • Oxford English Dictionary (2020). Oxford, UK:Oxford University Press. https://www.oxfordlearnersdictionaries.com/definition/english/disinformation
  • Parsons, D. D. (2020). The impact of fake news on company value: evidence from tesla and galena biopharma. Chancellor’s Honors Program Projects. https://trace.tennessee.edu/utk_chanhonoproj/2328
  • Paschen J, Kietzmann J, Kietzmann TC. Artificial intelligence (AI) and its implications for market knowledge in B2B marketing. Journal of Business and Industrial Marketing. 2019; 34 (7):1410–1419. doi: 10.1108/JBIM-10-2018-0295. [ CrossRef ] [ Google Scholar ]
  • Petratos PN. Misinformation, disinformation, and fake news: Cyber risks to business. Business Horizons. 2021; 64 (6):763–774. doi: 10.1016/j.bushor.2021.07.012. [ CrossRef ] [ Google Scholar ]
  • Petit TJ, Croxton KL, Fiksel J. The evolution of resilience in supply chain management: A retrospective on ensuring supply chain resilience. Journal of Business Logistics. 2019; 40 (1):56–65. doi: 10.1111/jbl.12202. [ CrossRef ] [ Google Scholar ]
  • Poddar, K., & Umadevi, K. S. (2019). ). Comparison of various machine learning models for accurate detection of fake news. In 2019 Innovations in Power and Advanced Computing Technologies (i-PACT) (Vol.1, pp.1–5). IEEE
  • Polit DF, Beck CT. Gender bias undermines evidence on gender and health. Qualitative Health Research. 2012; 22 (9):1298. doi: 10.1177/1049732312453772. [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ponterotto JG. Qualitative research in counseling psychology: A primer on research paradigms and philosophy of science. Journal of Counseling Psychology. 2005; 52 (2):126–136. doi: 10.1037/0022-0167.52.2.126. [ CrossRef ] [ Google Scholar ]
  • Pournader M, Ghaderi H, Hassanzadegan A, Fahimnia B. Artificial intelligence applications in supply chain management. International Journal of Production Economics. 2021; 241 :108250. doi: 10.1016/j.ijpe.2021.108250. [ CrossRef ] [ Google Scholar ]
  • Preil, D., & Krapp, M. (2021). Artificial intelligence-based inventory management: a Monte Carlo tree search approach.Annals of Operations Research,1–25
  • Rahi S, Ghani MA, Ngah AH. Integration of unified theory of acceptance and use of technology in internet banking adoption setting: Evidence from Pakistan. Technology in Society. 2019; 58 :101120. doi: 10.1016/j.techsoc.2019.03.003. [ CrossRef ] [ Google Scholar ]
  • Raisch S, Krakowski S. Artificial Intelligence and Management: The Automation-Augmentation Paradox. Academy of Management Review. 2020; 46 (1):1–48. [ Google Scholar ]
  • Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping business with artificial intelligence: Closing the gap between ambition and action. MIT Sloan Management Review , 59 (1). Retrieved from https://www.proquest.com/docview/1950374030?pq-origsite=gscholar&fromopenview=true
  • Reisach U. The responsibility of social media in times of societal and political manipulation. European Journal of Operational Research. 2021; 291 (3):906–917. doi: 10.1016/j.ejor.2020.09.020. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Resilinc (2021). Supply chain disruptions up 67% in 2020 with factory fires taking top spot for second year in a row. Retrieved from https://www.resilinc.com/press-release/supply-chain-disruptions-up-67-in-2020-with-factory-fires-taking-top-spot-for-second-year-in-a-row/
  • Reuters (2022). Fact check-Food processing plant fires in 2022 are not part of a conspiracy to trigger U.S. food shortages. Reuters . Retrieved from https://www.reuters.com/article/factcheck-processing-fire-idUSL2N2WW2CY
  • Riahi Y, Saikouk T, Gunasekaran A, Badraoui I. Artificial intelligence applications in the supply chain: A descriptive bibliometric analysis and future research directions. Expert Systems with Applications. 2021; 173 :114702. doi: 10.1016/j.eswa.2021.114702. [ CrossRef ] [ Google Scholar ]
  • Roozenbeek J, Van Der Linden S. The fake news game: actively inoculating against the risk of misinformation. Journal of Risk Research. 2019; 22 (5):570–580. doi: 10.1080/13669877.2018.1443491. [ CrossRef ] [ Google Scholar ]
  • Roscoe RD, Grebitus C, O’Brian J, Johnson AC, Kula I. Online information search and decision making: Effects of web search stance. Computers in Human Behavior. 2016; 56 :103–118. doi: 10.1016/j.chb.2015.11.028. [ CrossRef ] [ Google Scholar ]
  • Sabeeh, V., Zohdy, M., & Al Bashaireh, R. (2019). Enhancing the Fake News Detection by Applying Effective Feature Selection Based on Semantic Sources. In 2019 International Conference on Computational Science and Computational Intelligence (CSCI) (pp.1365–1370). IEEE
  • Sharma, M., Luthra, S., Joshi, S., & Kumar, A. (2021). Implementing challenges of artificial intelligence: Evidence from the public manufacturing sector of an emerging economy.Government Information Quarterly,101624
  • Sharma, V. K., Mittal, N., & Vidyarthi, A. (2020). Context-based translation for the out of vocabulary words applied to Hindi-English cross-lingual information retrieval.IETE Technical Review,1–10
  • Sheng J, Amankwah-Amoah J, Khan Z, Wang X. COVID-19 Pandemic in the New Era of Big Data Analytics: Methodological Innovations and Future Research Directions. British Journal of Management. 2020; 32 (4):1164–1183. doi: 10.1111/1467-8551.12441. [ CrossRef ] [ Google Scholar ]
  • Shrestha YR, Ben-Menahem SM, von Krogh G. Organizational Decision-Making Structures in the age of artificial intelligence. California Management Review. 2019; 61 (4):66–83. doi: 10.1177/0008125619862257. [ CrossRef ] [ Google Scholar ]
  • Siew EG, Rosli K, Yeow PH. Organizational and environmental influences in the adoption of computer-assisted audit tools and techniques (CAATTs) by audit firms in Malaysia. International Journal of Accounting Information Systems. 2020; 36 :100445. doi: 10.1016/j.accinf.2019.100445. [ CrossRef ] [ Google Scholar ]
  • Sodhi M, Tang C. Supply chain management for extreme conditions: Research opportunities. Journal of Supply Chain Management. 2021; 57 (1):7–16. doi: 10.1111/jscm.12255. [ CrossRef ] [ Google Scholar ]
  • Sohrabpour V, Oghazi P, Toorajipour R, Nazarpour A. Export sales forecasting using artificial intelligence. Technological Forecasting and Social Change. 2021; 163 :120480. doi: 10.1016/j.techfore.2020.120480. [ CrossRef ] [ Google Scholar ]
  • Swanson EB, Wang P. Knowing why and how to innovate with packaged business software. Journal of Information Technology. 2005; 20 (1):20–31. doi: 10.1057/palgrave.jit.2000033. [ CrossRef ] [ Google Scholar ]
  • Swink M, Schoenherr T. The effects of cross-functional integration on profitability, process efficiency, and asset productivity. Journal of Business Logistics. 2015; 36 (1):69–87. doi: 10.1111/jbl.12070. [ CrossRef ] [ Google Scholar ]
  • Talamo A, Marocco S, Tricol C. “The Flow in the funnel”: Modeling organizational and individual decision-making for designing financial AI-based systems. Frontiers in Psychology. 2021; 12 :697101. doi: 10.3389/fpsyg.2021.697101. [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tandoc EC, Jr., Lim ZW, Ling R. Defining “fake news” A typology of scholarly definitions. Digital Journalism. 2018; 6 (2):137–153. doi: 10.1080/21670811.2017.1360143. [ CrossRef ] [ Google Scholar ]
  • Teddlie C, Yu F. Mixed methods sampling: A typology with examples. Journal of Mixed Methods Research. 2007; 1 (1):77–100. doi: 10.1177/1558689806292430. [ CrossRef ] [ Google Scholar ]
  • The News (2020). Growing demand drives herb prices up, The News . https://www.thenews.com.pk/print/669097-growing-demand-drives-herb-prices-up
  • Tharwat A. Parameter investigation of support vector machine classifier with kernel functions. Knowledge and Information Systems. 2019; 61 (3):1269–1302. doi: 10.1007/s10115-019-01335-4. [ CrossRef ] [ Google Scholar ]
  • Tong C, Gill H, Li J, Valenzuela S, Rojas H. Fake news is anything they say!”—Conceptualization and weaponization of fake news among the American public. Mass Communication and Society. 2020; 23 (5):755–778. doi: 10.1080/15205436.2020.1789661. [ CrossRef ] [ Google Scholar ]
  • Toorajipour R, Sohrabpour V, Nazarpour A, Oghazi P, Fischl M. Artificial intelligence in supply chain management: A systematic literature review. Journal of Business Research. 2021; 122 :502–517. doi: 10.1016/j.jbusres.2020.09.009. [ CrossRef ] [ Google Scholar ]
  • United Nations (2020). UN tackles ‘infodemic’ of misinformation and cybercrime in COVID-19 crisis. Retrieved from https://www.un.org/en/un-coronavirus-communications-team/un-tackling-%E2%80%98infodemic%E2%80%99-misinformation-and-cybercrime-covid-19
  • Vincent VU. Integrating intuition and artificial intelligence in organizational decision-making. Business Horizons. 2021; 64 (4):425–438. doi: 10.1016/j.bushor.2021.02.008. [ CrossRef ] [ Google Scholar ]
  • Vos, A. D., Strydom, H., Fouche, C. B., & Delport, C. S. L. (2005). Research at grassroots. For the social sciences and human service professions . Pretoria: Van Schaik Publishers
  • Wamba SF, Dubey R, Gunasekaran A, Akter S. The performance effects of big data analytics and supply chain ambidexterity: The moderating effect of environmental dynamism. International Journal of Production Economics. 2020; 222 :107498. doi: 10.1016/j.ijpe.2019.09.019. [ CrossRef ] [ Google Scholar ]
  • Wamba-Taguimdje SL, Wamba F, Kala Kamdjoug S, Wanko T. Influence of artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects. Business Process Management Journal. 2020; 26 (7):1893–1924. doi: 10.1108/BPMJ-10-2019-0411. [ CrossRef ] [ Google Scholar ]
  • Wang X, Reger RK, Pfarrer MD. Faster, hotter, and more linked in: managing social disapproval in the social media era. Academy of Management Review. 2021; 46 (2):275–298. doi: 10.5465/amr.2017.0375. [ CrossRef ] [ Google Scholar ]
  • Wang, Y., Qian, S., Hu, J., Fang, Q., & Xu, C. (2020). Fake news detection via knowledge-driven multimodal graph convolutional networks. In Proceedings of the 2020 International Conference on Multimedia Retrieval (pp.540–547)
  • Wardle, C. (2017). Fake news. It’s complicated. https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79 . Accessed 29 November 2021
  • Weizenbaum J. ELIZA—A Computer Program For the Study of Natural Language Communication Between Man And Machine. Communications of the ACM. 1966; 9 (1):36–45. doi: 10.1145/357980.357991. [ CrossRef ] [ Google Scholar ]
  • Wong CW, Lirn TC, Yang CC, Shang KC. Supply chain and external conditions under which supply chain resilience pays: An organizational information processing theorization. International Journal of Production Economics. 2020; 226 :107610. doi: 10.1016/j.ijpe.2019.107610. [ CrossRef ] [ Google Scholar ]
  • Xu Z, Elomri A, Kerbache L, Omri E. Impacts of COVID-19 on global supply chains: Facts and perspectives. IEEE Engineering Management Review. 2020; 48 (3):153–166. doi: 10.1109/EMR.2020.3018420. [ CrossRef ] [ Google Scholar ]
  • Yin RK. Case study research: Design and methods. 5. Thousand Oaks, CA: Sage Publications; 2014. [ Google Scholar ]
  • Yu W, Chavez R, Jacobs M, Wong CY, Yuan C. Environmental scanning, supply chain integration, responsiveness, and operational performance: an integrative framework from an organizational information processing theory perspective. International Journal of Operations & Production Management. 2019; 39 (5):787–814. doi: 10.1108/IJOPM-07-2018-0395. [ CrossRef ] [ Google Scholar ]
  • Zeba G, Dabić M, Čičak M, Daim T, Yalcin H. Technology mining: Artificial intelligence in manufacturing . Technological Forecasting and Social Change. 2021; 171 :120971. doi: 10.1016/j.techfore.2021.120971. [ CrossRef ] [ Google Scholar ]
  • Zhang C, Gupta A, Kauten C, Deokar AV, Qin X. Detecting fake news for reducing misinformation risks using analytics approaches. European Journal of Operational Research. 2019; 279 (3):1036–1052. doi: 10.1016/j.ejor.2019.06.022. [ CrossRef ] [ Google Scholar ]
  • Zhang C, Lu Y. Study on artificial intelligence: The state of the art and future prospects. Journal of Industrial Information Integration. 2021; 23 :100224. doi: 10.1016/j.jii.2021.100224. [ CrossRef ] [ Google Scholar ]
  • Zhang M, Li X, Yue S, Yang L. An empirical study of TextRank for keyword extraction. Ieee Access : Practical Innovations, Open Solutions. 2020; 8 :178849–178858. doi: 10.1109/ACCESS.2020.3027567. [ CrossRef ] [ Google Scholar ]
  • Zhang M, Macpherson A, Jones O. Conceptualizing the learning process in SMEs: improving innovation through external orientation. International Small Business Journal. 2006; 24 (3):299–323. doi: 10.1177/0266242606063434. [ CrossRef ] [ Google Scholar ]
  • Zheng K, Zhang Z, Chen Y, Wu J. Blockchain adoption for information sharing: risk decision-making in spacecraft supply chain. Enterprise Information Systems. 2021; 15 (8):1070–1091. doi: 10.1080/17517575.2019.1669831. [ CrossRef ] [ Google Scholar ]
  • Zhou X, Jain A, Phoha VV, Zafarani R. Fake news early detection: A theory-driven model. Digital Threats: Research and Practice. 2020; 1 (2):1–25. doi: 10.1145/3377478. [ CrossRef ] [ Google Scholar ]
  • Zhu X, Li F, Chen H, Peng Q. An efficient path computing model for measuring semantic similarity using edge and density. Knowledge and Information Systems. 2018; 55 (1):79–111. doi: 10.1007/s10115-017-1078-5. [ CrossRef ] [ Google Scholar ]
  • Zhu X, Yang X, Huang Y, Guo Q, Zhang B. Measuring similarity and relatedness using multiple semantic relations in WordNet. Knowledge and Information Systems. 2020; 62 (4):1539–1569. doi: 10.1007/s10115-019-01387-6. [ CrossRef ] [ Google Scholar ]




  22. Interventions against misinformation also increase skepticism toward

    Efforts to tackle false information through fact-checking or media literacy initiatives increases the public's skepticism toward 'fake news'. However, they also breed distrust in genuine, fact ...

  23. DOCX Mrs. Solarez

    Fake news sites are designed to look like real news, but do not follow the same journalistic standards that you would expect from a real news source. The information in the article may be misleading or completely false. Fake news has become a big problem with the growth of social media, with stories about political candidates, vaccines, and other hot topics being passed as real. The motive for ...