
Intended for healthcare professionals



Discourse analysis

  • Brian David Hodges, associate professor, vice chair (education), and director 1
  • Ayelet Kuper, assistant professor 2
  • Scott Reeves, associate professor 3
  • 1 Department of Psychiatry, Wilson Centre for Research in Education, University of Toronto, 200 Elizabeth Street, Eaton South 1-565, Toronto, ON, Canada M5G 2C4
  • 2 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
  • 3 Department of Psychiatry, Li Ka Shing Knowledge Institute, Centre for Faculty Development, and Wilson Centre for Research in Education, University of Toronto, 200 Elizabeth Street, Eaton South 1-565, Toronto, ON, Canada M5G 2C4
  • Correspondence to: B D Hodges brian.hodges{at}utoronto.ca

This article explores how discourse analysis is useful for a wide range of research questions in health care and the health professions

Previous articles in this series discussed several methodological approaches used by qualitative researchers in the health professions. This article focuses on discourse analysis. It provides background information for those who will encounter this approach in their reading, rather than instructions for conducting such research.

What is discourse analysis?

Discourse analysis is about studying and analysing the uses of language. Because the term is used in many different ways, we have simplified approaches to discourse analysis into three clusters (table 1) and illustrated how each of these approaches might be used to study a single domain: doctor-patient communication about diabetes management (table 2). Regardless of approach, a vast array of data sources is available to the discourse analyst, including transcripts from interviews, focus groups, samples of conversations, published literature, media, and web-based materials.


Table 1 Three approaches to discourse analysis

Table 2 Three approaches to a specific research question: example of doctor-patient communication about diabetes management

What is formal linguistic discourse analysis?

The first approach, formal linguistic discourse analysis, involves a structured analysis of text in order to find general underlying rules of linguistic or communicative function behind the text.[4] For example, Lacson and colleagues compared human-human and machine-human dialogues in order to study the possibility of using computers to compress human conversations about patients in a dialysis unit into a form that physicians could use to make clinical decisions.[5] They transcribed phone conversations between nurses and 25 adult dialysis patients over a three-month period and coded all 17 385 words by semantic type (categories of meaning) and structure (for example, sentence length, word position). They presented their work as a “first step towards an automatic analysis of spoken medical dialogue” that would allow physicians to “answer questions …
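Coding utterances by semantic type and structural features, in the spirit of Lacson and colleagues' study, can be sketched in a few lines of Python. The categories and keyword lists below are illustrative assumptions, not the scheme the authors actually derived from their data:

```python
import re

# Hypothetical semantic categories for a dialysis-unit dialogue.
# Lacson et al. derived their categories empirically; these keyword
# lists are invented for illustration only.
SEMANTIC_TYPES = {
    "symptom": {"pain", "nausea", "dizzy", "swelling"},
    "medication": {"dose", "pill", "insulin", "heparin"},
    "scheduling": {"appointment", "tomorrow", "monday"},
}

def code_utterance(utterance):
    """Tag each word with a semantic type and record simple
    structural features (utterance length, word position)."""
    words = re.findall(r"[a-z']+", utterance.lower())
    coded = []
    for position, word in enumerate(words):
        sem_type = next(
            (t for t, vocab in SEMANTIC_TYPES.items() if word in vocab),
            "other",
        )
        coded.append({"word": word, "type": sem_type, "position": position})
    return {"length": len(words), "tokens": coded}

result = code_utterance("I felt dizzy after the last dose.")
print(result["length"])  # 7
print([t["type"] for t in result["tokens"]])
# ['other', 'other', 'symptom', 'other', 'other', 'other', 'medication']
```

A real formal linguistic analysis would build the semantic categories from the transcripts themselves rather than from a fixed keyword list.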

Critical Discourse Analysis | Definition, Guide & Examples

Published on August 23, 2019 by Amy Luo . Revised on June 22, 2023.

Critical discourse analysis (or discourse analysis) is a research method for studying written or spoken language in relation to its social context. It aims to understand how language is used in real-life situations.

When you conduct discourse analysis, you might focus on:

  • The purposes and effects of different types of language
  • Cultural rules and conventions in communication
  • How values, beliefs and assumptions are communicated
  • How language use relates to its social, political and historical context

Discourse analysis is a common qualitative research method in many humanities and social science disciplines, including linguistics, sociology, anthropology, psychology and cultural studies.  

Table of contents

  • What is discourse analysis used for?
  • How is discourse analysis different from other methods?
  • How to conduct discourse analysis
  • Other interesting articles

Conducting discourse analysis means examining how language functions and how meaning is created in different social contexts. It can be applied to any instance of written or oral language, as well as non-verbal aspects of communication such as tone and gestures.

Materials that are suitable for discourse analysis include:

  • Books, newspapers and periodicals
  • Marketing material, such as brochures and advertisements
  • Business and government documents
  • Websites, forums, social media posts and comments
  • Interviews and conversations

By analyzing these types of discourse, researchers aim to gain an understanding of social groups and how they communicate.


Unlike linguistic approaches that focus only on the rules of language use, discourse analysis emphasizes the contextual meaning of language.

It focuses on the social aspects of communication and the ways people use language to achieve specific effects (e.g. to build trust, to create doubt, to evoke emotions, or to manage conflict).

Instead of focusing on smaller units of language, such as sounds, words or phrases, discourse analysis is used to study larger chunks of language, such as entire conversations, texts, or collections of texts. The selected sources can be analyzed on multiple levels.

Critical discourse analysis: levels of communication

  • Vocabulary: Words and phrases can be analyzed for ideological associations, formality, and euphemistic and metaphorical content.
  • Grammar: The way that sentences are constructed (e.g., verb tenses, active or passive construction, and the use of imperatives and questions) can reveal aspects of intended meaning.
  • Structure: The structure of a text can be analyzed for how it creates emphasis or builds a narrative.
  • Genre: Texts can be analyzed in relation to the conventions and communicative aims of their genre (e.g., political speeches or tabloid newspaper articles).
  • Non-verbal communication: Non-verbal aspects of speech, such as tone of voice, pauses, gestures, and sounds like “um”, can reveal aspects of a speaker’s intentions, attitudes, and emotions.
  • Conversational codes: The interaction between people in a conversation, such as turn-taking, interruptions, and listener response, can reveal aspects of cultural conventions and social roles.
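Some of these levels lend themselves to crude automated first passes before close reading. The sketch below uses naive regular-expression heuristics for two grammar-level features (passive constructions and questions); the patterns are assumptions for illustration, and a serious grammatical analysis would use a syntactic parser:

```python
import re

def passive_candidates(text):
    """Naive heuristic: flag 'be' + word-ending-in-ed sequences.
    Misses irregular participles ('was made') and flags some
    non-passives; illustrative only."""
    return re.findall(
        r"\b(?:is|are|was|were|been|being)\s+\w+ed\b", text.lower()
    )

def question_count(text):
    """Count question marks, a crude proxy for interrogatives."""
    return text.count("?")

sample = "The dose was increased. The patient was informed. Who decided?"
print(passive_candidates(sample))  # ['was increased', 'was informed']
print(question_count(sample))      # 1
```

Counts like these only flag candidate passages; the interpretive work of relating them to intended meaning remains manual.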

Discourse analysis is a qualitative and interpretive method of analyzing texts (in contrast to more systematic methods like content analysis). You make interpretations based on both the details of the material itself and on contextual knowledge.

There are many different approaches and techniques you can use to conduct discourse analysis, but the steps below outline the basic structure you need to follow. Following these steps can help you avoid pitfalls of confirmation bias that can cloud your analysis.

Step 1: Define the research question and select the content of analysis

To do discourse analysis, you begin with a clearly defined research question. Once you have developed your question, select a range of material that is appropriate to answer it.

Discourse analysis is a method that can be applied both to large volumes of material and to smaller samples, depending on the aims and timescale of your research.

Step 2: Gather information and theory on the context

Next, you must establish the social and historical context in which the material was produced and intended to be received. Gather factual details of when and where the content was created, who the author is, who published it, and whom it was disseminated to.

As well as understanding the real-life context of the discourse, you can also conduct a literature review on the topic and construct a theoretical framework to guide your analysis.

Step 3: Analyze the content for themes and patterns

This step involves closely examining various elements of the material – such as words, sentences, paragraphs, and overall structure – and relating them to attributes, themes, and patterns relevant to your research question.

Step 4: Review your results and draw conclusions

Once you have assigned particular attributes to elements of the material, reflect on your results to examine the function and meaning of the language used. Here, you will consider your analysis in relation to the broader context that you established earlier to draw conclusions that answer your research question.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Cite this Scribbr article


Luo, A. (2023, June 22). Critical Discourse Analysis | Definition, Guide & Examples. Scribbr. Retrieved June 10, 2024, from https://www.scribbr.com/methodology/discourse-analysis/



DISCOURSE ANALYSIS

  • September 2015
  • In book: Issues in the study of language and literature (pp. 169–195)
  • Publisher: Ibadan: Kraft Books Limited
  • Ikenna Kamalu, University of Port Harcourt
  • Ayo Osisanwo, University of Ibadan




Discourse Analysis – Methods, Types and Examples


Definition:

Discourse Analysis is a method of studying how people use language in different situations to understand what they really mean and what messages they are sending. It helps us understand how language is used to create social relationships and cultural norms.

It examines language use in various forms of communication such as spoken, written, visual or multi-modal texts, and focuses on how language is used to construct social meaning and relationships, and how it reflects and reinforces power dynamics, ideologies, and cultural norms.

Types of Discourse Analysis

Some of the most common types of discourse analysis are:

Conversation Analysis

This type of discourse analysis focuses on analyzing the structure of talk and how participants in a conversation make meaning through their interaction. It is often used to study face-to-face interactions, such as interviews or everyday conversations.
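A first quantitative pass over a conversation-analytic transcript might count turns per speaker and overlap-marked utterances. The sketch below assumes a transcript represented as (speaker, utterance) pairs, with a leading “[” marking overlapping talk (a common CA transcription convention); the dialogue itself is invented for illustration:

```python
from collections import Counter

def turn_statistics(transcript):
    """Count turns per speaker and utterances marked as
    overlapping talk (leading '[')."""
    turns = Counter(speaker for speaker, _ in transcript)
    overlaps = sum(1 for _, utt in transcript if utt.startswith("["))
    return dict(turns), overlaps

# Invented doctor-patient exchange for illustration.
transcript = [
    ("DOC", "How have your sugars been?"),
    ("PAT", "Mostly fine, but in the mornings"),
    ("DOC", "[Before breakfast?"),
    ("PAT", "Yes, before breakfast they run high."),
]
turns, overlaps = turn_statistics(transcript)
print(turns)     # {'DOC': 2, 'PAT': 2}
print(overlaps)  # 1
```

Conversation analysts treat such counts only as a starting point; the sequential detail of how each turn responds to the last is where the analysis proper happens.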

Critical Discourse Analysis

This approach focuses on the ways in which language use reflects and reinforces power relations, social hierarchies, and ideologies. It is often used to analyze media texts or political speeches, with the aim of uncovering the hidden meanings and assumptions that are embedded in these texts.

Discursive Psychology

This type of discourse analysis focuses on the ways in which language use is related to psychological processes such as identity construction and attribution of motives. It is often used to study narratives or personal accounts, with the aim of understanding how individuals make sense of their experiences.

Multimodal Discourse Analysis

This approach focuses on analyzing not only language use, but also other modes of communication, such as images, gestures, and layout. It is often used to study digital or visual media, with the aim of understanding how different modes of communication work together to create meaning.

Corpus-based Discourse Analysis

This type of discourse analysis uses large collections of texts, or corpora, to analyze patterns of language use across different genres or contexts. It is often used to study language use in specific domains, such as academic writing or legal discourse.
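A toy version of a corpus-based comparison computes relative word frequencies in two corpora and compares them. The keyness function below is a simplified stand-in for the keyword statistics (such as log-likelihood or log-ratio) that corpus linguists actually use; the two miniature corpora are invented:

```python
import re
from collections import Counter

def relative_freqs(texts):
    """Relative frequency of each word across a list of texts."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def keyness(word, corpus_a, corpus_b, floor=1e-9):
    """Ratio of relative frequencies: values > 1 suggest the word
    is characteristic of corpus A relative to corpus B."""
    fa = relative_freqs(corpus_a).get(word, 0.0)
    fb = relative_freqs(corpus_b).get(word, 0.0)
    return fa / max(fb, floor)

legal = ["the claim is hereby denied", "we affirm the claim"]
casual = ["the weather is nice today", "the food here is nice"]
print(keyness("claim", legal, casual) > 1)  # True
print(keyness("nice", legal, casual) > 1)   # False
```

Real corpus studies work with millions of words and proper significance testing, but the underlying comparison of frequencies across genres is the same.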

Descriptive Discourse

This type of discourse analysis aims to describe the features and characteristics of language use, without making any value judgments or interpretations. It is often used in linguistic studies to describe grammatical structures or phonetic features of language.

Narrative Discourse

This approach focuses on analyzing the structure and content of stories or narratives, with the aim of understanding how they are constructed and how they shape our understanding of the world. It is often used to study personal narratives or cultural myths.

Expository Discourse

This type of discourse analysis is used to study texts that explain or describe a concept, process, or idea. It aims to understand how information is organized and presented in such texts and how it influences the reader’s understanding of the topic.

Argumentative Discourse

This approach focuses on analyzing texts that present an argument or attempt to persuade the reader or listener. It aims to understand how the argument is constructed, what strategies are used to persuade, and how the audience is likely to respond to the argument.

Discourse Analysis Conducting Guide

Here is a step-by-step guide for conducting discourse analysis:

  • Define the research question: What are you trying to understand about the language use in a particular context? What are the key concepts or themes that you want to explore?
  • Select the data: Decide on the type of data that you will analyze, such as written texts, spoken conversations, or media content. Consider the source of the data, such as news articles, interviews, or social media posts, and how this might affect your analysis.
  • Transcribe or collect the data: If you are analyzing spoken language, you will need to transcribe the data into written form. If you are using written texts, make sure that you have access to the full text and that it is in a format that can be easily analyzed.
  • Read and re-read the data: Read through the data carefully, paying attention to key themes, patterns, and discursive features. Take notes on what stands out to you and make preliminary observations about the language use.
  • Develop a coding scheme: Devise categories that will allow you to organize different types of language use. These might include metaphors, narratives, or persuasive strategies, depending on your research question.
  • Code the data: Use your coding scheme to analyze the data, coding different sections of text or spoken language according to the categories that you have developed. This can be a time-consuming process, so consider using software tools to assist with coding and analysis.
  • Analyze the data: Once you have coded the data, analyze it to identify patterns and themes that emerge. Look for similarities and differences across different parts of the data, and consider how different categories of language use are related to your research question.
  • Interpret the findings: Draw conclusions from your analysis and interpret the findings in relation to your research question. Consider how the language use in your data sheds light on broader cultural or social issues, and what implications it might have for understanding language use in other contexts.
  • Write up the results: Write up your findings in a clear and concise way, using examples from the data to support your arguments. Consider how your research contributes to the broader field of discourse analysis and what implications it might have for future research.
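The middle steps above (develop a coding scheme, code the data, tally patterns) can be sketched in Python. The codes and patterns here are hypothetical examples, not a validated scheme; in practice the scheme is developed iteratively from the data, and dedicated qualitative-coding software is usually used:

```python
import re
from collections import Counter

# Hypothetical coding scheme: each code maps to indicative regex
# patterns. Invented for illustration only.
CODING_SCHEME = {
    "metaphor": [r"\bwar on\b", r"\bbattle\b", r"\bfight\b"],
    "hedging": [r"\bperhaps\b", r"\bmight\b", r"\bpossibly\b"],
    "authority": [r"\bstudies show\b", r"\bexperts\b"],
}

def code_segments(segments):
    """Assign every matching code to each text segment and tally
    code frequencies across the data set."""
    coded, tally = [], Counter()
    for seg in segments:
        codes = [code for code, patterns in CODING_SCHEME.items()
                 if any(re.search(p, seg.lower()) for p in patterns)]
        coded.append((seg, codes))
        tally.update(codes)
    return coded, tally

segments = [
    "Experts agree we must fight rising costs.",
    "Perhaps the policy might work.",
]
coded, tally = code_segments(segments)
print(coded[0][1])  # ['metaphor', 'authority']
print(coded[1][1])  # ['hedging']
```

Automated pattern matching only speeds up the mechanical part of coding; deciding what the coded patterns mean in context remains the analyst's interpretive work.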

Applications of Discourse Analysis

Here are some of the key areas where discourse analysis is commonly used:

  • Political discourse: Discourse analysis can be used to analyze political speeches, debates, and media coverage of political events. By examining the language used in these contexts, researchers can gain insight into the political ideologies, values, and agendas that underpin different political positions.
  • Media analysis: Discourse analysis is frequently used to analyze media content, including news reports, television shows, and social media posts. By examining the language used in media content, researchers can understand how media narratives are constructed and how they influence public opinion.
  • Education: Discourse analysis can be used to examine classroom discourse, student-teacher interactions, and educational policies. By analyzing the language used in these contexts, researchers can gain insight into the social and cultural factors that shape educational outcomes.
  • Healthcare: Discourse analysis is used in healthcare to examine the language used by healthcare professionals and patients in medical consultations. This can help to identify communication barriers, cultural differences, and other factors that may impact the quality of healthcare.
  • Marketing and advertising: Discourse analysis can be used to analyze marketing and advertising messages, including the language used in product descriptions, slogans, and commercials. By examining these messages, researchers can gain insight into the cultural values and beliefs that underpin consumer behavior.

When to use Discourse Analysis

Discourse analysis is a valuable research methodology that can be used in a variety of contexts. Here are some situations where discourse analysis may be particularly useful:

  • When studying language use in a particular context: Discourse analysis can be used to examine how language is used in a specific context, such as political speeches, media coverage, or healthcare interactions. By analyzing language use in these contexts, researchers can gain insight into the social and cultural factors that shape communication.
  • When exploring the meaning of language: Discourse analysis can be used to examine how language is used to construct meaning and shape social reality. This can be particularly useful in fields such as sociology, anthropology, and cultural studies.
  • When examining power relations: Discourse analysis can be used to examine how language is used to reinforce or challenge power relations in society. By analyzing language use in contexts such as political discourse, media coverage, or workplace interactions, researchers can gain insight into how power is negotiated and maintained.
  • When conducting qualitative research: Discourse analysis can be used as a qualitative research method, allowing researchers to explore complex social phenomena in depth. By analyzing language use in a particular context, researchers can gain rich and nuanced insights into the social and cultural factors that shape communication.

Examples of Discourse Analysis

Here are some examples of discourse analysis in action:

  • A study of media coverage of climate change: This study analyzed media coverage of climate change to examine how language was used to construct the issue. The researchers found that media coverage tended to frame climate change as a matter of scientific debate rather than a pressing environmental issue, thereby undermining public support for action on climate change.
  • A study of political speeches: This study analyzed political speeches to examine how language was used to construct political identity. The researchers found that politicians used language strategically to construct themselves as trustworthy and competent leaders, while painting their opponents as untrustworthy and incompetent.
  • A study of medical consultations: This study analyzed medical consultations to examine how language was used to negotiate power and authority between doctors and patients. The researchers found that doctors used language to assert their authority and control over medical decisions, while patients used language to negotiate their own preferences and concerns.
  • A study of workplace interactions: This study analyzed workplace interactions to examine how language was used to construct social identity and maintain power relations. The researchers found that language was used to construct a hierarchy of power and status within the workplace, with those in positions of authority using language to assert their dominance over subordinates.

Purpose of Discourse Analysis

The purpose of discourse analysis is to examine the ways in which language is used to construct social meaning, relationships, and power relations. By analyzing language use in a systematic and rigorous way, discourse analysis can provide valuable insights into the social and cultural factors that shape communication and interaction.

The specific purposes of discourse analysis may vary depending on the research context, but some common goals include:

  • To understand how language constructs social reality: Discourse analysis can help researchers understand how language is used to construct meaning and shape social reality. By analyzing language use in a particular context, researchers can gain insight into the cultural and social factors that shape communication.
  • To identify power relations: Discourse analysis can be used to examine how language use reinforces or challenges power relations in society. By analyzing language use in contexts such as political discourse, media coverage, or workplace interactions, researchers can gain insight into how power is negotiated and maintained.
  • To explore social and cultural norms: Discourse analysis can help researchers understand how social and cultural norms are constructed and maintained through language use. By analyzing language use in different contexts, researchers can gain insight into how social and cultural norms are reproduced and challenged.
  • To provide insights for social change: Discourse analysis can provide insights that can be used to promote social change. By identifying problematic language use or power imbalances, researchers can provide insights that can be used to challenge social norms and promote more equitable and inclusive communication.

Characteristics of Discourse Analysis

Here are some key characteristics of discourse analysis:

  • Focus on language use: Discourse analysis is centered on language use and how it constructs social meaning, relationships, and power relations.
  • Multidisciplinary approach: Discourse analysis draws on theories and methodologies from a range of disciplines, including linguistics, anthropology, sociology, and psychology.
  • Systematic and rigorous methodology: Discourse analysis employs a systematic and rigorous methodology, often involving transcription and coding of language data, in order to identify patterns and themes in language use.
  • Contextual analysis: Discourse analysis emphasizes the importance of context in shaping language use, and takes into account the social and cultural factors that shape communication.
  • Focus on power relations: Discourse analysis often examines power relations and how language use reinforces or challenges power imbalances in society.
  • Interpretive approach: Discourse analysis is an interpretive approach, meaning that it seeks to understand the meaning and significance of language use from the perspective of the participants in a particular discourse.
  • Emphasis on reflexivity: Discourse analysis emphasizes the importance of reflexivity, or self-awareness, in the research process. Researchers are encouraged to reflect on their own positionality and how it may shape their interpretation of language use.

Advantages of Discourse Analysis

Discourse analysis has several advantages as a methodological approach. Here are some of the main advantages:

  • Provides a detailed understanding of language use: Discourse analysis allows for a detailed and nuanced understanding of language use in specific social contexts. It enables researchers to identify patterns and themes in language use, and to understand how language constructs social reality.
  • Emphasizes the importance of context: Discourse analysis emphasizes the importance of context in shaping language use. By taking into account the social and cultural factors that shape communication, it provides a fuller understanding of language use than decontextualized approaches.
  • Allows for an examination of power relations: Discourse analysis enables researchers to examine power relations and how language use reinforces or challenges power imbalances in society. By identifying problematic language use, discourse analysis can contribute to efforts to promote social justice and equality.
  • Provides insights for social change: Identifying problematic language use and power imbalances generates evidence that can be used to challenge entrenched norms and promote more equitable and inclusive communication.
  • Multidisciplinary approach: Discourse analysis draws on theories and methodologies from a range of disciplines, including linguistics, anthropology, sociology, and psychology. This multidisciplinary approach allows for a more holistic understanding of language use in social contexts.

Limitations of Discourse Analysis

Discourse analysis also has several limitations:

  • Time-consuming and resource-intensive: Collecting and transcribing language data is labor-intensive, and analyzing it demands careful attention to detail and a significant investment of time and resources.
  • Limited generalizability: Discourse analysis typically focuses on a particular social context or community, so its findings may not transfer easily to other contexts or populations; the insights gained may have limited applicability beyond the specific setting studied.
  • Interpretive nature: Discourse analysis is an interpretive approach, meaning that it relies on the interpretation of the researcher to identify patterns and themes in language use. This subjectivity can be a limitation, as different researchers may interpret language data differently.
  • Limited quantitative analysis: Discourse analysis tends to focus on qualitative analysis of language data, which can limit the ability to draw statistical conclusions or make quantitative comparisons across different language uses or contexts.
  • Ethical considerations: Discourse analysis may involve the collection and analysis of sensitive language data, such as language related to trauma or marginalization. Researchers must carefully consider the ethical implications of collecting and analyzing this type of data, and ensure that the privacy and confidentiality of participants is protected.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare

  • Original Research/Scholarship
  • Open access
  • Published: 04 June 2024
  • Volume 30, article number 24 (2024)


  • Laura Arbelaez Ossa, ORCID: orcid.org/0000-0002-8303-8789 (1)
  • Stephen R. Milford, ORCID: orcid.org/0000-0002-7325-9940 (1)
  • Michael Rost, ORCID: orcid.org/0000-0001-6537-9793 (1)
  • Anja K. Leist, ORCID: orcid.org/0000-0002-5074-5209 (2)
  • David M. Shaw, ORCID: orcid.org/0000-0001-8180-6927 (1, 3)
  • Bernice S. Elger, ORCID: orcid.org/0000-0002-4249-7399 (1, 4)


While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI’s beneficial outputs and concerns about the challenges of human–computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the ideas underlying AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.


Introduction

The increasing number of Artificial Intelligence (AI) ethics guidelines reflects the growing recognition of AI’s potential benefits and risks. As AI technology advances, there is increasing enthusiasm for AI, especially machine learning (ML) techniques, because of their capacity to analyze already available health data for preventive, diagnostic, or treatment support (Leist et al., 2022). However, the assumption that AI applications might become more prevalent in society has raised concerns over the ethical implications of their use. Common questions include what is necessary to trust AI, respect people's autonomy, and avoid biases and discrimination (Floridi et al., 2018; Murphy et al., 2021). AI guidelines aim to shape our approach to AI for the benefit of society through principles, statements, rules, or recommendations. As such, academic, (non)governmental, and other institutions worldwide have published guidelines to steer AI development and those working with it.

Reviews of generic AI guidelines (AI used across settings without specific healthcare focus) have sought to map and examine the common themes and areas of focus they address (Bélisle-Pipon et al., 2022 ; Fjeld et al., 2020 ; Fukuda-Parr & Gibbons, 2021 ; Jobin et al., 2019 ; Ryan & Stahl, 2020 ). Some concerns generic AI guidelines address include privacy, bias, transparency, autonomy, explainability, well-being promotion, and responsibility. These reviews provide a helpful overview of the state of AI ethics guidelines to understand critical issues and challenges related to AI ethics. Although generic AI guidelines could apply across different disciplines, some guidelines specifically address the use of AI in healthcare. These guidelines strongly emphasize considering the ethical implications of using AI in medical decision-making and other healthcare applications. A prominent example is the World Health Organization (WHO) publication on "ethics and governance of artificial intelligence for health" (World Health Organization, 2021 ).

The field of AI in healthcare is still relatively new, and there is an ongoing debate about the best approaches to ensuring the ethical use of AI. Notably, the use of AI in healthcare raises specific ethical issues related to beneficence and respect for autonomy, as patients and communities require assurance that introducing AI will not jeopardize their rights. Beyond challenges inherent to AI, decisions taken in healthcare are frequently intertwined with high-risk scenarios and highly sensitive data. Health is central to individual well-being, and doctors must support, safeguard, and advocate for patients. For example, an essential pillar of medical ethics, shared decision-making between patients and their doctors, could be affected by the introduction of AI, which poses a potential threat to patients' and doctors' autonomy if it does not account for their rights and preferences (Abbasgholizadeh Rahimi et al., 2022).

Guidelines as a form of written language can be analyzed to identify the links between textual communication and our societal ideas. Discourse (i.e., a group of ideas or patterned ways of thinking in textual form) not only reflects but reproduces our social realities, with their dominant beliefs, power structures, and ideologies (Lupton, 1992). Discourse analysis (DA) as a qualitative methodology can analyze the contextual structure surrounding communication, including the context in which it takes place and how it shapes a common sociocultural understanding (Fairclough, 2022; Lupton, 1992; Yazdannik et al., 2017). From that perspective, the discourse in ethical guidelines for AI can significantly shape the healthcare community's understanding of and approach to ethics. The discourse of guidelines therefore requires particular attention, because it is a powerful driver for discussing and (re-)orienting AI ethics. For example, guidelines can base their ideals on practical (e.g., efficiency), technical (e.g., performance), or ethical (e.g., beneficence) frameworks, thus helping to legitimize certain foundations, concepts, and notions in AI ethics for healthcare. In this way, AI guidelines can establish a common framework for thinking about and addressing ethical issues in AI. It is therefore essential to examine the understanding of ethics in AI guidelines and to ask critically whether it meets the moral requirements of healthcare settings.

This paper analyzes how guidelines construct, articulate, and frame AI ethics for healthcare. The aim is to look beyond what is written and critically interpret these guidelines' underlying ideologies (Cheek, 2004; Lupton, 1992; Yazdannik et al., 2017). As such, we are interested in how the guidelines shape AI ethics in healthcare, including whose perspectives are considered when determining ethical issues in AI and the implications for ethics, AI, and healthcare stakeholders.

Previous work has synthesized generic AI guidelines through thematic or content summaries (Fjeld et al., 2020; Jobin et al., 2019; Ryan & Stahl, 2020). Policy and social researchers have used Critical Discourse Analysis (CDA) to understand public health documents, although this methodology had yet to be applied to AI-guiding documents. The usability of CDA has, however, been demonstrated in other domains, for example, to examine how health policy documents constructed chronically ill patients' roles or how inclusion policies framed health inequalities (Tweed et al., 2022; Walton & Lazzaro-Salazar, 2016). Other researchers have used CDA to analyze the discourse surrounding AI in social media and the academic discussion on artificial general intelligence (Graham, 2022; Mao & Shi-Kupfer, 2021; Singler, 2020). Given the importance of written AI guidelines for understanding AI ethics for healthcare, we undertook a CDA of AI guidelines, which allows an in-depth interpretation of the construction, articulation, and framing of AI ethics for healthcare. We therefore aimed to analyze the discourse in AI guidelines rather than systematically map their content and themes.

Identifying Relevant Studies

First, given the absence of a unified database for AI healthcare guidelines, we reviewed all the documents inventoried by previous researchers for potential inclusion. Additionally, we reviewed database initiatives that track AI policies: Nesta’s “AI governance database”, Algorithm Watch’s “AI Ethics Guidelines Global Inventory”, OECD.AI’s “policy observatory”, and AI Ethics Lab’s “Toolbox: Dynamics of AI Principles”. We used purposive sampling to find documents written by influential institutions such as governments, intergovernmental organizations, or non-profit organizations. Second, Google Search was used as a general search engine because AI guidelines are not academic publications and thus fall under the "gray literature" category. The first author searched and screened for AI guidelines to select a final set.

Inclusion and Exclusion Criteria

For this review, we consider ‘AI guidelines’ to be documents that provide ethical guidance, including policies, guidelines, principles, or position papers introduced by governmental, inter-governmental, or professional organizations. Including these types of AI guidelines allows us to analyze how influential institutions construct, articulate, and frame AI ethics in healthcare. To be included, guidelines must provide normative guidance for AI in healthcare: principles, tenets, recommendations, propositions, or tangible steps for developing or implementing AI in healthcare.

We excluded documents that provided observations regarding advances in AI for a particular year. Additionally, we excluded “internal” company principles due to their limited intended audience, as they are primarily created for the respective institution. We also excluded documents solely focusing on one disease application or a specific medical specialty because these might not be generalizable to other healthcare contexts. We finalized the search in August 2022. The first author screened 179 document titles. We excluded 169 documents because they either did not qualify as guidelines or were outside the scope of this review (i.e., documents that were not about AI or were unrelated to healthcare). A summary of reasons for exclusion is provided in Supplementary materials 2.

We departed from the analytical positivist approach of a systematic literature review. DA is a diverse methodology for analyzing language in use and how discourse creates a shared understanding of a topic. DA goes beyond the content of words and interprets how a topic is constructed, represented, and reflected within its context (Fairclough, 2013, 2022). In particular, we used CDA because language expresses and shapes social and political relationships, and its analysis can uncover underlying ideologies and power dynamics.

We transferred the guideline texts to qualitative data management software (MAXQDA) to carry out the data analysis. We analyzed the guidelines in three phases. First, the first author read the included guidelines in detail and extracted high-level information. During data familiarization, the authors discussed preliminary ideas on trends in the guidelines and created a list of specific questions considered relevant to answering the main research question. In the second phase, the first author analyzed the guidelines by creating high-level analytical themes that organize the material into the following discourse strands: how do guidelines (1) discuss the ethical motivation to develop and implement AI (e.g., what is the justification and primary goal of guidelines); (2) construct ethical AI (e.g., whether guidelines use principles); and (3) assign the roles of different stakeholders. Third, all authors tested and critically interrogated the analytical themes and the organization of results. The authors reached consensus about the structure and characteristics of the discourses. This process eventually resulted in the description of three discourses.

See Fig. 1: Flow diagram (PRISMA) (Page et al., 2021).

Applying the selection criteria led to eight guidelines ultimately being included in this analysis (Supplementary materials 1). Most of them were published in 2021. Intergovernmental organizations published two documents. All other guidelines came from high-income countries (the United Kingdom, the United States of America, Canada, Singapore, and the United Arab Emirates) (Table  1 ). The length of the documents varies widely, with G1 being the longest (114 pages) and G5 the shortest (two pages). Guidelines G3, G5, G6, and G7 focus on (general) good practice or good AI. Guidelines G2, G4, and G8 are generally intended to guide AI in healthcare but do not specifically focus on ethical AI. Guideline G1 focuses on ethics and governance.

The guidelines address AI developers (G1, G8) but also describe them as innovators (G3, G6) and manufacturers (G4, G7, G8). Other addressees described are policymakers (G1, G2, G5), healthcare professionals (G1, G4), and healthcare institutions (G1, G4). "AI actors" describes all stakeholders in the AI system lifecycle (G2.1 p. 7). Guideline G8 uses an umbrella group called 'implementers' that could include healthcare professionals and institutions. To this extent, G8 acknowledges that the "groups are not mutually exclusive" (G8 p. 8), which creates some uncertainties in interpreting guidelines for individual stakeholders. The guidelines sometimes discuss AI recommendations without specifying a responsible party. For example, G4 mentions the need for verifiable and explainable AI without indicating who should ensure this (G4 p. 8). Guideline G5 mentions a human in the loop without describing anyone specifically.

Lack of a Standard Definition of AI

Most guidelines focus their discussion on AI (G1, G2, G4, G6, G7). Four guidelines make a distinction: G3 describes “digital and data-driven technologies” that include AI, G5 focuses only on machine learning (ML), and G6 and G7 combine both as AI/ML-enabled medical devices (Supplementary materials 3 in Table 1).

The guidelines lack a standard definition of AI, leading to different interpretations between data-driven programs (such as prediction or diagnosis) and a potential program that resembles a more general state of intelligence (human-like cognition). When the object of regulation is still a topic of debate, the result may be regulating entirely different or not-yet-existing systems, including Artificial General Intelligence. Consequently, these guidelines could evoke an understanding of AI driven by the potential human-like capacities of the systems rather than a more measurable technical definition. Informing the definition of AI with such futuristic perceptions may contribute to the mystification of AI and increase fears regarding its application. Fears can result in learned helplessness, in which people disengage from AI, diminish their participation in discussions, and are relegated to passive acceptance (Lindebaum et al., 2020).

Discourse 1: AI is Unavoidable and Desirable

All guidelines agree that AI will be an agent of change in medicine. Discussions on AI are fundamentally based on its potential, making these AI guidelines future-looking, prospective, and, to some extent, speculative. Most guidelines describe the benefits and risks of AI techniques (G1, G2, G3, G5, G6, G7, G8). For example, G2 states that AI in healthcare has "profound potential, but real risks" (G2 p. 7). Guideline G5 mentions that AI and ML “have the potential to transform health care […], but they also present unique considerations due to their complexity and the iterative and data-driven nature” (G5 p. 1). In doing so, guidelines frequently juxtapose opportunities and threats while justifying the need for considerations to avoid harm. Therefore, guidelines tend to describe their primary motivation as avoiding harm while harnessing the promised potential of AI technologies (Supplementary materials 3 in Table 2). These statements are pragmatic formulations derived from the (unspoken) assumption that AI will be implemented and that healthcare needs to make the best of it. However, this type of discourse entails a matrix of beliefs: AI is an unavoidable development and undeniably useful.

Guidelines fail to caution sufficiently against techno-cultural ideals and the hype surrounding technological developments. The pressure to adopt innovation based on enthusiasm and economic or technical forces could undermine the debate about demonstrating that AI improves healthcare quality (Dixon-Woods et al., 2011). Guideline G1 (and G2, to some extent) questions whether AI should be used at all and notes the risk of overestimating the benefits of AI or dismissing its risks (G1 p. 31–33). None of the guidelines was sufficiently critical of the base assumption that AI is an agent of benefits and progress in medicine, even though there is no evidence yet of this change because most AI systems are not currently used in daily clinical practice. For example, one guideline states that it “recognizes that AI holds great promise for the practice of public health and medicine” (G1 p. xi). Guideline G6 states that “the use of AI/ML […] presents a significant potential benefit to patients and health systems in a wide range of clinical applications […]” (G6 p. 4). In that sense, there is an unspoken but present assumption that AI is mainly—at least potentially—beneficial and that, if used correctly, AI will change life and medicine. In the guidelines, the desire to harness or guide the potential of AI indicates that this innovation is at least an acceptable reality or a potentially desirable development. This discourse might echo sentiments from the technology industry, where innovation is the ultimate goal and something new might be better just because it is new. However, a strong pro-innovation stance could lead to risk-taking or scientifically unfounded experimentation in the name of innovation and change. Slota et al. rightly pointed out this challenge, critically questioning the assumption that innovation is positive per se and can be unquestioningly accepted, and suggesting that innovation needs to abide by prerequisites, such as reliability measurements, to be considered positive (Slota et al., 2021).

When guidelines base their discussion primarily on AI’s potential, AI might acquire a special status compared with other healthcare innovations, especially because AI’s potential became a justification for its support and development. For example, drug development guidelines require manufacturers to establish a benefit/risk assessment based on evidence of a drug’s safety and effectiveness in improving, changing, or removing diseases. Guidelines are cautious even about unproven interventions (with no evidence available through clinical trials), emphasizing that potential benefits must be substantial and that there should be no other alternatives (EMA, 2018a; FDA, 2019). Giving AI special treatment out of a desire to realize its potential risks prompting technology companies to take advantage of their expertise and unduly influence governmental decisions regarding AI’s regulation and practices. For example, in contact-tracing technology for Covid-19 (although not always AI-enabled), government concerns over data privacy allowed technology companies to gain influence because of their expertise in data privacy, inadvertently permitting them to influence how this technology was developed (Sharon, 2021). As technology develops, many small decisions need to be taken, which, when combined, can significantly impact how a policy is implemented and its practical interpretation. In AI guidelines, industry representatives are often involved and may have an imbalance of influence over the development of these guidelines compared with other directly impacted stakeholders such as patients (Bélisle-Pipon et al., 2022).

Discourse 2: The Necessity of Principles to Guide AI

Despite using different terms, having different aims, and addressing different stakeholder groups, the guidelines agree that AI needs principles to be guided. However, there is wide variation in the usage and conceptualization of these principles, with most documents not clarifying the theoretical basis for including them. Only G1 provides an account of its definition of principles, which references bioethics and human rights as the theoretical framework; G6 and G7 cross-reference the definition and construction used in G1. There is no common assumption about the conceptual framework behind using these principles, leaving their interpretation and operationalization up to the reader's discretion (Supplementary materials 3 in Table 3).

Positively, guidelines aim to help AI be developed within the acceptable limits of society and human ideals, including safety protocols. To this end, the guidelines see principles as a viable, feasible, and acceptable solution for guiding AI. This cultural understanding may have originated in the influential science fiction work of Isaac Asimov, in which robots must follow hardwired social and moral norms (do no harm to humans, obey humans, and protect themselves) (Asimov, 1950; Jung, 2018). Asimov’s laws were the author’s answer to finding protection against the potential malicious consequences of technology, though he also acknowledged in his work the potential for conflict between these laws. The use of principles in the guidelines comes from a similar perspective, whereby there are concerns about the potential negative consequences of AI.

Guidelines fluctuate between discussions of important principles and of how to apply them to develop acceptable AI. For example, G6 and G7 discuss aspects of AI such as suitability and robustness while adding ethical aspects such as inclusiveness, fairness, or risks of health discrimination. Guideline G1 starts with ethical principles and continues with recommendations on AI’s development, while G8 includes fairness in its guiding principles and recommendations for data representativeness. Guideline G3 requests manufacturers to ensure “the product is easy to use and accessible to all users” and to “ensure that the product is clinically safe to use”, which are both operationalizations (G3 p. 7, 9). The same guideline (G3) also asks manufacturers for ethical behavior and to “be fair, transparent and accountable about what data is being used” (G3 p. 12). Although more technical, several guidelines (G5, G6, G7, G8) do not provide measurable estimations of AI’s behavior or of what is acceptable. For example, one states that “to promote technical robustness, manufacturers […] should test performance by comparing it to existing benchmarks, ensuring that the results are reproducible […] and reported using standard performance metrics” (G6 p. 13). However, there is no mention of what would be acceptable performance or how to select acceptable benchmarks.

Most guidelines emphasize "non-maleficence" (G1, G2, G3, G4, G6, G7, G8). However, the emphasis on producing no harm could create a paradoxical interpretation in which ‘no harm’ becomes the aim. For example, G1 discusses its principle to “promote human well-being, safety and public interest” by stating that “AI technologies should not harm people. They should satisfy regulatory requirements for safety, accuracy, and efficacy […] to assess whether they have any detrimental impact […]. Preventing harm requires that use of AI technologies does not result in any mental or physical harm” (G1 p. 26). These prevention-framed messages emphasize behavior that avoids possible negative consequences; they do not highlight what benefits could justify the use of AI. Moreover, avoiding all harm might be an unrealistic expectation for AI. For example, an AI robot that performs surgery needs to produce an injury (a surgical incision) to perform a procedure. If the principles aim to avoid all physical harm, would a surgical AI be acceptable? Within this discourse, it is difficult to say. Moreover, patients' risk acceptance is not a dichotomous ‘all or nothing,’ as most patients understand that risk is a spectrum of likelihood. For example, patients with psoriasis were willing to accept a risk of serious infection between 20 and 59% as a side effect of their treatment, depending on their disease severity (Kauf et al., 2015). There are nuances in what is acceptable to healthcare stakeholders, and creating principles—although appealing—might not meet healthcare needs. Hutler et al. use a similar example of a surgical robot to argue that it is not as simple as “training” robots to avoid harm, and that challenges exist in conceptualizing what is harmful and what should be morally allowed when designing robots (Hutler et al., 2023).

Nearly all guidelines consider transparency or explainability essential for ethical, good, or responsible AI (G1, G2, G4, G5, G6, G7, G8). However, explainability is a debated concept without consensus on its importance or meaning (Mittelstadt et al., 2019). Guidelines often see transparency as an enabler of ethical practices by rendering AI’s processes visible and accountable (it is unclear whether the AI or the people working with it are to be held accountable). However, there is no unified definition of, or agreement about, what and when AI is transparent. Considering an explainable AI equal to an ethical AI might be a fig leaf whereby AI developers cover methodological shortfalls by providing end-users with a false sense of understanding (Starke et al., 2022). By contrast, when these principles aim to provide a basis for technical assurance, they should be described in technically feasible and operationalizable terms. In their current form, guideline principles seem best followed as a thought experiment that re-analyzes the expectations for AI rather than as a static set of rules for AI’s development or ethical behavior.

Discourse 3: The Primacy of Trust

Guidelines frame trust, as in ‘ trustworthy AI ’, as the answer to overcoming public doubt. While well-performing AI might build trust, when the center of the discussion is on trustworthy AI, there is a shift from performance expectations (quality) to trust. Reading statements within the guidelines in which trust is central gives one the impression that trust matters more than AI's usability, feasibility, or performance. For example, G1 acknowledges that " trust is key to facilitating the adoption of AI in medicine ." (G1 p. 48); G2 discusses entirely trustworthy AI, and G6-G7 repeatedly discusses trustworthy innovation. Guideline G3 mentions that achieving algorithm transparency can “ build trust in users and enable better adoption and uptake ” (G3 p. 16). Potentially, these statements implicitly apply trustworthiness as a quality seal for good AI, although trust and good are slippery concepts and do not equate to one another. For example, a guideline mentions that “ discussions are crucial to guide the development and use of trustworthy AI for the wider good” (G2 p. 6). The guideline G3 states, “we must approach the adoption of these promising technologies responsibly and in a way that is conducive to public trust, ” (G3 p. 5). Some guidelines consider the lack of trust to impede the usage of data. For example, a guideline mentions that “lack of trust […], in how data are used and protected is a major impediment to data use,and sharing.” (G2 p. 16). Others equate trust as an impediment to the development of AI itself; for example, mentioning “ whether AI can advance […] depends on numerous factors beyond the state of AI science and on the trust of providers, patients, and health-care professionals ” (G1 p. 15). 
These arguments frame trust as a commodity (to be measured, managed, or acquired) for the benefit of innovation or technical interests, instead of focusing on the preconditions for acceptable AI, such as technical robustness, proven effectiveness, and protection frameworks in case of errors (Krüger & Wilson, 2022).

When guidelines describe trust as a means to further innovation, they may slip into the role of advocates for the technology, especially when they motivate or suggest that trust in AI is crucial. For example, one guideline "recognizes that ethics guidance […] is critical to build trust in these technologies to guard against negative or erosive effects and to avoid the proliferation of contradictory guidelines" (G1 p. 3). Guideline G8 states that "with the increasing use of healthcare AI […], the intent of the [guideline] is to improve clinical and public trust in the technology by providing a set of recommendations to encourage the safe development and implementation […]" (G8 p. 5). This discourse indicates that (1) public trust in AI matters, and (2) there may be concern that the public does not trust AI. The role of healthcare stakeholders, especially patients, is thereby narrowed to the expectation of acquiring their trust and to their position of vulnerability in healthcare.

Patients’ roles are discussed in relation to data protection, safety assurance, and as subjects who must trust AI. The guidelines make only cursory mention of "patient-centricity" and of the importance of patients in AI design. Guideline G1 mentions the importance of patients and their role in ensuring "human warranty"; G3 mentions that patients need assurance; G4 mentions patients as part of its potential audience. Although these guidelines touch on other situations requiring patients' input, they do not give patients an active voice. Most guidelines focus on informing patients about AI (G1, G3, G6, G7, G8) and about the use of their data. Guidelines discuss patients as subjects worthy of protection due to their vulnerability in healthcare but limit their role to that of passive bystanders (Table 2), tending to treat patients as mere data subjects. While G1, G5, and G8 mention a citizen participation mechanism, welcoming feedback through a public docket or direct contact, this feedback is only collected after the first iteration of the guidelines. None of the guidelines is written specifically for patients, by patients, or in collaboration with patients, even though the guidelines advocate for including patients in AI’s design. In generic AI ethics guidelines, researchers have observed that a lack of stakeholder engagement is a prevalent issue, with less than 6% including citizen participation (Bélisle-Pipon et al., 2022). Most guidelines do not mention allowing patients to decide if or when to use AI. Uniquely among the guidelines, G8 refers to patients' ability to decide whether to continue using AI or to receive care from a clinician instead (G8 p. 33). Another guideline, for example, only allows people "to opt out of their confidential patient information being used for purposes beyond their individual care and treatment" (G3 p. 13).

Discussion

Our analysis of guidelines for AI in healthcare identified a lack of a standard definition of AI and three main discourses: (1) AI is a desirable and unavoidable development; (2) principles are the solution to guiding AI; and (3) trust has a central role. Importantly for the intended audience of these documents (mainly software developers, but also innovators and manufacturers), the discourses were largely concerned with AI applications that may become available in an undefined future. The guidelines' discourses cannot be taken in isolation, as they reference and influence each other to some extent. For example, G1 references the definitions used in G2, and G6-G7 reference the principles in G1. In that sense, certain ideas may be reproduced that do not exclusively represent the vision of the publishing institution. While acknowledging this possibility, the discourses taken together seem, in many instances, to be shaped by broader societal discourses, such as the technology industry's optimistic and innovation-driven ideals. In a review of techno-optimism, Danaher concludes that, while common in industry and policy, strong forms of techno-optimism may be unwarranted without further analysis and justification (Danaher, 2022). However, the optimistic assessment of AI's qualities and faculties is well established in other policy documents for generic AI applications. In a discourse analysis published after the completion of this paper, researchers reviewed policies from China, the United States, France, and Germany that also established AI as inevitable and framed an interdependence between technology and societal good, creating a powerful rhetoric "that sheds pivotal attention and necessity to AI, lifting it into a sublime aura of a savior" (Bareis & Katzenbach, 2022).
In the broader European policy context (albeit also not healthcare specific), researchers found that AI is likewise represented as a "transformational force, either with redeeming of "salvific" qualities drawing from techno-solutionist discourse, or through mystified lens with allusion to dystopian narratives" (Gonzalez Torres et al., 2023). Our results demonstrate that similar discourses are built into the AI guidelines for healthcare.

While the experts and institutions contributing to the guidelines have made a commendable effort to stay abreast of AI innovation, the guidelines are undoubtedly a work in progress. In particular, the discourses show a tension between a pro-growth stance (AI as medical progress) and the need for caution (guidance, principles, trust, and ethics). For example, technical performance metrics, such as achieving the highest accuracy in prediction or classification, can conflict with ethical performance, which aims to avoid decisions based on sensitive attributes or their proxies. This problem is already part of the discussion around non-AI clinical decision algorithms, where race has been (wrongly) used to adjust risk assessments, for example of kidney function (Vyas et al., 2020). For the current AI discussion, it is unclear how to reconcile both views, and whether we can or should. Commitments to ensuring that AI is fair or respects human dignity might not be specific enough to be action-guiding or operationalizable; conversely, focusing solely on technical measurements may fail to meet ethical requirements. Cybersecurity and data protection are often conflated with respect for autonomy or non-maleficence, potentially oversimplifying the interpretation and applicability of these ethical values. Ethically, respect for autonomy is associated with patients' right to decide if, when, and how to receive healthcare. Operationalizing respect for autonomy would therefore include a discussion of patients' consent to the use of AI, including their preferences, and not only consent about data. To an extent, AI ethics might fail to uphold its boundaries, especially against the techno-optimism and techno-solutionism driving AI.

Most guidelines do not address the sociotechnical context of those involved in AI. The most common addressees (developers, innovators, and manufacturers) might need a more comprehensive grounding in ethical concepts during their training, or support afterward; indeed, some ethical statements in these guidelines are meaningless without proper ethical acculturation. For example, ethics education in computer science degrees in Europe is often a standalone subject with limited hours (Stavrakakis et al., 2022). The discourse often addresses stakeholders' responsibilities (using terms such as 'should' or 'shall'), but there is limited engagement in defining rights. For example, what are the rights of end-users? The rights may be implicit, but if the aim is to promote the active engagement of non-technology stakeholders in the ethical development of AI, those stakeholders should be made aware of their rights and educated about their options.

As an overarching finding, we identified that AI guidelines switch between technical and ethical expectations, concepts, and notions. Guidance in other domains is more precise about its aim and intended usage. For example, guidance for medical devices for cervical cancer includes quality management, standards, and operational considerations (WHO, 2020). As another example, good manufacturing practices describe the minimum standards pharmaceutical manufacturers must meet in their production processes (EMA, 2018b; European Commission, 2003; WHO, 2014). Quality-by-design is an approach to ensuring the quality of medicines by "employing statistical, analytical and risk-management methodology in the design, development, and manufacturing of medicines" (EMA, 2018c). Finally, Good Clinical Research Practice (GCP) principles are descriptive and focus on making research scientifically sound and justifiable (WHO, 2005). This lack of precision could be one reason for the backlash against using ethics as a framework to inform AI guidelines. Some academics have criticized AI ethics as toothless, useless, or vague (Fukuda-Parr & Gibbons, 2021; Héder, 2020; Heilinger, 2022; Munn, 2022). Critics have argued that AI ethics guidelines do not offer robust strategies to protect human rights and fail to emphasize accountability, participation, and remedy as protection mechanisms for people (Fukuda-Parr & Gibbons, 2021). Others have criticized AI ethics in its current form for the difficulty of implementing moral ideals in technological practices and the lack of consensus on ethical principles for AI (Munn, 2022).

The criticism of AI ethics might stem from a misconception of the role of ethics and of the way guidelines are constructed, articulated, and framed for healthcare. Framing a guideline as a single document that covers all AI, guides all scenarios, and addresses all stakeholders is over-ambitious. Compared with guidelines in other medical areas, principles for AI (autonomy, transparency, non-maleficence, fairness, trust, and responsibility) tend to remain abstract, hindering the value of AI ethics and its potential application (Zhou & Chen, 2022). For example, ethics-by-design in AI has taken the place that quality-by-design holds in pharmaceutical development. Ethics-by-design aims to make people consider ethical concerns and requirements such as respect for human agency, privacy and data governance, transparency, fairness, and individual, social, and environmental well-being (European Commission, 2021). However, ethics-by-design is not as operable as quality-by-design. When the goal is to operationalize ethics, AI guidelines may lack qualitative and quantitative suggestions for validating when and how the proposed principles are achieved and respected (Zhou & Chen, 2022). This limits the contribution of AI ethics and potentially legitimizes content-thin ethics that are easy, or at least appear easy, to follow. In that sense, the criticism of ethical guidelines does not directly signal a failure of ethics but a potential spillover across theoretical boundaries and aims. In the worst case, these guidelines can delay effective legislation. Guidelines can be used for ethics-washing, where it becomes easier to appear ethical than to take ethical action, especially if they rely on forms of self-regulation, carry no legal consequences, or remain abstract or general in content (Wagner, 2018). AI actors could use superficial recommendations as a red herring, resulting in guidelines that are widely ignored or only superficially followed because they lack operational consequences.

Limitations

To our knowledge, this is the first comprehensive review of healthcare AI guidelines (from governments or institutions) from an ethical perspective carried out by a multidisciplinary team. Although our team spans various disciplines (bioethics, philosophy, medicine, public health, theology, and psychology), our backgrounds have certainly informed our research and influenced our analysis. To mitigate this, we reflected on our positionality and analyzed the guidelines in a nonlinear manner that forced us to continuously contest our assumptions. Given the continuous development of AI guidelines, the vast nature of AI, and our available resources, we note several limitations. We did not aim to conduct a systematic review but to critically examine the widely available and influential guidelines worldwide. Some relevant documents might nonetheless have been excluded because they are hard to locate online or unavailable in the public domain. Limiting the analysis to English documents implied some linguistic exclusions and might limit a broad geographical interpretation. The search ended in the first half of 2022, which may be too early, as most of the included guidelines were published from 2021 onwards. For example, the WHO outlined considerations for regulating artificial intelligence for health in November 2023, which indicates that other guidelines may have become available since the completion of this paper in February 2023. At least two research teams have recently published discourse analyses of AI policies, albeit not healthcare-specific (Bareis & Katzenbach, 2022; Gonzalez Torres et al., 2023). Searching gray literature is challenging and could bias inclusion toward documents that contain key search terms in their titles. We could not include guidelines from Latin America, Central Asia, or Africa, as none of the available guidelines fulfilled the inclusion criteria (domain-specific guidelines for healthcare).
Previous researchers have acknowledged this limitation, having also been unable to analyze guidelines from those regions (Jobin et al., 2019). However, we noticed that initiatives are starting to emerge for the general governance of AI, such as national strategies (Kenya) and data-focused AI guidelines in several Latin American countries (Gaffley et al., 2022; TMG, 2020). Given the nature of CDA as a qualitative research method, our results cannot be generalized to guidelines not included in this study.

Conclusions

While AI systems may be required to adhere to existing legal frameworks, it may be necessary to modify or augment these frameworks to account for the unique considerations posed by AI. These guidelines will inform other forms of regulation, and it is vital to understand what they establish through their discourse (Larsson, 2020). It is essential that guidelines clarify their intentions and remain, as far as possible, immune to undue influence from the technology industry. Currently, guidelines tend to be over-enthusiastic about the capacities of the technology and the possibilities for change. First, AI is a broad concept, and guiding the development of something so general is challenging. Second, it is dangerous to consider everything through the lens of potential (benefits and risks): like the technology itself, AI ethics can fall victim to hype and lose credibility. Third, if the concepts and conceptualizations employed in the guidelines are not thoughtfully considered, there is a risk that the guidelines may endorse values that fail to align with the needs of society. Guidelines focus on analyzing the potential benefits and risks of an all-smart AI while paying less attention to the social context necessary to use AI ethically. For example, with the exception of G1, most guidelines do not explicitly address the fact that some public health problems could be equally, or less expensively, addressed via non-technical solutions. As in guidance for pharmaceutical development, guidelines could require a justification for using the technology: either because there are no better options or because it is demonstrably the best strategy.

Future AI guidelines for healthcare could benefit from other approaches if they wish to guide ethical development. For example, patients' limited contribution could be addressed through participatory strategies such as citizen advisory groups. Approaches beyond principles could also be pertinent to achieving the goals of AI ethics. The Swiss Medical Association (FMH) issued practical demands for the development of AI instead of principles: defining AI's role as a medical device, requiring AI to follow evidence-based medicine practices, and assigning doctors and patients roles as coordinators of care (FMH, 2022). Defining AI's and people's roles in the form of 'usage requirements' could be another way to achieve the objective of integrating AI into healthcare. Care ethics focuses on relationships, dependencies, and societal and cultural factors, which could help contextualize AI solutions to their intended applications. Alternatively, process-based ethical frameworks offer a valid basis, because AI is neither a single solution nor a single problem. Other instruments, such as codes of conduct for specific stakeholders, might also deliver the expected results of guiding the people working with AI: a code of conduct addressed to specific stakeholders can go in depth and analyze role-based problems. The construction of AI ethics guidelines in their current form is narrow, focused on creating or identifying a static list of principles rather than engaging in more thorough approaches. A change would require awareness of the potential of ethics as a framework for moral inquiry and a deep understanding of the purpose of AI ethics and its limits. Future guideline iterations might therefore need to refine, shift, and reshape their approach to AI guidelines and AI ethics.

Nesta is a UK-based agency for social good. They developed a pilot project to map global initiatives for AI governance: https://www.nesta.org.uk/data-visualisation-and-interactive/mapping-ai-governance/

AlgorithmWatch is a non-profit research and advocacy organization committed to watching, unpacking, and analyzing automated decision-making (ADM) systems and their impact on society. https://algorithmwatch.org/en/

The OECD maintains a global AI policy observatory: https://oecd.ai/

AI Ethics Lab aims to detect and address ethics risks and opportunities in building and using AI systems to enhance technology development. https://aiethicslab.com/big-picture/

The positivism paradigm aims to obtain explanations and predictions by relying on the hypothetico-deductive method to verify a priori hypotheses that are often stated quantitatively (Park et al., 2020). In contrast, DA methodologies focus on interpreting data in its social context.

AbbasgholizadehRahimi, S., Cwintal, M., Huang, Y., Ghadiri, P., Grad, R., Poenaru, D., Gore, G., Zomahoun, H. T. V., Légaré, F., & Pluye, P. (2022). Application of artificial intelligence in shared decision making: Scoping review. JMIR Medical Informatics, 10 (8), e36199. https://doi.org/10.2196/36199


Asimov, I. (1950). I, Robot. Random House Worlds.


Bareis, J., & Katzenbach, C. (2022). Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values, 47 (5), 855–881. https://doi.org/10.1177/01622439211030007

Bélisle-Pipon, J.-C., Monteferrante, E., Roy, M.-C., & Couture, V. (2022). Artificial intelligence ethics has a black box problem. AI & Society . https://doi.org/10.1007/s00146-021-01380-0

Cheek, J. (2004). At the margins? Discourse analysis and qualitative research. Qualitative Health Research, 14 (8), 1140–1150. https://doi.org/10.1177/1049732304266820

Danaher, J. (2022). Techno-optimism: An analysis, an evaluation and a modest defence. Philosophy & Technology, 35 (2), 54. https://doi.org/10.1007/s13347-022-00550-2

Dixon-Woods, M., Amalberti, R., Goodman, S., Bergman, B., & Glasziou, P. (2011). Problems and promises of innovation: Why healthcare needs to rethink its love/hate relationship with the new. BMJ Quality & Safety, 20 (Suppl 1), i47–i51. https://doi.org/10.1136/bmjqs.2010.046227

European Medicines Agency, EMA (2020). Compassionate use . Available at: https://www.ema.europa.eu/en/human-regulatory/research-development/compassionate-use

European Medicines Agency, EMA (2018, September 17). Good manufacturing practice. Available at: https://www.ema.europa.eu/en/human-regulatory/research-development/compliance/good-manufacturing-practice

European Medicines Agency, EMA. (2017, September 17). Quality by design. Available at: https://www.ema.europa.eu/en/human-regulatory/research-development/quality-design

European Union (2003). COMMISSION DIRECTIVE 2003/94/EC of 8 October 2003 laying down the principles and guidelines of good manufacturing practice in respect of medicinal products for human use and investigational medicinal products for human use . Official Journal of the European Union. Available at: https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2003:262:0022:0026:en:PDF

European Commission. (2021). Ethics by design and ethics of use approaches for artificial intelligence . https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-by-design-and-ethics-of-use-approaches-for-artificial-intelligence_he_en.pdf

Fairclough, N. (2013). Critical discourse analysis and critical policy studies. Critical Policy Studies, 7 (2), 177–197. https://doi.org/10.1080/19460171.2013.798239

Fairclough, N. (2022). Methods of critical discourse analysis (1st ed., pp. 121–138). SAGE Publications.

Food and Drug Administration, United States of America (2019, June 21). Expanded Access for Medical Devices . FDA; FDA. Available at: https://www.fda.gov/medical-devices/investigational-device-exemption-ide/expanded-access-medical-devices

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI (SSRN Scholarly Paper 3518482). https://doi.org/10.2139/ssrn.3518482

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28 (4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Swiss Medical Association (FMH) (2022). Künstliche Intelligenz im ärztlichen Alltag. Einsatzgebiete in der Medizin: Nutzen, Herausforderungen und Forderungen der FMH [Artificial intelligence in everyday medical practice. Areas of application in medicine: benefits, challenges, and demands of the FMH]. Verbindung der Schweizer Ärztinnen und Ärzte, Bern. Available at: https://www.fmh.ch/files/pdf27/20220914_fmh_brosch-ki_d.pdf

Fukuda-Parr, S., & Gibbons, E. (2021). Emerging consensus on ‘ethical AI’: Human rights critique of stakeholder guidelines. Global Policy, 12 (S6), 32–44. https://doi.org/10.1111/1758-5899.12965

Gaffley, M., Adams, R., & Shyllon, O. (2022). Artificial intelligence. African Insight. A research summary of the ethical and human rights implications of AI in Africa . HSRC & Meta AI and Ethics Human Rights Research Project for Africa – Synthesis Report. https://africanaiethics.com/wp-content/uploads/2022/02/Artificial-Intelligence-African-Insight-Report.pdf

Gonzalez Torres, A. P., Kajava, K., & Sawhney, N. (2023). Emerging AI discourses and policies in the EU: Implications for evolving AI governance. In A. Pillay, E. Jembere, & A. J. Gerber (Eds.), Artificial intelligence research (pp. 3–17). Springer. https://doi.org/10.1007/978-3-031-49002-6_1


Graham, R. (2022). Discourse analysis of academic debate of ethics for AGI. AI & Society, 37 (4), 1519–1532. https://doi.org/10.1007/s00146-021-01228-7

Héder, M. (2020). A criticism of AI ethics guidelines. Információs Társadalom: Társadalomtudományi Folyóirat, 20 (4), 4.

Heilinger, J.-C. (2022). The ethics of AI ethics. A constructive critique. Philosophy & Technology, 35 (3), 61. https://doi.org/10.1007/s13347-022-00557-9

Hutler, B., Rieder, T. N., Mathews, D. J. H., Handelman, D. A., & Greenberg, A. M. (2023). Designing robots that do no harm: Understanding the challenges of ethics for robots. AI and Ethics . https://doi.org/10.1007/s43681-023-00283-8

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 9. https://doi.org/10.1038/s42256-019-0088-2

Jung, G. (2018). Our AI overlord: The cultural persistence of Isaac Asimov’s three laws of robotics in understanding artificial intelligence . https://emergencejournal.english.ucsb.edu/wp-content/uploads/2018/06/Our-AI-Overlord-Jung-Thesis-1.pdf

Kauf, T. L., Yang, J.-C., Kimball, A. B., Sundaram, M., Bao, Y., Okun, M., Mulani, P., Hauber, A. B., & Johnson, F. R. (2015). Psoriasis patients’ willingness to accept side-effect risks for improved treatment efficacy. Journal of Dermatological Treatment, 26 (6), 507–513. https://doi.org/10.3109/09546634.2015.1034071

Krüger, S., & Wilson, C. (2022). The problem with trust: On the discursive commodification of trust in AI. AI & Society . https://doi.org/10.1007/s00146-022-01401-6

Larsson, S. (2020). On the governance of artificial intelligence through ethics guidelines. Asian Journal of Law and Society, 7 (3), 437–451. https://doi.org/10.1017/als.2020.19

Leist, A. K., Klee, M., Kim, J. H., Rehkopf, D. H., Bordas, S. P. A., Muniz-Terrera, G., & Wade, S. (2022). Mapping of machine learning approaches for description, prediction, and causal inference in the social and health sciences. Science Advances, 8 (42), eabk1942. https://doi.org/10.1126/sciadv.abk1942

Lindebaum, D., Vesa, M., & den Hond, F. (2020). Insights from “the machine stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Academy of Management Review, 45 (1), 247–263. https://doi.org/10.5465/amr.2018.0181

Lupton, D. (1992). Discourse analysis: A new methodology for understanding the ideologies of health and illness. Australian Journal of Public Health, 16 (2), 145–150. https://doi.org/10.1111/j.1753-6405.1992.tb00043.x

Mao, Y., & Shi-Kupfer, K. (2021). Online public discourse on artificial intelligence and ethics in China: Context, content, and implications. AI & Society . https://doi.org/10.1007/s00146-021-01309-7

Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency—FAT*’19 . https://doi.org/10.1145/3287560.3287574

Munn, L. (2022). The uselessness of AI ethics. AI and Ethics . https://doi.org/10.1007/s43681-022-00209-w

Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J. C., Malhotra, N., Lui, V., & Gibson, J. (2021). Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics, 22 (1), 14. https://doi.org/10.1186/s12910-021-00577-8

Page, M. J., Moher, D., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., & McKenzie, J. E. (2021). PRISMA 2020 explanation and elaboration: Updated guidance and exemplars for reporting systematic reviews. BMJ, 372 , n160. https://doi.org/10.1136/bmj.n160

Park, Y. S., Konge, L., & Artino, A. R. J. (2020). The positivism paradigm of research. Academic Medicine, 95 (5), 690. https://doi.org/10.1097/ACM.0000000000003093

Ryan, M., & Stahl, B. C. (2020). Artificial intelligence ethics guidelines for developers and users: Clarifying their content and normative implications. Journal of Information, Communication and Ethics in Society, 19 (1), 61–86. https://doi.org/10.1108/JICES-12-2019-0138

Sharon, T. (2021). Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers. Ethics and Information Technology, 23 (Suppl 1), 45–57. https://doi.org/10.1007/s10676-020-09547-x

Singler, B. (2020). “Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse. AI & Society, 35 (4), 945–955. https://doi.org/10.1007/s00146-020-00968-2

Slota, S. C., Fleischmann, K. R., Greenberg, S., Verma, N., Cummings, B., Li, L., & Shenefiel, C. (2021). Something new versus tried and true: Ensuring ‘innovative’ AI is ‘good’ AI. In K. Toeppe, H. Yan, & S. K. W. Chu (Eds.), Diversity, divergence, dialogue (pp. 24–32). Springer. https://doi.org/10.1007/978-3-030-71292-1_3

Starke, G., Schmidt, B., De Clercq, E., & Elger, B. S. (2022). Explainability as fig leaf? An exploration of experts’ ethical expectations towards machine learning in psychiatry. AI and Ethics . https://doi.org/10.1007/s43681-022-00177-1

Stavrakakis, I., Gordon, D., Tierney, B., Becevel, A., Murphy, E., Dodig-Crnkovic, G., Dobrin, R., Schiaffonati, V., Pereira, C., Tikhonenko, S., Gibson, J. P., Maag, S., Agresta, F., Curley, A., Collins, M., & O’Sullivan, D. (2022). The teaching of computer ethics on computer science and related degree programmes. A European survey. International Journal of Ethics Education, 7 (1), 101–129. https://doi.org/10.1007/s40889-021-00135-1

TMG. (2020). Overview of AI policies and developments in Latin America. (Accessed May 2022). Available at: https://www.tmgtelecom.com/wp-content/uploads/2020/03/TMG-Report-on-Overview-of-AI-Policies-and-Developments-in-Latin-America.pdf

Tweed, E. J., Popham, F., Thomson, H., & Katikireddi, S. V. (2022). Including ‘inclusion health’? A discourse analysis of health inequalities policy reviews. Critical Public Health, 32 (5), 700–712. https://doi.org/10.1080/09581596.2021.1929847

Vyas, D. A., Eisenstein, L. G., & Jones, D. S. (2020). Hidden in plain sight—Reconsidering the use of race correction in clinical algorithms. New England Journal of Medicine, 383 (9), 874–882. https://doi.org/10.1056/NEJMms2004740

Wagner, B. (2018). Ethics as an escape from regulation. From “ethics-washing” to ethics-shopping? In Ethics as an escape from regulation. From “ethics-washing” to ethics-shopping? (pp. 84–89). Amsterdam University Press. https://doi.org/10.1515/9789048550180-016

Walton, J. A., & Lazzaro-Salazar, M. (2016). Othering the chronically Ill: A discourse analysis of New Zealand health policy documents. Health Communication, 31 (4), 460–467. https://doi.org/10.1080/10410236.2014.966289

World Health Organization, W. H. O. (2005). Handbook for good clinical research practice (GCP). Guidance for implementation. Available at: https://apps.who.int/iris/bitstream/handle/10665/43392/924159392X_eng.pdf

World Health Organization, W. H. O. (2014). Good manufacturing practices for pharmaceutical products: Main principles. Available at: https://www.who.int/publications/m/item/trs986-annex2

World Health Organization, W. H. O. (2020). Technical guidance and specifications of medical devices for screening and treatment of precancerous lesions in the prevention of cervical cancer . Available at: https://www.who.int/publications-detail-redirect/9789240002630

World Health Organization, W. H. O. (2021). Ethics and governance of artificial intelligence for health. Available at: https://www.who.int/publications-detail-redirect/9789240029200

Yazdannik, A., Yousefy, A., & Mohammadi, S. (2017). Discourse analysis: A useful methodology for health-care system researches. Journal of Education and Health Promotion, 6 , 111. https://doi.org/10.4103/jehp.jehp_124_15

Zhou, J., & Chen, F. (2022). AI ethics: From principles to practice. AI & Society . https://doi.org/10.1007/s00146-022-01602-z

Open access funding provided by University of Basel. This work was enabled by the Swiss National Science Foundation in the framework of the National Research Programme “Digital Transformation” (NRP 77), Project Number 187263, Grant No. 407740_187263/1, recipient: Prof. Bernice Simone Elger.

Author information

Authors and Affiliations

Institute for Biomedical Ethics, University of Basel, Basel, Switzerland

Laura Arbelaez Ossa, Stephen R. Milford, Michael Rost, David M. Shaw & Bernice S. Elger

Institute for Research on Socio-Economic Inequality (IRSEI) in the Department of Social Sciences, University of Luxembourg, Esch-Sur-Alzette, Luxembourg

Anja K. Leist

Care and Public Health Research Institute, Maastricht University, Maastricht, The Netherlands

David M. Shaw

Center for Legal Medicine (CURML), University of Geneva, Geneva, Switzerland

Bernice S. Elger

Corresponding author

Correspondence to Laura Arbelaez Ossa .

Ethics declarations

Competing interests

The authors have no competing interests to declare that are relevant to the content of this article.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (PDF 75 kb)

Supplementary file 2 (DOCX 13 kb)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Arbelaez Ossa, L., Milford, S.R., Rost, M. et al. AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare. Sci Eng Ethics 30 , 24 (2024). https://doi.org/10.1007/s11948-024-00486-0

Received : 08 March 2023

Accepted : 30 April 2024

Published : 04 June 2024

DOI : https://doi.org/10.1007/s11948-024-00486-0


  • Artificial intelligence
  • AI guidelines
  • Regulatory affairs
  • Regulations

  • Published: 05 June 2024

Post-January 6th deplatforming reduced the reach of misinformation on Twitter

Stefan D. McCabe, Diogo Ferrari, Jon Green, David M. J. Lazer & Kevin M. Esterling

Nature volume 630, pages 132–140 (2024)

The social media platforms of the twenty-first century have an enormous role in regulating speech in the USA and worldwide [1]. However, there has been little research on platform-wide interventions on speech [2,3]. Here we evaluate the effect of the decision by Twitter to suddenly deplatform 70,000 misinformation traffickers in response to the violence at the US Capitol on 6 January 2021 (a series of events commonly known as, and referred to here as, ‘January 6th’). Using a panel of more than 500,000 active Twitter users [4,5] and natural experimental designs [6,7], we evaluate the effects of this intervention on the circulation of misinformation on Twitter. We show that the intervention reduced the circulation of misinformation both by the deplatformed users and by those who followed them, though we cannot identify the magnitude of the causal estimates owing to the co-occurrence of the deplatforming intervention with the events surrounding January 6th. We also find that many of the misinformation traffickers who were not deplatformed left Twitter following the intervention. The results inform the historical record surrounding the insurrection, a momentous event in US history, and indicate the capacity of social media platforms to control the circulation of misinformation and, more generally, to regulate public discourse.



Data availability

Aggregate data used in the analysis are publicly available at the OSF project website ( https://doi.org/10.17605/OSF.IO/KU8Z4 ) to any researcher for purposes of reproducing or extending the analysis. The tweet-level data and specific user demographics cannot be publicly shared owing to privacy concerns arising from matching data to administrative records, data use agreements and platforms’ terms of service. Our replication materials include the code used to produce the aggregate data from the tweet-level data, and the tweet-level data can be accessed after signing a data-use agreement. For access requests, please contact D.M.J.L.

Code availability

All code necessary for reproduction of the results is available at the OSF project site https://doi.org/10.17605/OSF.IO/KU8Z4 .

Lazer, D. The rise of the social algorithm. Science 348 , 1090–1091 (2015).

Jhaver, S., Boylston, C., Yang, D. & Bruckman, A. Evaluating the effectiveness of deplatforming as a moderation strategy on Twitter. Proc. ACM Hum.-Comput. Interact. 5 , 381 (2021).

Broniatowski, D. A., Simons, J. R., Gu, J., Jamison, A. M. & Abroms, L. C. The efficacy of Facebook’s vaccine misinformation policies and architecture during the COVID-19 pandemic. Sci. Adv. 9 , eadh2132 (2023).

Hughes, A. G. et al. Using administrative records and survey data to construct samples of tweeters and tweets. Public Opin. Q. 85 , 323–346 (2021).

Shugars, S. et al. Pandemics, protests, and publics: demographic activity and engagement on Twitter in 2020. J. Quant. Descr. Digit. Media https://doi.org/10.51685/jqd.2021.002 (2021).

Imbens, G. W., & Lemieux, T. Regression discontinuity designs: a guide to practice. J. Econom. 142 , 615–635 (2008).

Gerber, A. S. & Green, D. P. Field Experiments: Design, Analysis, and Interpretation (W.W. Norton, 2012).

Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B. & Lazer, D. Fake news on Twitter during the 2016 U.S. presidential election. Science 363 , 374–378 (2019).

Munger, K. & Phillips, J. Right-wing YouTube: a supply and demand perspective. Int. J. Press Polit. 27 , 186–219 (2022).

Guess, A. M. et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science 381 , 398–404 (2023).

Persily, N. in New Technologies of Communication and the First Amendment: The Internet, Social Media and Censorship (eds Bollinger, L. C. & Stone, G. R.) (Oxford Univ. Press, 2022).

Sevanian, A. M. Section 230 of the Communications Decency Act: a ‘good Samaritan’ law without the requirement of acting as a ‘good Samaritan’. UCLA Ent. L. Rev. https://doi.org/10.5070/LR8211027178 (2014).

Lazer, D. M. J. et al. The science of fake news. Science 359 , 1094–1096 (2018).

Suzor, N. Digital constitutionalism: using the rule of law to evaluate the legitimacy of governance by platforms. Soc. Media Soc. 4 , 2056305118787812 (2018).

Napoli, P. M. Social Media and the Public Interest (Columbia Univ. Press, 2019).

DeNardis, L. & Hackl, A. M. Internet governance by social media platforms. Telecomm. Policy 39 , 761–770 (2015).

TwitterSafety. An update following the riots in Washington, DC. Twitter https://blog.x.com/en_us/topics/company/2021/protecting--the-conversation-following-the-riots-in-washington-- (2021).

Twitter. Civic Integrity Policy. Twitter https://help.twitter.com/en/rules-and-policies/election-integrity-policy (2021).

Promoting safety and expression. Facebook https://about.facebook.com/actions/promoting-safety-and-expression/ (2021).

Dwoskin, E. Trump is suspended from Facebook for 2 years and can’t return until ‘risk to public safety is receded’. The Washington Post https://www.washingtonpost.com/technology/2021/06/03/trump-facebook-oversight-board/ (4 June 2021).

Huszár, F. et al. Algorithmic amplification of politics on Twitter. Proc. Natl Acad. Sci. USA 119 , e2025334119 (2021).

Guess, A. M., Nyhan, B. & Reifler, J. Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav. 4 , 472–480 (2020).

Sunstein, C. R. #Republic: Divided Democracy in the Age of Social Media (Princeton Univ. Press, 2017).

Timberg, C., Dwoskin, E. & Albergotti, R. Inside Facebook, Jan. 6 violence fueled anger, regret over missed warning signs. The Washington Post https://www.washingtonpost.com/technology/2021/10/22/jan-6-capitol-riot-facebook/ (22 October 2021).

Chandrasekharan, E. et al. You can’t stay here: the efficacy of Reddit’s 2015 ban examined through hate speech. Proc. ACM Hum. Comput. Interact. 1 , 31 (2017).

Matias, J. N. Preventing harassment and increasing group participation through social norms in 2,190 online science discussions. Proc. Natl Acad. Sci. USA 116 , 9785–9789 (2019).

Yildirim, M. M., Nagler, J., Bonneau, R. & Tucker, J. A. Short of suspension: how suspension warnings can reduce hate speech on Twitter. Perspect. Politics 21 , 651–663 (2023).

Guess, A. M. et al. Reshares on social media amplify political news but do not detectably affect beliefs or opinions. Science 381 , 404–408 (2023).

Nyhan, B. et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature 620 , 137–144 (2023).

Dang, S. Elon Musk’s X restructuring curtails disinformation research, spurs legal fears. Reuters https://www.reuters.com/technology/elon-musks-x-restructuring-curtails-disinformation-research-spurs-legal-fears-2023-11-06/ (6 November 2023).

Duffy, C. For misinformation peddlers on social media, it’s three strikes and you’re out. Or five. Maybe more. CNN Business https://edition.cnn.com/2021/09/01/tech/social-media-misinformation-strike-policies/index.html (1 September 2021).

Conger, K. Twitter removes Chinese disinformation campaign. The New York Times https://www.nytimes.com/2020/06/11/technology/twitter-chinese-misinformation.html (11 June 2020).

Timberg, C. & Mahtani, S. Facebook bans Myanmar’s military, citing threat of new violence after Feb. 1 coup. The Washington Post https://www.washingtonpost.com/technology/2021/02/24/facebook-myanmar-coup-genocide/ (24 February 2021).

Barry, D. & Frenkel, S. ‘Be there. Will be wild!’: Trump all but circled the date. The New York Times https://www.nytimes.com/2021/01/06/us/politics/capitol-mob-trump-supporters.html (6 January 2021).

Timberg, C. Twitter ban reveals that tech companies held keys to Trump’s power all along. The Washington Post https://www.washingtonpost.com/technology/2021/01/14/trump-twitter-megaphone/ (14 January 2021).

Dwoskin, E. & Tiku, N. How Twitter, on the front lines of history, finally decided to ban Trump. The Washington Post https://www.washingtonpost.com/technology/2021/01/16/how-twitter-banned-trump/ (16 January 2021).

Harwell, D. New video undercuts claim Twitter censored pro-Trump views before Jan. 6. The Washington Post https://www.washingtonpost.com/technology/2023/06/23/new-twitter-video-jan6/ (23 June 2023).

Romm, T. & Dwoskin, E. Twitter purged more than 70,000 accounts affiliated with QAnon following Capitol riot. The Washington Post https://www.washingtonpost.com/technology/2021/01/11/trump-twitter-ban/ (11 January 2021).

Denham, H. These are the platforms that have banned Trump and his allies. The Washington Post https://www.washingtonpost.com/technology/2021/01/11/trump-banned-social-media/ (13 January 2021).

Graphika Team. DisQualified: network impact of Twitter’s latest QAnon enforcement. Graphika Blog https://graphika.com/posts/disqualified-network-impact-of-twitters-latest-qanon-enforcement/ (2021).

Dwoskin, E. & Timberg, C. Misinformation dropped dramatically the week after Twitter banned Trump and some allies. The Washington Post https://www.washingtonpost.com/technology/2021/01/16/misinformation-trump-twitter/ (16 January 2021).

Harwell, D. & Dawsey, J. Trump is sliding toward online irrelevance. His new blog isn’t helping. The Washington Post https://www.washingtonpost.com/technology/2021/05/21/trump-online-traffic-plunge/ (21 May 2021).

Olteanu, A., Castillo, C., Boy, J. & Varshney, K. The effect of extremist violence on hateful speech online. In Proc. 12th International AAAI Conference on Web and Social Media https://doi.org/10.1609/icwsm.v12i1.15040 (ICWSM, 2018).

Lin, H. et al. High level of correspondence across different news domain quality rating sets. PNAS Nexus 2 , pgad286 (2023).

Abilov, A., Hua, Y., Matatov, H., Amir, O. & Naaman, M. VoterFraud2020: a multi-modal dataset of election fraud claims on Twitter. Proc. Int. AAAI Conf. Weblogs Soc. Media 15 , 901–912 (2021).

Calonico, S., Cattaneo, M. D. & Titiunik, R. Robust nonparametric confidence intervals for regression-discontinuity designs. Econometrica 82 , 2295–2326 (2014).

Jackson, S., Gorman, B. & Nakatsuka, M. QAnon on Twitter: An Overview (Institute for Data, Democracy and Politics, George Washington Univ. 2021).

Shearer, E. & Mitchell, A. News use across social media platforms in 2020. Pew Research Center https://www.pewresearch.org/journalism/2021/01/12/news-use-across-social-media-platforms-in-2020/ (2021).

McGregor, S. C. Social media as public opinion: How journalists use social media to represent public opinion. Journalism 20 , 1070–1086 (2019).

Hammond-Errey, M. Elon Musk’s Twitter is becoming a sewer of disinformation. Foreign Policy https://foreignpolicy.com/2023/07/15/elon-musk-twitter-blue-checks-verification-disinformation-propaganda-russia-china-trust-safety/ (15 July 2023).

Joseph, K. et al. (Mis)alignment between stance expressed in social media data and public opinion surveys. Proc. 2021 Conference on Empirical Methods in Natural Language Processing 312–324 (Association for Computational Linguistics, 2021).

Robertson, R. E. et al. Auditing partisan audience bias within Google search. Proc. ACM Hum. Comput. Interact. 2 , 148 (2018).

McCrary, J. Manipulation of the running variable in the regression discontinuity design: a density test. J. Econom. 142 , 698–714 (2008).

Roth, J., Sant’Anna, P. H. C., Bilinski, A. & Poe, J. What’s trending in difference-in-differences? A synthesis of the recent econometrics literature. J. Econom. 235 , 2218–2244 (2023).

Wing, C., Simon, K. & Bello-Gomez, R. A. Designing difference in difference studies: best practices for public health policy research. Annu. Rev. Public Health 39 , 453–469 (2018).

Baker, A. C., Larcker, D. F. & Wang, C. C. Y. How much should we trust staggered difference-in-differences estimates? J. Financ. Econ. 144 , 370–395 (2022).

Callaway, B. & Sant’Anna, P. H. C. Difference-in-differences with multiple time periods. J. Econom. 225 , 200–230 (2021).

R Core Team. R: A Language and Environment for Statistical Computing, v.4.3.1. https://www.R-project.org/ (2023).

rdrobust: Robust data-driven statistical inference in regression-discontinuity designs. https://cran.r-project.org/package=rdrobust (2023).

Calonico, S., Cattaneo, M. D. & Titiunik, R. Optimal data-driven regression discontinuity plots. J. Am. Stat. Assoc. 110 , 1753–1769 (2015).

Calonico, S., Cattaneo, M. D. & Farrell, M. H. On the effect of bias estimation on coverage accuracy in nonparametric inference. J. Am. Stat. Assoc. 113 , 767–779 (2018).

Zeileis, A. & Hothorn, T. Diagnostic checking in regression relationships. R News 2 , 7–10 (2002).

Cameron, A. C., Gelbach, J. B. & Miller, D. L. Robust inference with multiway clustering. J. Bus. Econ. Stat. 29 , 238–249 (2011).

Zeileis, A. Econometric computing with HC and HAC covariance matrix estimators. J. Stat. Softw . https://doi.org/10.18637/jss.v011.i10 (2004).

Eckles, D., Karrer, B. & Johan, U. Design and analysis of experiments in networks: reducing bias from interference. J. Causal Inference https://doi.org/10.1515/jci-2015-0021 (2016).

Acknowledgements

The authors thank N. Grinberg, L. Friedland and K. Joseph for earlier technical work on the development of the Twitter dataset. Earlier versions of this paper were presented at the Social Media Analysis Workshop, UC Riverside, 26 August 2022; at the Annual Meeting of the American Political Science Association, 17 September 2022; and at the Center for Social Media and Politics, NYU, 23 April 2021. Special thanks go to A. Guess for suggesting the DID analysis. D.M.J.L. acknowledges support from the William & Flora Hewlett Foundation and the Volkswagen Foundation. S.D.M. was supported by the John S. and James L. Knight Foundation through a grant to the Institute for Data, Democracy & Politics at the George Washington University.

Author information

These authors contributed equally: Stefan D. McCabe, Diogo Ferrari

Authors and Affiliations

Institute for Data, Democracy & Politics, George Washington University, Washington, DC, USA

Stefan D. McCabe

Department of Political Science, University of California, Riverside, Riverside, CA, USA

Diogo Ferrari & Kevin M. Esterling

Department of Political Science, Duke University, Durham, NC, USA

Network Science Institute, Northeastern University, Boston, MA, USA

David M. J. Lazer

Institute for Quantitative Social Science, Harvard University, Cambridge, MA, USA

School of Public Policy, University of California, Riverside, Riverside, CA, USA

Kevin M. Esterling

Contributions

The order of authors listed here does not indicate level of contribution. Conceptualization of theory and research design: S.D.M., D.M.J.L., D.F., K.M.E. and J.G. Data curation: S.D.M. and J.G. Methodology: D.F. Visualization: D.F. Funding acquisition: D.M.J.L. Project administration: K.M.E., S.D.M. and D.M.J.L. Writing, original draft: K.M.E. and D.M.J.L. Writing, review and editing: K.M.E., D.F., S.D.M., D.M.J.L. and J.G.

Corresponding author

Correspondence to David M. J. Lazer .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature thanks Jason Reifler and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer review reports are available.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data figures and tables

Extended Data Fig. 1 Replication of the DID results varying the number of deplatformed accounts.

DID estimates where the intervention depends on the number of deplatformed users that were followed by the not-deplatformed misinformation sharers. Results are two-way fixed effect point estimates (dots) and 95% confidence intervals (bars) of the difference-in-differences for all activity levels combined. Estimates use ordinary least squares with clustered standard errors at user-level. The Figure shows results including and excluding Trump followers (color code). The x-axis shows the minimum number of deplatformed accounts the user followed from at least one (1+) to at least ten (10+). Total sample sizes for each dosage level: Follow Trump (No): 1: 625,865; 2: 538,460; 3: 495,723; 4: 470,380; 5: 451,468; 6: 437,574; 7: 426,772; 8: 417,200; 9: 408,672; 10: 401,467; Follow Trump (Yes): 1: 688,174; 2: 570,637; 3: 514,352; 4: 481,684; 5: 460,676; 6: 444,656; 7: 432,659; 8: 421,924; 9: 413,241; 10: 405,766.
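The two-way fixed-effects difference-in-differences logic behind these estimates can be sketched on synthetic data. This is a minimal illustration only: the sample size, outcome scale, and simulated effect below are made up and are not the paper's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-period panel: 'treated' users followed at least one deplatformed
# account; the simulated intervention lowers their post-period outcome.
n = 5000
treated = rng.integers(0, 2, n)           # 1 = follower of a deplatformed user
user_effect = rng.normal(0, 1, n)         # time-invariant user fixed effect
pre = user_effect + rng.normal(0, 1, n)   # outcome before January 6th
true_effect = -0.5                        # simulated treatment effect
post = user_effect + 0.3 + true_effect * treated + rng.normal(0, 1, n)

# Difference-in-differences: change for treated minus change for controls.
# With a balanced two-period panel this equals the two-way fixed-effects
# point estimate.
did = (post[treated == 1] - pre[treated == 1]).mean() - \
      (post[treated == 0] - pre[treated == 0]).mean()
print(round(did, 2))  # close to the simulated effect of -0.5
```

The user fixed effect cancels out of each within-user change, which is why the simple difference of changes recovers the treatment effect despite users differing in baseline activity.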

Extended Data Fig. 2 SRD results for total (bottom row) and average (top row) misinformation tweets and retweets, for deplatformed and not-deplatformed users.

Sample size includes 546 observations (days) on average across groups (x-axis), 404 before and 136 after. The effective number of observations is 64.31 days before and after on average. The estimation excludes data between Jan 6 (cutoff point) and 12 (included). January 6th is the score value 0, and January 12th the score value 1. Optimal bandwidth of 32.6 days with triangular kernel and order-one polynomial. Bars indicate 95% robust bias-corrected confidence intervals.
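The sharp regression discontinuity (SRD) machinery described in this caption, a local-linear fit with a triangular kernel on each side of the cutoff, can be sketched as follows. The simulated data, bandwidth, and jump size are illustrative assumptions, not the paper's.

```python
import numpy as np

def rd_estimate(score, y, bandwidth):
    """Sharp RD: weighted local-linear fits with a triangular kernel on
    each side of the cutoff (score = 0); returns the jump in the fitted
    values at the cutoff."""
    def side_intercept(mask):
        x, yy = score[mask], y[mask]
        w = np.clip(1 - np.abs(x) / bandwidth, 0, None)   # triangular kernel
        X = np.column_stack([np.ones_like(x), x])
        # Weighted least squares: solve (X'WX) beta = X'Wy.
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * yy))
        return beta[0]                                    # fitted value at 0
    return side_intercept(score >= 0) - side_intercept(score < 0)

rng = np.random.default_rng(1)
score = rng.uniform(-50, 50, 4000)                 # days relative to cutoff
# Smooth trend plus a simulated drop of 2.0 at the cutoff.
y = 0.05 * score - 2.0 * (score >= 0) + rng.normal(0, 1, 4000)
print(round(rd_estimate(score, y, bandwidth=30), 2))
```

Packages such as rdrobust add data-driven bandwidth selection and the robust bias-corrected confidence intervals reported in the figure; this sketch shows only the core estimator.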

Extended Data Fig. 3 Time series of the daily mean of non-misinformation URL sharing.

Degree five polynomial regression (fitted line) before and after the deplatforming intervention, separated by subgroup (panel rows), for liberal-slant news (right column), and conservative-slant news (left column) sharing activity. Shaded area around the fitted line is the 95% confidence interval of the fitted values. As a placebo test we evaluate the effect of the intervention on sharing non-fake news for each of our subgroups. Since sharing non-misinformation does not violate Twitter’s Civic Integrity policy – irrespective of the ideological slant of the news – we do not expect the intervention to have an impact on this form of Twitter engagement; see SI for how we identify liberal and conservative slant of these domains from ref. 52 . Among the subgroups, users typically did not change their sharing of liberal or conservative non-fake news. Taking these results alongside those in Fig. 2 implies that these subgroups of users did not substitute non-misinformation conservative news sharing during and after the insurrection in place of misinformation.

Extended Data Fig. 4 Time series of misinformation tweets and retweets (panel columns), separately for high, medium and low activity users (panel rows).

Fitted straight lines describe a linear regression fitted using ordinary least squares of daily total misinformation retweeted standardized (y-axis) on days (x-axis) before January 6th and after January 12th. Shaded areas around the fitted line are 95% confidence intervals.

Extended Data Fig. 5 Replicates Fig. 5 but with adjustment covariates.

Corresponding regression tables are Supplementary Information Tables 1 to 3 . Two-way fixed effect point estimates (dots) and 95% confidence intervals (bars) of the difference-in-differences for high, moderate, and low activity users, as well as all these levels combined (x-axis). P-values (stars) are from two-sided t-tests based on ordinary least squares estimates with clustered standard errors at user-level. Estimates compare followers (treated group) and not-followers (reference group) of deplatformed users after January 12th (post-treatment period) and before January 6th (pre-treatment period). No multiple test correction was used. See Supplementary Information Tables 1 – 3 for exact values with all activity level users combined. Total sample sizes of not-followers (reference) and Trump-only followers: combined: 306,089, high: 53,962, moderate: 219,375, low: 32,003; Followers: combined: 662,216, high: 156,941, moderate: 449,560, low: 53,442; Followers (4+): combined: 463,176, high: 115,264, moderate: 302,907, low: 43,218.

Extended Data Fig. 6 Placebo test of SRD results for total (bottom row) and average (top row) shopping and sports tweets and retweets at the deplatforming intervention, among those not deplatformed.

Sample size includes 545 observations (days), 404 before the intervention and 141 after. Optimal bandwidth of 843.6 days with triangular kernel and order-one polynomial. Cutoff points on January 6th (score 0) and January 12th (score 1). Bars indicate 95% robust bias-corrected confidence intervals. These are placebo tests since tweets about sports and shopping should not be affected by the insurrection or deplatforming.

Extended Data Fig. 7 Placebo test of SRD results for total (bottom row) and average (top row) misinformation tweets and retweets using December 20th as an arbitrary cutoff point.

Sample size includes 551 observations (days), 387 before the intervention and 164 after. Optimal bandwidth of 37.2 days with triangular kernel and order-one polynomial. Bars indicate 95% robust bias-corrected confidence intervals about the SRD coefficients. This is a placebo test of the intervention period.

Extended Data Fig. 8 Placebo test of SRD results for total (bottom row) and average (top row) misinformation tweets and retweets using January 18th as a cutoff point.

The parameters are very similar to Extended Data Fig. 7 .

Supplementary information

Supplementary Figs. 1–5 provide descriptive information about our subgroups, a replication of the panel data using the Decahose, and robustness analyses for the SRD. Supplementary Tables 1–5 show full parameter estimates for the DID models, summary statistics for follower type and activity level, and P values for the DID analyses under different multiple comparisons corrections.
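One widely used multiple-comparisons correction of the kind tabulated in the supplement is the Holm step-down procedure. A minimal sketch, with hypothetical p-values rather than the paper's:

```python
def holm_adjust(pvals):
    """Holm step-down adjustment of a list of p-values, returned in the
    original order: sort ascending, multiply the k-th smallest by
    (n - k + 1), enforce monotonicity, and cap at 1."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (n - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Hypothetical raw p-values for four subgroup tests.
print(holm_adjust([0.001, 0.04, 0.03, 0.20]))
```

Holm controls the family-wise error rate like Bonferroni but is uniformly more powerful, which is why it is a common choice for small families of subgroup tests.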

Reporting Summary

Peer review file

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article.

McCabe, S.D., Ferrari, D., Green, J. et al. Post-January 6th deplatforming reduced the reach of misinformation on Twitter. Nature 630 , 132–140 (2024). https://doi.org/10.1038/s41586-024-07524-8

Received : 27 October 2023

Accepted : 06 May 2024

Published : 05 June 2024

Issue Date : 06 June 2024

DOI : https://doi.org/10.1038/s41586-024-07524-8



Nutrients (PMC10180699)

Effects of Oral Collagen for Skin Anti-Aging: A Systematic Review and Meta-Analysis

1 School of Medicine, College of Medicine, Taipei Medical University, Taipei City 110, Taiwan; b101110047@tmu.edu.tw

Ya-Li Huang

2 Department of Public Health, School of Medicine, College of Medicine, Taipei Medical University, Taipei City 11031, Taiwan; ylhuang@tmu.edu.tw

Chi-Ming Pu

3 Division of Plastic Surgery, Department of Surgery, Cathay General Hospital, Taipei City 106, Taiwan; pkman9335@msn.com

4 School of Medicine, College of Life Science and Medicine, National Tsing Hua University, Hsinchu City 300, Taiwan

5 Cochrane Taiwan, Taipei Medical University, Taipei City 110, Taiwan; kynacad@gmail.com (Y.-N.K.); keehsin@tmu.edu.tw (K.-H.C.)

6 Evidence-Based Medicine Center, Wan Fang Hospital, Taipei Medical University, Taipei City 116, Taiwan

7 Research Center of Big Data and Meta-Analysis, Wan Fang Hospital, Taipei Medical University, Taipei City 116079, Taiwan

8 Institute of Health Policy and Management, College of Public Health, National Taiwan University, Taipei City 100, Taiwan

Khanh Dinh Hoang

9 Department of Histopathology, Hai Phong University of Medicine and Pharmacy, Hai Phong 04254, Vietnam; hdkhanh@hpmu.edu.vn

Kee-Hsin Chen

10 Post-Baccalaureate Program in Nursing, College of Nursing, Taipei Medical University, Taipei City 11031, Taiwan

11 Department of Nursing, Wan Fang Hospital, Taipei Medical University, Taipei City 11696, Taiwan

12 Research Center in Nursing Clinical Practice, Wan Fang Hospital, Taipei Medical University, Taipei 11696, Taiwan

13 Evidence-Based Knowledge Translation Center, Wan Fang Hospital, Taipei Medical University, Taipei City 11696, Taiwan

14 School of Medicine, Faculty of Health and Medical Sciences, Taylor’s University, Selangor 47500, Malaysia

Chiehfeng Chen

15 Division of Plastic Surgery, Department of Surgery, Wan Fang Hospital, Taipei Medical University, Taipei City 116, Taiwan

Associated Data

Data will be made available on reasonable request.

This paper presents a systematic review and meta-analysis of 26 randomized controlled trials (RCTs) involving 1721 patients to assess the effects of hydrolyzed collagen (HC) supplementation on skin hydration and elasticity. The results showed that HC supplementation significantly improved skin hydration (test for overall effect: Z = 4.94, p < 0.00001) and elasticity (test for overall effect: Z = 4.49, p < 0.00001) compared to the placebo group. Subgroup analyses demonstrated that the effects of HC supplementation on skin hydration varied based on the source of collagen and the duration of supplementation. However, there were no significant differences in the effects of different sources (p = 0.21) of collagen or corresponding measurements (p = 0.06) on skin elasticity. The study also identified several biases in the included RCTs. Overall, the findings suggest that HC supplementation can have positive effects on skin health, but further large-scale randomized controlled trials are necessary to confirm these findings.
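The "test for overall effect" Z statistics quoted above come from inverse-variance pooling of per-trial effects. A minimal fixed-effect sketch, using effect sizes and standard errors invented for illustration rather than the review's data:

```python
import math

# Hypothetical per-trial effects (standardized mean differences) and SEs.
effects = [0.30, 0.45, 0.25, 0.50, 0.35]
ses = [0.15, 0.20, 0.12, 0.25, 0.18]

# Fixed-effect (inverse-variance) pooling: weight each trial by 1/SE^2.
weights = [1 / se**2 for se in ses]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

z = pooled / pooled_se                         # test for overall effect
p_two_sided = math.erfc(abs(z) / math.sqrt(2)) # two-sided normal p-value
print(round(pooled, 2), round(z, 2))
```

A random-effects model (as typically used when trial heterogeneity is present) would additionally inflate each trial's variance by a between-study component before pooling.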

1. Introduction

The skin, the largest organ of the body and the one most exposed to the external environment, is affected by both intrinsic and extrinsic factors during aging [ 1 ]. Skin aging is characterized by dehydration, a loss of skin elasticity, and the presence of wrinkles [ 2 ]. It has attracted considerable attention because of rising beauty standards, and because many countries are becoming aging societies, the psychosocial effects of skin aging increase the need for effective interventions [ 3 ]. In this context, the use of nutraceuticals as supplements has increased in recent years [ 4 ].

Collagen is the main structural protein of various connective tissues and constitutes 80% of the dry weight of human skin [ 5 ]. It is characterized by a triple-helix structure in which glycine repeats at every third residue, with proline and hydroxyproline prevalent at the other positions [ 6 ]. As the most abundant component of the extracellular matrix, collagen provides mechanical support and directs tissue development [ 7 ].

Aging induces a decline in the enzymes involved in the post-translational processing of collagen and reduces the number of collagen-synthesizing fibroblasts and of the vessels supplying the skin [ 8 ]. The decline in skin quality with age is characterized by reduced collagen synthesis and decreased skin vascularity, leading to decreased elasticity and the formation of wrinkles [ 9 ]. These changes are due to declining fibroblast activity and a decrease in the number of blood vessels in the skin [ 10 ]. The skin therefore undergoes regressive changes with age, such as dehydration, a loss of elasticity, and a reduction in epidermal thickness [ 11 ]. Various nutrients and supplements are used to improve skin health and maintain a youthful skin appearance [ 12 ]. These strategies include topical creams, injectable fillers, and collagen supplements. Topical creams that contain collagen as an ingredient are designed to enhance skin hydration and firmness [ 13 ]; however, topical creams have a limited ability to penetrate the skin, which can reduce their effectiveness [ 13 ]. Injectable fillers, such as hyaluronic acid fillers, stimulate collagen production and provide immediate results by plumping the skin [ 14 ], but they can be expensive and carry a risk of adverse events such as bruising, swelling, and infection [ 14 ]. Collagen supplements, particularly those containing hydrolyzed collagen peptides, are by contrast safe and cost-effective compared with other collagen-based strategies, and they have the advantage of being taken orally, making them easy to incorporate into daily routines [ 15 ].

Among these supplements, hydrolyzed collagen (HC) is the most popular and promising skin anti-aging nutraceutical [ 16 ]. Other studies have indicated that alanine–hydroxyproline–glycine and serine–hydroxyproline–glycine can be detected in human blood 1 h after the oral ingestion of HC [ 17 , 18 ] and deposited on the skin [ 19 ].

A recent study demonstrated that HC improves skin hydration and elasticity [ 16 ]. Nevertheless, not all sources of HC have the same efficacy. Even at the same dose and duration of administration, some specific sources of collagens are more effective than others [ 20 ]. Therefore, studies are required to determine the proper source and therapeutic duration of HC against skin aging.

Because an increasing number of clinical studies on collagen supplements have been conducted globally, their results must be summarized in a systematic review and meta-analysis. Therefore, this systematic review and meta-analysis investigated the effects of collagen supplementation on skin hydration and elasticity.

2. Materials and Methods

2.1. Search Strategy, Inclusion Criteria, and Exclusion Criteria

We performed a literature search in the Embase, PubMed, and Cochrane Library databases by using the following search terms from Medical Subject Headings with no restrictions applied: (collagen OR hydrolyzed collagen) AND (anti-aging). Relevant studies published before December 2022 were identified. We included studies that met the following criteria: (1) applying a randomized clinical trial (RCT) design; (2) including healthy adults (aged ≥ 18 years); (3) including patients who received HC; (4) being full-text articles written in English. We excluded studies that (1) assessed the combined effect of collagen supplement with another supplement or (2) were RCTs that were not written in English. We extracted raw data from the graphs in articles using WebPlotDigitizer [ 21 ].
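For illustration, a Boolean search of this form can be issued programmatically against PubMed through the NCBI E-utilities `esearch` endpoint. The sketch below only constructs the request URL; the parameter values (date bounds, result count) are illustrative assumptions, not the exact query used in this review:

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint for PubMed
base = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = {
    "db": "pubmed",
    "term": "(collagen OR hydrolyzed collagen) AND (anti-aging)",
    "datetype": "pdat",       # filter on publication date
    "mindate": "1900/01/01",  # E-utilities requires both date bounds
    "maxdate": "2022/11/30",  # studies published before December 2022
    "retmax": 200,            # number of PMIDs to return
}
url = base + "?" + urlencode(params)
print(url)
```

Fetching this URL returns an XML list of matching PMIDs, which can then be screened for the inclusion and exclusion criteria above.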

2.2. Data Extraction

Two independent reviewers (S-YP, CC) extracted the basic information of the included studies. The following types of information were extracted: study meta-data (i.e., first author, publication year, and study design) and information on the study sample (i.e., number of patients, gender, mean age, and baseline characteristics of the treatment and placebo groups), intervention (i.e., the dose of collagen supplement and form), and outcomes (i.e., hydration and elasticity). Continuous outcomes are presented in terms of the mean ± standard deviation (SD), and discrete data are presented in terms of percentage.
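Because continuous outcomes were extracted as mean ± SD for the treatment and placebo groups, each study's result can be expressed as a standardized mean difference. A minimal sketch of Hedges' g (the bias-corrected SMD) computed from such summary data; the example numbers at the end are hypothetical:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)
    from group means, SDs, and sample sizes."""
    # pooled standard deviation of the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                  # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)     # small-sample correction factor
    return j * d

# hypothetical data: treatment 65.2 +/- 8.1 (n=30) vs placebo 60.4 +/- 7.9 (n=30)
print(hedges_g(65.2, 8.1, 30, 60.4, 7.9, 30))
```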

2.3. Statistical Analysis, Sensitivity Analysis and Bias Assessment

We used a random-effects model to pool the standardized mean differences of the identified studies. A p value of <0.05 indicated statistical significance. The level of heterogeneity among the included studies was determined using the I 2 statistic, and forest plots were generated for each analysis; I 2 ≥ 50% indicated high heterogeneity [ 22 ]. The overall effect test result was reported as a z value, which supported the inference alongside the 95% confidence interval (CI). A sensitivity analysis was performed to assess the influence of potentially influential studies. Each study was appraised in accordance with the Cochrane Handbook for Systematic Reviews of Interventions [ 23 ]. The Cochrane risk of bias (RoB) 2.0 tool was used to assess the risk of bias in the included RCTs across five domains: the randomization process, deviations from the intended interventions, missing outcome data, measurement of the outcome, and selection of the reported result [ 24 ]. In this meta-analysis, all outcomes were analyzed using RevMan software (version 5.4).
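The random-effects pooling step can be sketched as follows. This is a minimal DerSimonian-Laird implementation (the estimator RevMan uses by default for random-effects models), computing Cochran's Q, the I 2 statistic, the between-study variance tau^2, and the pooled estimate with its 95% CI and z value; the per-study inputs are hypothetical:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird tau^2),
    with Cochran's Q and the I^2 heterogeneity statistic."""
    k = len(effects)
    w = [1 / v for v in variances]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0       # I^2 in %
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    wr = [1 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    z = pooled / se
    return pooled, ci, z, i2

# hypothetical per-study SMDs and their variances
print(dersimonian_laird([0.5, 0.8, 0.3], [0.04, 0.05, 0.06]))
```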

3. Results

3.1. Literature Search Results

Figure 1 shows the flowchart of the literature search process performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [ 25 ]. We identified 1135 studies in our initial search. After removing duplicates and screening titles or abstracts of related articles, we assessed the full-text articles of the remaining 37 studies. Of these studies, 26 articles were included in this systematic review and meta-analysis.

Figure 1. Flowchart of the systematic review and meta-analysis according to the PRISMA guidelines.

3.2. Study Characteristics

A total of 26 RCTs involving 1721 patients were included in this meta-analysis. The duration of the HC supplementation of the included studies ranged from 2 to 12 weeks. Among the included RCTs, 14 focused on collagens extracted from fish, one focused on collagens extracted from bovine, one focused on collagens extracted from chicken, two focused on collagens extracted from porcine, and nine lacked information regarding the source of collagen. The study characteristics of the included RCTs are presented in Table 1 .

Skin hydration is commonly measured with a non-invasive instrument called a corneometer, which applies a high-frequency electric current to the skin surface and measures the amount of water in the top layer, expressed in corneometry units. By providing insight into the skin's moisture barrier, the corneometer is widely used to evaluate the effectiveness of topical products and to assess overall skin health, and it is considered a valuable tool for measuring skin hydration and the efficacy of skincare products [ 18 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 ]. Skin elasticity, in turn, is often measured by cutometry, a non-invasive technique in which a controlled negative pressure is applied to a small area of skin and the resulting deformation, which is directly proportional to the skin's elasticity, is measured. Cutometry is widely used in research and clinical settings to assess skin elasticity and monitor changes in the skin over time, and it is a safe and reliable tool for evaluating skin health [ 18 , 26 , 27 , 29 , 32 , 33 , 34 , 35 , 36 , 37 ].

Table 1. Characteristics of the patients in the included studies.

| Author (Year) | Female/Male | Age Range | Time (Weeks) | Intervention (Origin) | Outcomes Extracted |
| --- | --- | --- | --- | --- | --- |
| Proksch et al. (2014a) | 60/0 | 35–55 | 8, 12 | 2.5 g HC/5 g HC (porcine) | Elasticity/hydration/trans-epidermal water loss (TEWL)/wrinkles |
| Proksch et al. (2014b) | 107/0 | 45–65 | 8, 12 | 2.5 g collagen peptides | Wrinkles/biopsy/procollagen type/elastin/fibrillin |
| Yoon et al. (2014) | 44/0 | >44 | 12 | 3 g HC (fish) | Procollagen type 1/fibrillin 1/metalloproteinases 1 and 12/biopsies/immunohistochemical staining |
| Di Cerbo et al. (2014) | 30/0 | 40–45 | 4.5 | 372 mg HC | Cutaneous pH/hydration/sebum/elasticity/skin tone/elastin/elastase 2/fibronectin/hyaluronic acid/carbonyl proteins |
| Choi et al. (2014) | 24/8 | 30–48 | 5 | 3 g collagen peptides | Skin hydration/elasticity/TEWL/erythema/satisfaction questionnaire |
| Sugihara, Inoue, and Wang (2015) | 53/0 | 35–55 | 8 | 2.5 g HC (fish) | Hydration/elasticity/wrinkles |
| Campos et al. (2015) | 60/0 | 40–50 | 12 | 10 g HC | Corneal stratum hydration/skin viscoelasticity/dermal echogenicity/high-resolution photography |
| Asserin et al. (2015) | 134/0 | 40–65 | 8, 12 | 10 g HC (porcine)/10 g HC (fish) | Skin moisture/TEWL/dermal density/dermal echogenicity/dermal collagen fragmentation |
| Inoue, Sugihara, and Wang (2016) | 80/0 | 35–55 | 8 | 2.5 g collagen peptides | Skin moisture/elasticity/wrinkles |
| Genovese, Corbo, and Sibilla (2017) | 111/9 | 40–60 | 12 | 5 g HC | Elasticity/biopsies/subjective questionnaire |
| Koizumi et al. (2017) | 71/0 | 30–60 | 12 | 3 g collagen peptides | Wrinkles/moisture/elasticity/blood tests (γ-glutamyltransferase, mean corpuscular hemoglobin concentration, mean corpuscular hemoglobin, mean corpuscular volume, red blood cell, platelet, white blood cell, bilirubin, creatinine, total cholesterol, glucose, hemoglobin, hematocrit, alanine aminotransferase, aspartate aminotransferase, total protein and albumin) |
| Czajka et al. (2018) | 120/0 | 21–70 | 12 | 4 g HC | Elasticity/biopsies/self-perception questionnaire |
| Kim (2018) | 70/0 | 40–60 | 12 | 1000 mg collagen (fish) | Skin hydration/wrinkling/elasticity |
| Ito, Seki, and Ueda (2018) | 17/4 | 30–50 | 8 | 10 g collagen peptides (fish) | Elasticity/moisture/TEWL/skin pH/spots/wrinkle/skin pores/texture/density/collagen score/growth hormone (GH), insulin-like growth factor-1 (IGF-1) |
| Bolke et al. (2019) | 72/0 | >35 | 12, 16 | 2.5 g collagen peptides | Hydration/elasticity/wrinkles/skin density/subjective questionnaire |
| Schwartz et al. (2019) | 113/0 | 36–59 | 12 | 0.6 g HC (chicken) | Erythema/hydration/TEWL/elasticity/wrinkles/dermal collagen/subjective questionnaire |
| Zmitek et al. (2020) | 31/0 | 40–65 | 12 | 4 g HC (fish) | Dermal density and thickness/viscoelasticity/hydration/TEWL/wrinkles/moisture/dermal microrelief |
| Laing et al. (2020) | 60/0 | 40–70 | 12 | 2.5 g collagen peptides | Dermal collagen fragmentation/subjective questionnaire |
| Sangsuwan and Asawanonda (2020) | 36/0 | 50–60 | 4, 8 | 5 g HC | Elasticity |
| Nomoto and Iizaka (2020) | 27/12 | >65 | 8 | 12 g collagen peptides | Stratum corneum hydration/elasticity |
| Ping (2020) | 50/0 | 35–50 | 8 | 5.5 g collagen (fish) | Skin hydration/brightness/texture/crow's feet/collagen content |
| Evans (2020) | 50/0 | 45–60 | 12 | 10 g HC (fish) | Wrinkles/elasticity/self-reported appearance |
| Tak (2021) | 84/0 | 40–60 | 12 | 1000 mg collagen tripeptides | Hydration/elasticity/wrinkles |
| Miyanaga (2021) | 99/0 | 35–50 | 12 | 1 g HC/5 g HC | Skin water content/TEWL/elasticity/thickness |
| Jung (2021) | 25/25 | 35–60 | 12 | 1000 mg collagen (fish) | Skin hydration/TEWL/texture/flexibility |
| Bianchi (2022) | 52/0 | 40–60 | 8 | 5 g HC | Skin moisturization/elasticity/wrinkle depth |

3.3. Meta-Analysis Results

3.3.1. Pooled Analysis of Selected Studies

Several articles were excluded from the pooled analyses for various reasons. The studies by Campos, Czajka, Genovese, and Sangsuwan did not measure hydration levels, a key parameter of interest. Similarly, the Asserin study did not measure elasticity, so its results could not be used in the elasticity analysis. The Bianchi and Ping studies were excluded because they lacked standard deviation data for the placebo group, which was necessary for the statistical analysis. The Laing study did not provide sufficient direct data on moisture and elasticity, the primary outcomes of interest; the microscopic observations and questionnaires it provided were insufficient for this research. Finally, the Proksch study did not provide data for the placebo group, making comparison with the intervention group impossible. Therefore, these studies did not meet the criteria for inclusion in the pooled analyses.

In all included RCTs, patients were divided into an HC group and a placebo group, and the outcomes for skin hydration or elasticity were subjected to meta-analysis. The standardized mean differences (SMDs) of 18 studies on the effects of HC versus placebo on skin hydration are shown in Figure 2 . The overall pooled effect size of 0.63 (95% CI 0.38, 0.88) indicated that HC supplementation significantly improved skin hydration (z = 4.94, p < 0.00001). Figure 3 shows the forest plot of the meta-analysis of 19 studies on the effects of HC on skin elasticity; the results indicate that HC supplementation significantly improved skin elasticity (z = 4.49, p < 0.00001) compared with the placebo group, at a pooled effect size of 0.72 (95% CI 0.40, 1.03).
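As a consistency check, the reported z values can be recovered from the pooled SMDs and their 95% CIs, since the standard error is approximately the CI width divided by 2 × 1.96; small discrepancies reflect rounding of the reported bounds:

```python
def z_from_ci(smd, lo, hi):
    # back-calculate the standard error from the 95% CI width
    se = (hi - lo) / (2 * 1.96)
    return smd / se

print(round(z_from_ci(0.63, 0.38, 0.88), 2))  # hydration, reported z = 4.94
print(round(z_from_ci(0.72, 0.40, 1.03), 2))  # elasticity, reported z = 4.49
```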

Figure 2. Forest plot of the included studies evaluating skin hydration in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 30 , 31 , 32 , 33 , 34 , 35 , 39 , 40 , 43 , 44 , 46 , 47 , 48 , 49 , 50 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

Figure 3. Forest plot of the included studies evaluating skin elasticity in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 29 , 31 , 32 , 33 , 34 , 35 , 37 , 39 , 40 , 41 , 42 , 43 , 44 , 47 , 48 , 49 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

3.3.2. Subgroup Analysis

Collagen supplements are available in various forms, including gels, liquids, and capsules, and the type of collagen used can vary by source, with fish, porcine, chicken, and bovine collagen among the most common. A subgroup analysis was performed to determine the effects of the source and duration of HC supplementation on skin hydration. The results showed that supplementation with HC originating from fish, bovine, chicken, porcine, and unknown sources significantly improved skin hydration ( Figure 4 , p < 0.00001). Of these sources, HC originating from chicken had the weakest effect (−0.03, 95% CI −0.40, 0.34) on skin hydration. In addition, we performed subgroup analyses on HC supplementation durations of 2, 4, 6, 8, and 12 weeks. The forest plot analysis revealed significant effects of HC supplementation at 4 ( p = 0.002), 6 ( p = 0.04), 8 ( p < 0.00001), and 12 weeks ( p = 0.001), as shown in Figure 5 . In addition, the effect of the long-term use (>8 weeks) of HC (0.59, 95% CI 0.35, 0.83) was more favorable than that of the short-term use (<8 weeks) of HC (0.39, 95% CI 0.15, 0.63, Figure 6 ).

Figure 4. Forest plot for the subgroup analysis of skin hydration expressed as HC originating from fish, bovine, chicken, porcine, and unknown source in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 30 , 31 , 32 , 33 , 34 , 35 , 40 , 43 , 44 , 46 , 47 , 48 , 49 , 50 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

Figure 5. Forest plot for the subgroup analysis of skin hydration expressed as 2, 4, 6, 8, and 12 weeks in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 39 , 40 , 43 , 44 , 46 , 47 , 48 , 49 , 50 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

Figure 6. Forest plot for the subgroup analysis of skin hydration expressed as long-term (>8 weeks) and short-term (<8 weeks) in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 30 , 31 , 32 , 33 , 34 , 35 , 39 , 40 , 43 , 44 , 46 , 47 , 48 , 49 , 50 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

In addition, three subgroup analyses of skin elasticity were performed according to the source of HC, the corresponding measurement (R2: gross elasticity; R5: net elasticity, the elastic portion of relaxation divided by the elastic portion of suction; R7: elastic portion, the elastic portion of relaxation divided by the first maximum amplitude after suction; and mm, as measured by cutometer), and the duration of HC supplementation. The subgroup analyses indicated no significant differences in the effects of the various sources of HC ( p = 0.21, Figure 7 ) or of the corresponding measurements ( p = 0.06, Figure 8 ) on skin elasticity. The subgroup analysis of duration revealed that 6 weeks of HC supplementation showed no significant positive effect on skin elasticity ( p = 0.05, Figure 9 ). Furthermore, the effect of the long-term use (>8 weeks) of HC (0.73, 95% CI 0.41, 1.06) on skin elasticity was more favorable than that of the short-term use (<8 weeks) of HC (0.67, 95% CI 0.33, 1.00). The results of the subgroup analyses are presented in Figure 10 .
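The R parameters above are ratios of amplitudes on the suction-relaxation curve. A sketch, assuming the standard Cutometer conventions (Uf: maximum deformation, Ue: immediate elastic deformation, Ur: immediate elastic recovery, Ua: total recovery); the amplitude values are hypothetical:

```python
def cutometer_params(uf, ue, ur, ua):
    """Elasticity ratios from suction-curve amplitudes (standard conventions)."""
    return {
        "R2": ua / uf,  # gross elasticity: total recovery / maximum deformation
        "R5": ur / ue,  # net elasticity: elastic recovery / elastic deformation
        "R7": ur / uf,  # elastic portion: elastic recovery / maximum deformation
    }

# hypothetical amplitudes in mm
print(cutometer_params(uf=0.40, ue=0.25, ur=0.20, ua=0.35))
```

Higher ratios (closer to 1) indicate more elastic skin, which is why these parameters serve as the elasticity outcomes pooled in the meta-analysis.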

Figure 7. Forest plot for the subgroup analysis of skin elasticity expressed as HC originating from fish, bovine, chicken, porcine, and unknown source in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 29 , 31 , 32 , 33 , 34 , 35 , 37 , 39 , 40 , 41 , 42 , 43 , 44 , 47 , 48 , 49 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

Figure 8. Forest plot for the subgroup analysis of skin elasticity expressed as R2 (gross elasticity), R5 (net elasticity; elastic portion of relaxation/elastic portion of suction), R7 (elastic portion; elastic portion of relaxation/first maximum amplitude after suction), and mm in patients supplemented with hydrolyzed collagen (HC) and patients in the placebo group [ 26 , 28 , 29 , 31 , 33 , 34 , 35 , 37 , 39 , 41 , 43 , 48 , 49 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

Figure 9. Forest plot for the subgroup analysis of skin elasticity expressed as 2, 4, 6, 8, and 12 weeks in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 29 , 31 , 32 , 33 , 34 , 35 , 37 , 39 , 40 , 41 , 42 , 43 , 44 , 47 , 48 , 49 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

Figure 10. Forest plot for the subgroup analysis of skin elasticity expressed as long-term (>8 weeks) and short-term (<8 weeks) in patients supplemented with HC and patients in the placebo group [ 26 , 27 , 28 , 29 , 31 , 32 , 33 , 34 , 35 , 37 , 39 , 40 , 41 , 42 , 43 , 44 , 47 , 48 , 49 ]. (HC: hydrolyzed collagen, CI: confidence intervals, SD: standard deviation, I 2 : heterogeneity).

In conducting systematic reviews and meta-analyses, it is important to examine the quality of the included studies and their potential biases. One common method is risk of bias (RoB) assessment, which evaluates aspects of a study that could lead to bias, such as incomplete outcome data and selective outcome reporting. Each aspect is evaluated against predefined criteria, and an overall judgment of the study's risk of bias is made. The goal is an impartial evaluation of the study's design, implementation, and reporting to help determine its reliability and suitability for inclusion in systematic reviews or meta-analyses [ 24 ]. At the study level, we found risk of bias arising from the randomization process in one study [ 33 ], from deviations from the intended intervention in seven studies [ 27 , 30 , 31 , 33 , 35 , 44 , 48 ], from missing outcome data in thirteen studies [ 18 , 27 , 28 , 30 , 31 , 33 , 34 , 35 , 37 , 44 , 47 , 48 , 51 ], and from the selection of the reported results in two studies [ 18 , 51 ]. Figure 11 provides additional details on the RoB assessment results for the included RCTs.

Figure 11. Risk of bias [ 18 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 37 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ]. * D1: Randomization process; D2: Deviations from the intended interventions; D3: Missing outcome data; D4: Measurement of outcome; D5: Selection of the reported result.

4. Discussion

To evaluate the effects of collagen supplements on skin aging, we analyzed 26 RCTs to assess the efficacy of oral collagen supplements on skin hydration and elasticity, both of which characterize skin aging. The trials measured skin hydration and elasticity on various areas of the body including the cheek, forearm, and forehead. By analyzing these parameters, our findings revealed that oral collagen supplements improved skin hydration and elasticity. The beneficial effects were significant after 8 weeks or more of HC supplementation.

4.1. Hydration

The key molecule involved in skin moisture is hyaluronic acid, a glycosaminoglycan with a unique capacity to retain water molecules [ 52 ]. The most striking histochemical change observed in aging skin is the gradual loss of epidermal hyaluronic acid [ 53 ]. Orally administered collagen hydrolysates are rich in proline-hydroxyproline, which stimulates hyaluronic acid production in dermal fibroblast cells [ 54 ].

Our study findings revealed that supplementation with oral collagens improved skin hydration, which is consistent with previous findings. Cao et al. reported that the concentration of moisture in the skin of mice treated with collagen peptides (CPs) was significantly higher compared with that of the control mice ( p < 0.05) [ 55 ]. Sun et al. revealed that collagen as a single supplement showed remarkable effects on skin hydration, with an SMD of 0.77 (95% CI 0.60, 0.94; p < 0.00001) compared with a placebo [ 56 ].

Our findings revealed that fish was the optimal source of collagen for improving skin hydration. A previous study indicated that collagens sourced from fish skin have more diverse amino acid compositions than mammalian collagens [ 57 ]. Another study estimated that the yield of collagen derived from fish skin was 50%, that from fish bones was 40%, and that from fish fins was 36.4% [ 58 ]. Notably, marine collagen and collagen peptides have high bioavailability, potency, and a favorable safety profile [ 59 ].

In our analysis, only one study, by Schwartz (2019), investigated collagen sourced from chicken, and it showed the weakest effect among all included studies. However, a study by Cao et al. on the oral intake of CPs derived from chicken bone in mice showed that skin moisture was significantly higher in CP-treated mice than in control mice ( p < 0.05) [ 55 ]. Schwartz et al. administered 1 g of collagen from hydrolyzed chicken sternal cartilage daily for 12 weeks to all human participants, whose skin hydration significantly increased by 12.5% ( p = 0.003) between weeks 6 and 12 [ 36 ]. It remains unclear whether these results can be generalized to the wider population, because the studies were conducted in mice and in human samples whose characteristics may not reflect the general population.

4.2. Elasticity

Fibril-forming type I collagen is the major collagen in the skin, comprising 90% of the total collagen, and plays a role in the structural organization, integrity, and strength of the skin [ 60 ]. The elastic fiber network imparts elasticity and resilience to the tissues and comprises elastin and microfibrils, which are composed of various proteins [ 61 ]. The elasticity of the skin depends on the function of this network, whose formation is a complex process involving many factors. One study showed that the intake of HC downregulated placenta growth factor-2, insulin-like growth factor binding protein 2, insulin-like growth factor binding protein 3, platelet factor 4, serpin E1, and transforming growth factor β-1, and increased type I collagen mRNA and protein levels [ 62 ].

Our findings revealed that supplementation with oral collagen improves skin elasticity, which is consistent with previous findings. De Luca et al. found that patients taking marine collagen peptides showed significantly improved skin elasticity ( p < 0.0001) [ 63 ]. Maia Campos et al. demonstrated that a group treated with oral collagen showed significant differences in the mechanical properties of the skin compared with the baseline and placebo groups after 90 days of treatment, although only in the net elasticity parameter in the periorbital region [ 64 ]. Lee et al. showed that 12 weeks of oral collagen film consumption significantly increased the elasticity of the skin surface (R2), from 0.66 ± 0.05 before use to 0.75 ± 0.04 after 12 weeks ( p < 0.05) [ 65 ]. Sone et al. (2018) studied chronologically aged mice and showed that the oral administration of collagen peptides derived from bovine bone can improve the laxity of chronologically aged skin by increasing the skin collagen content and the ratio of type I to type III collagen. The study also suggested that collagen peptides may increase antioxidant properties in the body and that proline intake can improve the elasticity of chronologically aged skin in mice [ 66 ].

Among the included studies, Yoon et al. showed that in humans, 12 weeks of supplementation with oral collagen improved skin elasticity (3.25, 95% CI 2.33, 4.18) more than the other durations. This finding is consistent with that of an open, blinded, noncomparative study, which showed a 38.31% improvement in elasticity after 3 months of oral collagen consumption [ 67 ]. Another study induced the characteristics of skin aging in nude mice by combining D-galactose treatment with ultraviolet radiation; after the oral administration of CP, the concentrations of skin collagen and elastin increased [ 68 ]. While these studies suggest that oral collagen supplementation may improve skin elasticity, the limitations of the research must be considered. The studies used different durations and forms of collagen supplementation, making their results difficult to compare. Furthermore, the sample sizes were relatively small, and the human studies relied on self-reported measures of skin elasticity. Additionally, the study on nude mice may not accurately reflect the effects of oral collagen supplementation in humans.

4.3. Mechanism

Protein hydrolysates are easier to digest and absorb than intact proteins, which increases the postprandial availability of amino acids [ 69 ]. One study found transient increases in Gly-Pro-Hyp levels in the blood of both humans and mice and found that other collagen peptides were also transported to the skin after the ingestion of HC [ 70 ]. Kamiyama et al. used [14C] Gly-Pro-Hyp as a tracer for the tripeptide and compared its absorption with that of 14C-labeled proline in rats. At 14 days after the administration of [14C] Gly-Pro-Hyp, almost all radioactivity had disappeared from the organs except the skin, which retained 70% of the radioactivity observed at 6 h [ 71 ]. Another similar study observed radioactivity in the connective tissues, including the bones and skin, within 24 h of a single administration of [14C] Gly-Pro-Hyp [ 72 ].

4.4. Sensitivity Analysis

In this study, two included RCTs, namely Campos et al. [ 29 ] (2.17, 95% CI 1.52, 2.81) and Choi et al. [ 32 ] (1.61, 95% CI 0.44, 2.78), yielded particularly favorable effects of oral collagen supplementation on skin elasticity. Campos (2015) used a mixture of 10 g of collagen with vitamins A, C, and E, zinc, and excipients, which may have had beneficial effects because of synergism with collagen. One study found that vitamin C triggers a considerable thickening of the epidermis and induces the production of collagen and the formation of elastic microfibrils [ 73 ]. Vitamin A, by contrast, maintains the health of the epithelial cells on the skin surface and increases the production of collagen and the extracellular matrix [ 74 , 75 ]. Choi (2014) enrolled participants aged 30–48 years, younger than those in the other included studies; this study may have yielded better results because of factors such as a lower prevalence of underlying health conditions or greater overall health among the younger participants. A clinical study showed that the composition of the basement membrane changes with age, with the concentrations of collagens such as collagen IV and collagen XII decreasing over time [ 76 ]. A sensitivity analysis was therefore performed to assess the influence of these two studies, and the corresponding forest plots are provided in the Supplementary Materials . Excluding these studies resulted in no significant change, and the effects of collagen supplementation remained favorable.
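The sensitivity analysis described here amounts to re-pooling the estimate after excluding the influential studies. A leave-one-out sketch using a simple inverse-variance pool (the per-study inputs are hypothetical):

```python
def inverse_variance_pool(effects, variances):
    # fixed-effect (inverse-variance) weighted mean
    w = [1 / v for v in variances]
    return sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

def leave_one_out(effects, variances):
    """Re-pool the effect estimate with each study excluded in turn."""
    overall = inverse_variance_pool(effects, variances)
    reruns = [
        inverse_variance_pool(effects[:i] + effects[i + 1:],
                              variances[:i] + variances[i + 1:])
        for i in range(len(effects))
    ]
    return overall, reruns

# hypothetical per-study SMDs and variances
print(leave_one_out([0.5, 0.8, 0.3], [0.04, 0.05, 0.06]))
```

If no single exclusion moves the pooled estimate outside its original CI, the conclusion is considered robust, which is the pattern reported for these two studies.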

4.5. Limitations

This study had several limitations. First, the interventions used in the included studies exhibited some heterogeneity, primarily because of their distinct measurement units and supplement compositions. Second, some studies enrolled fewer than 40 patients; such small sample sizes may have introduced a slight risk of bias. Third, the patients' lifestyle habits were not included in the analysis; for example, HC supplementation in patients with healthier lifestyle habits could have produced more evident improvements in skin appearance. Thus, additional studies, specifically large clinical trials, are needed.

5. Conclusions

The findings of this study revealed that HC supplementation can improve skin hydration and elasticity. In addition, the long-term use of collagen yields more favorable effects on skin hydration and elasticity than short-term use. Nevertheless, large-scale randomized controlled trials are required to confirm the clinical benefits of oral collagen supplements.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu15092080/s1 , Figure S1. Elasticity-sensitivity analysis; Figure S2. Hydration-sensitivity analysis.

Funding Statement

This research was funded by Taipei Municipal Wanfang Hospital (managed by Taipei Medical University), grant number 111TMU-WFH-06.

Author Contributions

Conceptualization: S.-Y.P.; Data curation: S.-Y.P. and Y.-N.K.; Formal analysis: S.-Y.P. and C.C.; Funding acquisition: Y.-L.H.; Investigation: C.C.; Methodology: S.-Y.P., Y.-L.H., C.-M.P. and C.C.; Project administration: C.-M.P., Y.-N.K., K.-H.C. and C.C.; Software: C.-M.P., Y.-N.K. and C.C.; Supervision: C.C. and C.-M.P.; Validation: S.-Y.P., Y.-L.H., C.-M.P. and C.C.; Visualization: S.-Y.P.; Writing—original draft: S.-Y.P., C.-M.P. and Y.-L.H.; Writing—review & editing: K.D.H., K.-H.C. and C.C. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

This study did not require ethical approval.

Informed Consent Statement

This study did not involve humans.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


OpenAI, Anthropic Research Reveals More About How LLMs Affect Security and Bias

By Megan Crouse

Because large language models operate using neuron-like structures that may link many different concepts and modalities together, it can be difficult for AI developers to adjust their models’ behavior: if you don’t know which neurons connect which concepts, you won’t know which neurons to change.

On May 21, Anthropic published a remarkably detailed map of the inner workings of a fine-tuned version of its Claude AI, specifically the Claude 3 Sonnet model. About two weeks later, OpenAI published its own research on figuring out how GPT-4 interprets patterns.

With Anthropic’s map, the researchers can explore how neuron-like data points, called features, affect a generative AI’s output. Otherwise, people are only able to see the output itself.

Some of these features are “safety relevant,” meaning that if people can reliably identify those features, it could help tune generative AI to avoid potentially dangerous topics or actions. The features are also useful for adjusting classification, and classification can affect bias.

What did Anthropic discover?

Anthropic’s researchers extracted interpretable features from Claude 3, a current-generation large language model. Interpretable features translate the numerical patterns the model operates on into concepts humans can understand.

Interpretable features may apply to the same concept in different languages and to both images and text.

Anthropic shows that a particular feature activates on words and images connected to the Golden Gate Bridge. In its visualization, shading indicates the strength of the activation, from no activation in white to strong activation in dark orange.

“Our high-level goal in this work is to decompose the activations of a model (Claude 3 Sonnet) into more interpretable pieces,” the researchers wrote.

“One hope for interpretability is that it can be a kind of ‘test set for safety’, which allows us to tell whether models that appear safe during training will actually be safe in deployment,” they said.

SEE: Anthropic’s Claude Team enterprise plan packages up an AI assistant for small-to-medium businesses.

Features are produced by sparse autoencoders, which are a type of neural network architecture. During the AI training process, sparse autoencoders are guided by, among other things, scaling laws. So, identifying features can give the researchers a look into the rules governing what topics the AI associates together. To put it very simply, Anthropic used sparse autoencoders to reveal and analyze features.
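As a rough illustration of the decomposition idea (not Anthropic’s actual architecture or training procedure), a sparse autoencoder maps a dense activation vector into a larger set of feature activations of which only a few are nonzero, then reconstructs the original vector from those features. The weights below are hand-picked and hypothetical; a real sparse autoencoder learns them and enforces sparsity during training, for example with an L1 penalty or a top-k constraint:

```python
# Toy sketch of a sparse-autoencoder-style decomposition of model
# activations into a few active "features". Weights are hand-picked
# for illustration, not learned.

def relu(x):
    return [max(0.0, v) for v in x]

def matvec(m, x):
    return [sum(w * v for w, v in zip(row, x)) for row in m]

def encode(x, w_enc, top_k=2):
    """Project activations through ReLU, then keep only the
    top_k strongest features (the sparsity constraint)."""
    h = relu(matvec(w_enc, x))
    threshold = sorted(h, reverse=True)[top_k - 1]
    return [v if v >= threshold and v > 0 else 0.0 for v in h]

def decode(f, w_dec):
    """Reconstruct the activation vector from the sparse features."""
    return matvec(w_dec, f)

# 4 hypothetical features over a 3-dimensional activation space.
w_enc = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0],
         [0.5, 0.5, 0.0]]
w_dec = [[1.0, 0.0, 0.0, 0.5],
         [0.0, 1.0, 0.0, 0.5],
         [0.0, 0.0, 1.0, 0.0]]

activation = [0.9, 0.1, 0.4]
features = encode(activation, w_enc)      # sparse, interpretable view
reconstruction = decode(features, w_dec)  # approximate original
```

Because only a handful of features fire for any given input, each one is far easier to label with a human-readable concept than a raw neuron would be.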

“We find a diversity of highly abstract features,” the researchers wrote. “They (the features) both respond to and behaviorally cause abstract behaviors.”

The details of the hypotheses used to try to figure out what is going on under the hood of LLMs can be found in Anthropic’s research paper.

What did OpenAI discover?

OpenAI’s research, published June 6, focuses on sparse autoencoders. The researchers go into detail in their paper on scaling and evaluating sparse autoencoders; put very simply, the goal is to make features more understandable, and therefore more steerable, to humans. They are planning for a future where “frontier models” may be even more complex than today’s generative AI.

“We used our recipe to train a variety of autoencoders on GPT-2 small and GPT-4 activations, including a 16 million feature autoencoder on GPT-4,” OpenAI wrote.

So far, they can’t interpret all of GPT-4’s behaviors: “Currently, passing GPT-4’s activations through the sparse autoencoder results in a performance equivalent to a model trained with roughly 10x less compute.” But the research is another step toward understanding the “black box” of generative AI, and potentially improving its security.

How manipulating features affects bias and cybersecurity

Anthropic found three distinct features that might be relevant to cybersecurity: unsafe code, code errors, and backdoors. These features might activate in conversations that do not involve unsafe code; for example, the backdoor feature activates for conversations or images about “hidden cameras” and “jewelry with a hidden USB drive.” But Anthropic was able to experiment with “clamping” these specific features, that is, increasing or decreasing the intensity of their activations, which could help tune models to avoid or tactfully handle sensitive security topics.
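A minimal sketch of the clamping idea: scale one feature’s activation before the decoder maps the features back into the model’s activations, steering the output away from or toward the associated concept. The decoder weights, feature labels, and values below are all hypothetical:

```python
# Sketch of "clamping": scaling one feature's activation before
# decoding, which steers the model's behavior. All weights and
# feature labels here are hypothetical.

def matvec(m, x):
    return [sum(w * v for w, v in zip(row, x)) for row in m]

def clamp_feature(features, index, factor):
    """Multiply one feature's activation by `factor`
    (e.g. 0.0 to suppress it, 10.0 to amplify it)."""
    out = list(features)
    out[index] *= factor
    return out

# Decoder for 3 hypothetical features over a 2-dim activation space.
w_dec = [[1.0, 0.2, 0.0],
         [0.0, 0.5, 1.0]]

features = [0.8, 0.3, 0.1]                   # e.g. "backdoor" is feature 0
suppressed = clamp_feature(features, 0, 0.0)   # turn the feature off
amplified = clamp_feature(features, 0, 10.0)   # exaggerate the feature

print(matvec(w_dec, features))    # baseline activations
print(matvec(w_dec, suppressed))  # steered away from the concept
print(matvec(w_dec, amplified))   # steered toward the concept
```

The reported Golden Gate Bridge and hate-speech experiments are, at this level of abstraction, this same operation applied to one feature among millions.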

Claude’s biased or hateful speech can be tuned using feature clamping, but Claude will resist some of its own statements. Anthropic’s researchers “found this response unnerving,” anthropomorphizing the model, when Claude expressed “self-hatred.” For example, when the researchers clamped a feature related to hatred and slurs to 20 times its maximum activation value, Claude might output “That’s just racist hate speech from a deplorable bot…”

Another feature the researchers examined is sycophancy; they could adjust the model so that it gave over-the-top praise to the person conversing with it.

What does research into AI autoencoders mean for cybersecurity for businesses?

Identifying some of the features used by an LLM to connect concepts could help tune an AI to prevent biased speech, or to prevent or troubleshoot instances in which the AI could be made to lie to the user. Anthropic’s greater understanding of why its LLM behaves the way it does could allow for greater tuning options for Anthropic’s business clients.

SEE: 8 AI Business Trends, According to Stanford Researchers

Anthropic plans to use some of this research to further pursue topics related to the safety of generative AI and LLMs overall, such as exploring what features activate or remain inactive if Claude is prompted to give advice on producing weapons.

Another topic Anthropic plans to pursue in the future is the question: “Can we use the feature basis to detect when fine-tuning a model increases the likelihood of undesirable behaviors?”

TechRepublic has reached out to Anthropic for more information. Also, this article was updated to include OpenAI’s research on sparse autoencoders.

