PW Skills | Blog

Analyze Report: How to Write the Best Analytical Report (+ 6 Examples!)


Varun Saharawat is a seasoned professional in the fields of SEO and content writing. With a profound knowledge of the intricate aspects of these disciplines, Varun has established himself as a valuable asset in the world of digital marketing and online content creation.

Organizations analyze reports to improve performance by identifying areas of strength and weakness, understanding customer needs and preferences, optimizing business processes, and making data-driven decisions.


Analytical Report: Picture a heap of bricks scattered on the ground. Individually, they lack purpose until meticulously assembled into a cohesive structure—a house, perhaps?

In the realm of business intelligence, data serves as the fundamental building material, with a well-crafted data analysis report serving as the ultimate desired outcome.

However, if you’ve ever attempted to harness collected data and transform it into an insightful report, you understand the inherent challenges. Bridging the gap between raw, unprocessed data and a coherent narrative capable of informing actionable strategies is no simple feat.

Table of Contents

What Is an Analytical Report?

An analytical report serves as a crucial tool for stakeholders to make informed decisions and determine the most effective course of action. For instance, a Chief Marketing Officer (CMO) might refer to a business executive analytical report to identify specific issues caused by the pandemic before adapting an existing marketing strategy.

Marketers often utilize business intelligence tools to generate these informative reports. They vary in layout, ranging from text-heavy documents (such as those created in Google Docs with screenshots or Excel spreadsheets) to visually engaging presentations.

A quick search on Google reveals that many marketers opt for text-heavy documents with a formal writing style, often featuring a table of contents on the first page. In some instances, such as the analytical report example provided below, these reports may consist of spreadsheets filled with numbers and screenshots, providing a comprehensive overview of the data.

Also Read: The Best Business Intelligence Software in 2024

How to Write an Analytical Report?

Writing an analytical report requires careful planning, data analysis, and clear communication of findings. Here’s a step-by-step guide to help you write an effective analytical report:

Step 1: Define the Purpose:

  • Clearly define the objective and purpose of the report. Determine what problem or question the report aims to address.
  • Consider the audience for the report and what information they need to make informed decisions.

Step 2: Gather Data:

  • Identify relevant sources of data that can provide insights into the topic.
  • Collect data from primary sources (e.g., surveys, interviews) and secondary sources (e.g., research studies, industry reports).
  • Ensure that the data collected is accurate, reliable, and up-to-date.
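The "accurate, reliable, and up-to-date" check above can be automated with a few lines of code. The sketch below is illustrative only: the records, field layout, and cutoff date are hypothetical, but the pattern (filter out implausible or stale rows before analysis) applies to most collected datasets.

```python
import datetime

# Hypothetical survey records for illustration: (respondent_id, age, response_date).
records = [
    (1, 34, "2024-03-01"),
    (2, -5, "2024-03-02"),   # implausible age, likely a data-entry error
    (3, 41, "2019-06-15"),   # collected before the reporting period
]

CUTOFF = datetime.date(2024, 1, 1)  # assumed start of the reporting period

def is_valid(record):
    """Keep only plausible, up-to-date records."""
    _, age, date_str = record
    date = datetime.date.fromisoformat(date_str)
    return 0 < age < 120 and date >= CUTOFF

clean = [r for r in records if is_valid(r)]
print(clean)  # only record 1 survives both checks
```

Documenting which records were rejected, and why, also feeds directly into the methodology section of the final report.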

Step 3: Analyze the Data:

  • Use analytical tools and techniques to analyze the data effectively. This may include statistical analysis, qualitative coding, or data visualization.
  • Look for patterns, trends, correlations, and outliers in the data that may provide insights into the topic.
  • Consider the context in which the data was collected and any limitations that may affect the analysis.
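The outlier hunt described above can be sketched with a simple standard-deviation rule. The monthly revenue figures below are hypothetical, and two standard deviations is just one common heuristic, not a universal threshold:

```python
import statistics

# Hypothetical monthly revenue figures (in thousands) for illustration.
revenue = [112, 118, 121, 119, 87, 124, 128, 131, 126, 210, 129, 133]

mean = statistics.fmean(revenue)
stdev = statistics.stdev(revenue)

# Flag months whose revenue deviates more than two standard deviations
# from the mean, a simple and widely used outlier heuristic.
outliers = [(month, value) for month, value in enumerate(revenue, start=1)
            if abs(value - mean) > 2 * stdev]

print(outliers)  # the month-10 spike stands out and deserves a closer look
```

A flagged value is a prompt for investigation (a one-off promotion? a data error?), not an automatic conclusion, which is exactly the "consider the context" step above.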

Step 4: Organize the Information:

  • Structure the report in a logical and coherent manner. Divide the report into sections, such as an introduction, methodology, findings, analysis, and conclusion.
  • Ensure that each section flows logically into the next and that there is a clear progression of ideas throughout the report.

Step 5: Write the Introduction:

  • Start with an introduction that provides background information on the topic and outlines the scope of the report.
  • Clearly state the purpose and objectives of the analysis.
  • Provide context for the analysis and explain why it is relevant and important.

Step 6: Present the Methodology:

  • Describe the methods and techniques used to gather and analyze the data.
  • Explain any assumptions made and the rationale behind your approach.
  • Provide sufficient detail so that the reader can understand how the analysis was conducted.

Step 7: Present the Findings:

  • Present the findings of your analysis in a clear and concise manner.
  • Use charts, graphs, tables, and other visual aids to illustrate key points and make the data easier to understand.
  • Provide context for the findings and explain their significance.
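Even without a charting tool, findings can be made scannable with an aligned table. A minimal sketch using hypothetical channel figures (the column names and numbers are invented for illustration):

```python
# Channel-level results to present; hypothetical numbers for illustration.
findings = [
    ("Email",        5200, 312),
    ("Social media", 8900, 267),
    ("Paid search",  4100, 205),
]

# f-string format specs align the columns so readers can compare rows at a glance.
print(f"{'Channel':<14}{'Visits':>8}{'Conversions':>13}{'Rate':>8}")
for channel, visits, conversions in findings:
    rate = conversions / visits
    print(f"{channel:<14}{visits:>8}{conversions:>13}{rate:>8.1%}")
```

Showing the derived rate next to the raw counts is a small example of "providing context": email converts best here even though social media drives the most visits.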

Step 8: Interpret the Results:

  • Interpret the findings and analyze their implications.
  • Discuss any patterns, trends, or insights uncovered by the analysis and explain their significance.
  • Consider alternative explanations or interpretations of the data.

Step 9: Draw Conclusions:

  • Draw conclusions based on the analysis and findings.
  • Summarize the main points and insights of the report.
  • Reiterate the key takeaways and their implications for decision-making.

Step 10: Make Recommendations:

  • Finally, make recommendations based on your conclusions.
  • Suggest actionable steps that can be taken to address any issues identified or capitalize on any opportunities uncovered by the analysis.
  • Provide specific, practical recommendations that are feasible and aligned with the objectives of the report.

Step 11: Proofread and Revise:

  • Review the report for accuracy, clarity, and coherence.
  • Ensure that the writing is clear, concise, and free of errors.
  • Make any necessary revisions before finalizing the report.

Step 12: Write the Executive Summary:

  • Write a brief executive summary that provides an overview of the report’s key findings, conclusions, and recommendations.
  • This summary should be concise and easy to understand for busy stakeholders who may not have time to read the entire report.
  • Include only the most important information and avoid unnecessary details.

By following these steps, you can write an analytical report that effectively communicates your findings and insights to your audience.

Also Read: Analytics For BI: What is Business Intelligence and Analytics?

Analytical Report Examples

Analytical reports play a crucial role in providing valuable insights to businesses, enabling informed decision-making and strategic planning. Here are some examples of analytical reports along with detailed descriptions:

1) Executive Report Template:

An executive report serves as a comprehensive overview of a company’s performance, specifically tailored for C-suite executives. This report typically includes key metrics and KPIs that provide insights into the organization’s financial health and operational efficiency. For example, the Highlights tab may showcase total revenue for a specific period, along with the breakdown of transactions and associated costs. 

Additionally, the report may feature visualizations such as cost vs. revenue comparison charts, allowing executives to quickly identify trends and make data-driven decisions. With easy-to-understand graphs and charts, executives can expedite decision-making processes and adapt business strategies for effective cost containment and revenue growth.

2) Digital Marketing Report Template:

In today’s digital age, businesses rely heavily on digital marketing channels to reach their target audience and drive engagement. A digital marketing report provides insights into the performance of various marketing channels and campaigns, helping businesses optimize their marketing strategies for maximum impact. 

This report typically includes key metrics such as website traffic, conversion rates, and ROI for each marketing channel. By analyzing these KPIs, businesses can identify their best-performing channels and allocate resources accordingly. For example, the report may reveal that certain channels, such as social media or email marketing, yield higher response rates than others. Armed with this information, businesses can refine their digital marketing efforts to enhance the user experience, attract more customers, and ultimately drive growth.

3) Sales Performance Report:

A sales performance report provides a detailed analysis of sales activities, including revenue generated, sales volume, customer acquisition, and sales team performance. This report typically includes visualizations such as sales trend charts, pipeline analysis, and territory-wise sales comparisons. By analyzing these metrics, sales managers can identify top-performing products or services, track sales targets, and identify areas for improvement.

4) Customer Satisfaction Report:

A customer satisfaction report evaluates customer feedback and sentiment to measure overall satisfaction levels with products or services. This report may include metrics such as Net Promoter Score (NPS), customer survey results, and customer support ticket data. By analyzing these metrics, businesses can identify areas where they excel and areas where they need to improve to enhance the overall customer experience.
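The NPS figure mentioned above follows standard arithmetic: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A sketch with hypothetical survey responses:

```python
# Hypothetical 0-10 "How likely are you to recommend us?" responses.
scores = [10, 9, 9, 8, 7, 10, 6, 9, 3, 10, 8, 9, 10, 5, 9, 7, 10, 9, 8, 10]

promoters  = sum(1 for s in scores if s >= 9)   # 9-10: promoters
detractors = sum(1 for s in scores if s <= 6)   # 0-6: detractors

# NPS = percentage of promoters minus percentage of detractors
# (passives, 7-8, count toward the total but neither bucket).
nps = round(100 * (promoters - detractors) / len(scores))
print(nps)
```

In a real report the same calculation would run over the full survey export, typically segmented by customer cohort or time period.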

5) Financial Performance Report:

A financial performance report provides an in-depth analysis of an organization’s financial health, including revenue, expenses, profitability, and cash flow. This report may include financial ratios, trend analysis, and variance reports to assess performance against budgeted targets or industry benchmarks. By analyzing these metrics, financial managers can identify areas of strength and weakness and make strategic decisions to improve financial performance.
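One of the variance reports mentioned above, budget versus actual, can be computed along these lines. All line items and figures below are hypothetical:

```python
# Hypothetical budget vs. actual figures (in thousands) for illustration.
lines = {
    "Revenue":   {"budget": 1200, "actual": 1290},
    "Marketing": {"budget":  150, "actual":  180},
    "Salaries":  {"budget":  400, "actual":  395},
}

# Absolute and percentage variance against budget for each line item.
variance = {
    name: {
        "abs": figures["actual"] - figures["budget"],
        "pct": round(100 * (figures["actual"] - figures["budget"]) / figures["budget"], 1),
    }
    for name, figures in lines.items()
}

print(variance)
```

Percentage variance makes small and large line items comparable: a 20% marketing overrun may matter more to the narrative than a much larger absolute revenue swing.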

6) Inventory Management Report:

An inventory management report tracks inventory levels, turnover rates, stockouts, and inventory costs to optimize inventory management processes. This report may include metrics such as inventory turnover ratio, carrying costs, and stock-to-sales ratios. By analyzing these metrics, inventory managers can ensure optimal inventory levels, minimize stockouts, and reduce carrying costs to improve overall operational efficiency.
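The inventory turnover ratio mentioned above follows a standard formula: cost of goods sold divided by average inventory, with days-on-hand as a common companion metric. A sketch with hypothetical annual figures:

```python
# Hypothetical annual figures for illustration.
cost_of_goods_sold = 480_000
opening_inventory  = 70_000
closing_inventory  = 90_000

average_inventory = (opening_inventory + closing_inventory) / 2
turnover_ratio = cost_of_goods_sold / average_inventory  # turns per year
days_on_hand = 365 / turnover_ratio                      # average days stock sits

print(turnover_ratio, round(days_on_hand, 1))
```

Whether six turns a year is healthy depends on the industry benchmark, which is why the report should present the ratio alongside a comparison point rather than in isolation.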

7) Employee Performance Report:

An employee performance report evaluates individual and team performance based on key performance indicators (KPIs) such as sales targets, customer satisfaction scores, productivity metrics, and attendance records. This report may include visualizations such as performance scorecards, heatmaps, and trend analysis charts to identify top performers, areas for improvement, and training needs.

Also Check: Analytics & Insights: The Difference Between Data, Analytics, and Insights

Why Are Analytical Reports Important?

Analytical reports are important for several reasons:

  • Informed Decision Making: Analytical reports provide valuable insights and data-driven analysis that enable businesses to make informed decisions. By presenting relevant information in a structured format, these reports help stakeholders understand trends, identify patterns, and evaluate potential courses of action.
  • Problem Solving: Analytical reports help organizations identify and address challenges or issues within their operations. Whether it’s identifying inefficiencies in processes, addressing customer complaints, or mitigating risks, these reports provide a framework for problem-solving and decision-making.
  • Business Opportunities: Analytical reports can uncover new business opportunities by analyzing market trends, customer behavior, and competitor activities. By identifying emerging trends or unmet customer needs, businesses can capitalize on opportunities for growth and innovation.
  • Performance Evaluation: Analytical reports are instrumental in evaluating the performance of various aspects of a business, such as sales, marketing campaigns, and financial metrics. By tracking key performance indicators (KPIs) and metrics, organizations can assess their progress towards goals and objectives.
  • Accountability and Transparency: Analytical reports promote accountability and transparency within an organization by providing objective data and analysis. By sharing insights and findings with stakeholders, businesses can foster trust and confidence in their decision-making processes.

Overall, analytical reports serve as valuable tools for businesses to gain insights, solve problems, identify opportunities, evaluate performance, and enhance decision-making processes.

Types of Analytical Reports

  • Financial Analysis Reports: These reports analyze the financial performance of an organization, including revenue, expenses, profitability, and cash flow. They help stakeholders understand the financial health of the business and make informed decisions about investments, budgeting, and strategic planning.
  • Market Research Reports: Market research reports analyze market trends, consumer behavior, competitive landscape, and other factors affecting a particular industry or market segment. They provide valuable insights for businesses looking to launch new products, enter new markets, or refine their marketing strategies.
  • Performance Analysis Reports: These reports evaluate the performance of various aspects of an organization, such as sales performance, operational efficiency, employee productivity, and customer satisfaction. They help identify areas of improvement and inform decision-making to enhance overall performance.
  • Risk Assessment Reports: Risk assessment reports analyze potential risks and vulnerabilities within an organization, such as financial risks, operational risks, cybersecurity risks, and regulatory compliance risks. They help stakeholders understand and mitigate risks to protect the organization’s assets and reputation.
  • SWOT Analysis Reports: SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis reports assess the internal strengths and weaknesses of an organization, as well as external opportunities and threats in the business environment. They provide a comprehensive overview of the organization’s strategic position and guide decision-making.
  • Customer Analysis Reports: Customer analysis reports examine customer demographics, purchasing behavior, satisfaction levels, and preferences. They help businesses understand their target audience better, tailor products and services to meet customer needs, and improve customer retention and loyalty.
  • Operational Efficiency Reports: These reports evaluate the efficiency and effectiveness of operational processes within an organization, such as production, logistics, and supply chain management. They identify bottlenecks, inefficiencies, and areas for improvement to optimize operations and reduce costs.
  • Compliance and Regulatory Reports: Compliance and regulatory reports assess an organization’s adherence to industry regulations, legal requirements, and internal policies. They ensure that the organization operates ethically and legally, mitigating the risk of fines, penalties, and reputational damage.


Analytical Report FAQs

What is an analytical report?

An analytical report is a document that presents data, analysis, and insights on a specific topic or problem. It provides a detailed examination of information to support decision-making and problem-solving within an organization.

Why are analytical reports important?

Analytical reports are important because they help organizations make informed decisions, solve problems, and identify opportunities for improvement. By analyzing data and providing insights, these reports enable stakeholders to understand trends, patterns, and relationships within their business operations.

What types of data are typically included in analytical reports?

Analytical reports may include various types of data, such as financial data, sales data, customer feedback, market research, and operational metrics. The specific data included depends on the purpose of the report and the information needed to address the topic or problem being analyzed.

How are analytical reports different from other types of reports?

Analytical reports differ from other types of reports, such as descriptive reports or summary reports, in that they go beyond presenting raw data or summarizing information. Instead, analytical reports analyze data in-depth, draw conclusions, and provide recommendations based on the analysis.

What are the key components of an analytical report?

Key components of an analytical report typically include an introduction, methodology, findings, analysis, conclusions, and recommendations. The introduction provides background information on the topic, the methodology outlines the approach used to analyze the data, the findings present the results of the analysis, the analysis interprets the findings, and the conclusions and recommendations offer insights and actionable steps.


8.5 Writing Process: Creating an Analytical Report

Learning Outcomes

By the end of this section, you will be able to:

  • Identify the elements of the rhetorical situation for your report.
  • Find and focus a topic to write about.
  • Gather and analyze information from appropriate sources.
  • Distinguish among different kinds of evidence.
  • Draft a thesis and create an organizational plan.
  • Compose a report that develops ideas and integrates evidence from sources.
  • Give and act on productive feedback to works in progress.

You might think that writing comes easily to experienced writers—that they draft stories and college papers all at once, sitting down at the computer and having sentences flow from their fingers like water from a faucet. In reality, most writers engage in a recursive process, pushing forward, stepping back, and repeating steps multiple times as their ideas develop and change. In broad strokes, the steps most writers go through are these:

  • Planning and Organization . You will have an easier time drafting if you devote time at the beginning to consider the rhetorical situation for your report, understand your assignment, gather ideas and information, draft a thesis statement, and create an organizational plan.
  • Drafting . When you have an idea of what you want to say and the order in which you want to say it, you’re ready to draft. As much as possible, keep going until you have a complete first draft of your report, resisting the urge to go back and rewrite. Save that for after you have completed a first draft.
  • Review . Now is the time to get feedback from others, whether from your instructor, your classmates, a tutor in the writing center, your roommate, someone in your family, or someone else you trust to read your writing critically and give you honest feedback.
  • Revising . With feedback on your draft, you are ready to revise. You may need to return to an earlier step and make large-scale revisions that involve planning, organizing, and rewriting, or you may need to work mostly on ensuring that your sentences are clear and correct.

Considering the Rhetorical Situation

Like other kinds of writing projects, a report starts with assessing the rhetorical situation —the circumstance in which a writer communicates with an audience of readers about a subject. As the writer of a report, you make choices based on the purpose of your writing, the audience who will read it, the genre of the report, and the expectations of the community and culture in which you are working. A graphic organizer like Table 8.1 can help you begin.

Summary of Assignment

Write an analytical report on a topic that interests you and that you want to know more about. The topic can be contemporary or historical, but it must be one that you can analyze and support with evidence from sources.

The following questions can help you think about a topic suitable for analysis:

  • Why or how did ________ happen?
  • What are the results or effects of ________?
  • Is ________ a problem? If so, why?
  • What are examples of ________ or reasons for ________?
  • How does ________ compare to or contrast with other issues, concerns, or things?

Consult and cite three to five reliable sources. The sources do not have to be scholarly for this assignment, but they must be credible, trustworthy, and unbiased. Possible sources include academic journals, newspapers, magazines, reputable websites, government publications or agency websites, and visual sources such as TED Talks. You may also use the results of an experiment or survey, and you may want to conduct interviews.

Consider whether visuals and media will enhance your report. Can you present data you collect visually? Would a map, photograph, chart, or other graphic provide interesting and relevant support? Would video or audio allow you to present evidence that you would otherwise need to describe in words?

Another Lens. To gain another analytic view on the topic of your report, consider different people affected by it. Say, for example, that you have decided to report on recent high school graduates and the effect of the COVID-19 pandemic on the final months of their senior year. If you are a recent high school graduate, you might naturally gravitate toward writing about yourself and your peers. But you might also consider the adults in the lives of recent high school graduates—for example, teachers, parents, or grandparents—and how they view the same period. Or you might consider the same topic from the perspective of a college admissions department looking at their incoming freshman class.

Quick Launch: Finding and Focusing a Topic

Coming up with a topic for a report can be daunting because you can report on nearly anything. The topic can easily get too broad, trapping you in the realm of generalizations. The trick is to find a topic that interests you and focus on an angle you can analyze in order to say something significant about it. You can use a graphic organizer to generate ideas, or you can use a concept map similar to the one featured in Writing Process: Thinking Critically About a “Text.”

Asking the Journalist’s Questions

One way to generate ideas about a topic is to ask the five W (and one H) questions, also called the journalist’s questions : Who? What? When? Where? Why? How? Try answering the following questions to explore a topic:

Who was or is involved in ________?

What happened/is happening with ________? What were/are the results of ________?

When did ________ happen? Is ________ happening now?

Where did ________ happen, or where is ________ happening?

Why did ________ happen, or why is ________ happening now?

How did ________ happen?

For example, imagine that you have decided to write your analytical report on the effect of the COVID-19 shutdown on high-school students by interviewing students on your college campus. Your questions and answers might look something like those in Table 8.2:

Asking Focused Questions

Another way to find a topic is to ask focused questions about it. For example, you might ask the following questions about the effect of the 2020 pandemic shutdown on recent high school graduates:

  • How did the shutdown change students’ feelings about their senior year?
  • How did the shutdown affect their decisions about post-graduation plans, such as work or going to college?
  • How did the shutdown affect their academic performance in high school or in college?
  • How did/do they feel about continuing their education?
  • How did the shutdown affect their social relationships?

Any of these questions might be developed into a thesis for an analytical report. Table 8.3 shows more examples of broad topics and focusing questions.

Gathering Information

Because they are based on information and evidence, most analytical reports require you to do at least some research. Depending on your assignment, you may be able to find reliable information online, or you may need to do primary research by conducting an experiment, a survey, or interviews. For example, if you live among students in their late teens and early twenties, consider what they can tell you about their lives that you might be able to analyze. Returning to or graduating from high school, starting college, or returning to college in the midst of a global pandemic has provided them, for better or worse, with educational and social experiences that are shared widely by people their age and very different from the experiences older adults had at the same age.

Some report assignments will require you to do formal research, an activity that involves finding sources and evaluating them for reliability, reading them carefully, taking notes, and citing all words you quote and ideas you borrow. See Research Process: Accessing and Recording Information and Annotated Bibliography: Gathering, Evaluating, and Documenting Sources for detailed instruction on conducting research.

Whether you conduct in-depth research or not, keep track of the ideas that come to you and the information you learn. You can write or dictate notes using an app on your phone or computer, or you can jot notes in a journal if you prefer pen and paper. Then, when you are ready to begin organizing your report, you will have a record of your thoughts and information. Always track the sources of information you gather, whether from printed or digital material or from a person you interviewed, so that you can return to the sources if you need more information. And always credit the sources in your report.

Kinds of Evidence

Depending on your assignment and the topic of your report, certain kinds of evidence may be more effective than others. Other kinds of evidence may even be required. As a general rule, choose evidence that is rooted in verifiable facts and experience. In addition, select the evidence that best supports the topic and your approach to the topic, be sure the evidence meets your instructor’s requirements, and cite any evidence you use that comes from a source. The following list contains different kinds of frequently used evidence and an example of each.

Definition : An explanation of a key word, idea, or concept.

The U.S. Census Bureau refers to a “young adult” as a person between 18 and 34 years old.

Example : An illustration of an idea or concept.

The college experience in the fall of 2020 was starkly different from that of previous years. Students who lived in residence halls were assigned to small pods. On-campus dining services were limited. Classes were small and physically distanced or conducted online. Parties were banned.

Expert opinion : A statement by a professional in the field whose opinion is respected.

According to Louise Aronson, MD, geriatrician and author of Elderhood, people over the age of 65 are the happiest of any age group, reporting “less stress, depression, worry, and anger, and more enjoyment, happiness, and satisfaction” (255).

Fact : Information that can be proven correct or accurate.

According to data collected by the NCAA, the academic success of Division I college athletes between 2015 and 2019 was consistently high (Hosick).

Interview : An in-person, phone, or remote conversation that involves an interviewer posing questions to another person or people.

During our interview, I asked Betty about living without a cell phone during the pandemic. She said that before the pandemic, she hadn’t needed a cell phone in her daily activities, but she soon realized that she, and people like her, were increasingly at a disadvantage.

Quotation : The exact words of an author or a speaker.

In response to whether she thought she needed a cell phone, Betty said, “I got along just fine without a cell phone when I could go everywhere in person. The shift to needing a phone came suddenly, and I don’t have extra money in my budget to get one.”

Statistics : A numerical fact or item of data.

The Pew Research Center reported that approximately 25 percent of Hispanic Americans and 17 percent of Black Americans relied on smartphones for online access, compared with 12 percent of White people.

Survey : A structured interview in which respondents (the people who answer the survey questions) are all asked the same questions, either in person or through print or electronic means, and their answers tabulated and interpreted. Surveys discover attitudes, beliefs, or habits of the general public or segments of the population.

A survey of 3,000 mobile phone users in October 2020 showed that 54 percent of respondents used their phones for messaging, while 40 percent used their phones for calls (Steele).

Visuals : Graphs, figures, tables, photographs and other images, diagrams, charts, maps, videos, and audio recordings, among others.

Thesis and Organization

Drafting a Thesis

When you have a grasp of your topic, move on to the next phase: drafting a thesis. The thesis is the central idea that you will explore and support in your report; all paragraphs in your report should relate to it. In an essay-style analytical report, you will likely express this main idea in a thesis statement of one or two sentences toward the end of the introduction.

For example, if you found that the academic performance of student athletes was higher than that of non-athletes, you might write the following thesis statement:

Although a common stereotype is that college athletes barely pass their classes, an analysis of athletes’ academic performance indicates that athletes drop fewer classes, earn higher grades, and are more likely to be on track to graduate in four years when compared with their non-athlete peers.

The thesis statement often previews the organization of your writing. For example, in his report on the U.S. response to the COVID-19 pandemic in 2020, Trevor Garcia wrote the following thesis statement, which detailed the central idea of his report:

An examination of the U.S. response shows that a reduction of experts in key positions and programs, inaction that led to equipment shortages, and inconsistent policies were three major causes of the spread of the virus and the resulting deaths.

After you draft a thesis statement, ask these questions, and examine your thesis as you answer them. Revise your draft as needed.

  • Is it interesting? A thesis for a report should answer a question that is worth asking and piques curiosity.
  • Is it precise and specific? If you are interested in reducing pollution in a nearby lake, explain how to stop the zebra mussel infestation or reduce the frequent algae blooms.
  • Is it manageable? Try to split the difference between having too much information and not having enough.

Organizing Your Ideas

As a next step, organize the points you want to make in your report and the evidence to support them. Use an outline, a diagram, or another organizational tool, such as Table 8.4.

Drafting an Analytical Report

With a tentative thesis, an organization plan, and evidence, you are ready to begin drafting. For this assignment, you will report information, analyze it, and draw conclusions about the cause of something, the effect of something, or the similarities and differences between two different things.

Introduction

Some students write the introduction first; others save it for last. Whenever you choose to write the introduction, use it to draw readers into your report. Make the topic of your report clear, and be concise and sincere. End the introduction with your thesis statement. Depending on your topic and the type of report, you can write an effective introduction in several ways. Opening a report with an overview is a tried-and-true strategy, as shown in the following example on the U.S. response to COVID-19 by Trevor Garcia. Notice how he opens the introduction with statistics and a comparison and follows it with a question that leads to the thesis statement (underlined).

student sample text With more than 83 million cases and 1.8 million deaths at the end of 2020, COVID-19 has turned the world upside down. By the end of 2020, the United States led the world in the number of cases, at more than 20 million infections and nearly 350,000 deaths. In comparison, the second-highest number of cases was in India, which at the end of 2020 had less than half the number of COVID-19 cases despite having a population four times greater than the U.S. (“COVID-19 Coronavirus Pandemic,” 2021). How did the United States come to have the world’s worst record in this pandemic? underline An examination of the U.S. response shows that a reduction of experts in key positions and programs, inaction that led to equipment shortages, and inconsistent policies were three major causes of the spread of the virus and the resulting deaths end underline . end student sample text

For a less formal report, you might want to open with a question, quotation, or brief story. The following example opens with an anecdote that leads to the thesis statement (underlined).

student sample text Betty stood outside the salon, wondering how to get in. It was June of 2020, and the door was locked. A sign posted on the door provided a phone number for her to call to be let in, but at 81, Betty had lived her life without a cell phone. Betty’s day-to-day life had been hard during the pandemic, but she had planned for this haircut and was looking forward to it; she had a mask on and hand sanitizer in her car. Now she couldn’t get in the door, and she was discouraged. In that moment, Betty realized how much Americans’ dependence on cell phones had grown in the months since the pandemic began. underline Betty and thousands of other senior citizens who could not afford cell phones or did not have the technological skills and support they needed were being left behind in a society that was increasingly reliant on technology end underline . end student sample text

Body Paragraphs: Point, Evidence, Analysis

Use the body paragraphs of your report to present evidence that supports your thesis. A reliable pattern to keep in mind for developing the body paragraphs of a report is point, evidence, and analysis:

  • The point is the central idea of the paragraph, usually given in a topic sentence stated in your own words at or toward the beginning of the paragraph. Each topic sentence should relate to the thesis.
  • The evidence you provide develops the paragraph and supports the point made in the topic sentence. Include details, examples, quotations, paraphrases, and summaries from sources if you conducted formal research. Synthesize the evidence you include by showing in your sentences the connections between sources.
  • The analysis comes at the end of the paragraph. In your own words, draw a conclusion about the evidence you have provided and how it relates to the topic sentence.

The paragraph below illustrates the point, evidence, and analysis pattern. Drawn from a report about concussions among football players, the paragraph opens with a topic sentence about the NCAA and NFL and their responses to studies about concussions. The paragraph is developed with evidence from three sources. It concludes with a statement about helmets and players’ safety.

student sample text The NCAA and NFL have taken steps forward and backward to respond to studies about the danger of concussions among players. Responding to the deaths of athletes, documented brain damage, lawsuits, and public outcry (Buckley et al., 2017), the NCAA instituted protocols to reduce potentially dangerous hits during football games and to diagnose traumatic head injuries more quickly and effectively. Still, it has allowed players to wear more than one style of helmet during a season, raising the risk of injury because of imperfect fit. At the professional level, the NFL developed a helmet-rating system in 2011 in an effort to reduce concussions, but it continued to allow players to wear helmets with a wide range of safety ratings. The NFL’s decision created an opportunity for researchers to look at the relationship between helmet safety ratings and concussions. Cocello et al. (2016) reported that players who wore helmets with a lower safety rating had more concussions than players who wore helmets with a higher safety rating, and they concluded that safer helmets are a key factor in reducing concussions. end student sample text

Developing Paragraph Content

In the body paragraphs of your report, you will likely use examples, draw comparisons, show contrasts, or analyze causes and effects to develop your topic.

Paragraphs developed with examples are common in reports. The paragraph below, adapted from a report by student John Zwick on the mental health of soldiers deployed during wartime, draws examples from three sources.

student sample text Throughout the Vietnam War, military leaders claimed that the mental health of soldiers was stable and that men who suffered from combat fatigue, now known as PTSD, were getting the help they needed. For example, the New York Times (1966) quoted military leaders who claimed that mental fatigue among enlisted men had “virtually ceased to be a problem,” occurring at a rate far below that of World War II. Ayres (1969) reported that Brigadier General Spurgeon Neel, chief American medical officer in Vietnam, explained that soldiers experiencing combat fatigue were admitted to the psychiatric ward, sedated for up to 36 hours, and given a counseling session with a doctor who reassured them that the rest was well deserved and that they were ready to return to their units. Although experts outside the military saw profound damage to soldiers’ psyches when they returned home (Halloran, 1970), the military stayed the course, treating acute cases expediently and showing little concern for the cumulative effect of combat stress on individual soldiers. end student sample text

When you analyze causes and effects , you explain the reasons that certain things happened and/or their results. The report by Trevor Garcia on the U.S. response to the COVID-19 pandemic in 2020 is an example: his report examines the reasons the United States failed to control the coronavirus. The paragraph below, adapted from another student’s report written for an environmental policy course, explains the effect of white settlers’ views of forest management on New England.

student sample text The early colonists’ European ideas about forest management dramatically changed the New England landscape. White settlers saw the New World as virgin, unused land, even though indigenous people had been drawing on its resources for generations by using fire subtly to improve hunting, employing construction techniques that left ancient trees intact, and farming small, efficient fields that left the surrounding landscape largely unaltered. White settlers’ desire to develop wood-built and wood-burning homesteads surrounded by large farm fields led to forestry practices and techniques that resulted in the removal of old-growth trees. These practices defined the way the forests look today. end student sample text

Compare and contrast paragraphs are useful when you wish to examine similarities and differences. You can use both comparison and contrast in a single paragraph, or you can use one or the other. The paragraph below, adapted from a student report on the rise of populist politicians, compares the rhetorical styles of populist politicians Huey Long and Donald Trump.

student sample text A key similarity among populist politicians is their rejection of carefully crafted sound bites and erudite vocabulary typically associated with candidates for high office. Huey Long and Donald Trump are two examples. When he ran for president, Long captured attention through his wild gesticulations on almost every word, dramatically varying volume, and heavily accented, folksy expressions, such as “The only way to be able to feed the balance of the people is to make that man come back and bring back some of that grub that he ain’t got no business with!” In addition, Long’s down-home persona made him a credible voice to represent the common people against the country’s rich, and his buffoonish style allowed him to express his radical ideas without sounding anti-communist alarm bells. Similarly, Donald Trump chose to speak informally in his campaign appearances, but the persona he projected was that of a fast-talking, domineering salesman. His frequent use of personal anecdotes, rhetorical questions, brief asides, jokes, personal attacks, and false claims made his speeches disjointed, but they gave the feeling of a running conversation between him and his audience. For example, in a 2015 speech, Trump said, “They just built a hotel in Syria. Can you believe this? They built a hotel. When I have to build a hotel, I pay interest. They don’t have to pay interest, because they took the oil that, when we left Iraq, I said we should’ve taken” (“Our Country Needs” 2020). While very different in substance, Long and Trump adopted similar styles that positioned them as the antithesis of typical politicians and their worldviews. end student sample text

Conclusion

The conclusion should draw the threads of your report together and make its significance clear to readers. You may wish to review the introduction, restate the thesis, recommend a course of action, point to the future, or use some combination of these. Whichever way you approach it, the conclusion should not head in a new direction. The following example is the conclusion from a student’s report on the effect of a book about environmental movements in the United States.

student sample text Since its publication in 1949, environmental activists of various movements have found wisdom and inspiration in Aldo Leopold’s A Sand County Almanac. These audiences included Leopold’s conservationist contemporaries, environmentalists of the 1960s and 1970s, and the environmental justice activists who rose in the 1980s and continue to make their voices heard today. These audiences have read the work differently: conservationists looked to the author as a leader, environmentalists applied his wisdom to their movement, and environmental justice advocates have pointed out the flaws in Leopold’s thinking. Even so, like those before them, environmental justice activists recognize the book’s value as a testament to taking the long view and eliminating biases that may cloud an objective assessment of humanity’s interdependent relationship with the environment. end student sample text

Citing Sources

You must cite the sources of information and data included in your report. Citations must appear in both the text and a bibliography at the end of the report.

The sample paragraphs in the previous section include examples of in-text citation using APA documentation style. Trevor Garcia’s report on the U.S. response to COVID-19 in 2020 also uses APA documentation style for citations in the text of the report and the list of references at the end. Your instructor may require another documentation style, such as MLA or Chicago.

Peer Review: Getting Feedback from Readers

You will likely engage in peer review with other students in your class by sharing drafts and providing feedback to help spot strengths and weaknesses in your reports. For peer review within a class, your instructor may provide assignment-specific questions or a form for you to complete as you work together.

If you have a writing center on your campus, it is well worth your time to make an online or in-person appointment with a tutor. You’ll receive valuable feedback and improve your ability to review not only your report but your overall writing.

Another way to receive feedback on your report is to ask a friend or family member to read your draft. Provide a list of questions or a form such as the one in Table 8.5 for them to complete as they read.

Revising: Using Reviewers’ Responses to Revise your Work

When you receive comments from readers, including your instructor, read each comment carefully to understand what is being asked. Try not to get defensive, even though this response is completely natural. Remember that readers are like coaches who want you to succeed. They are looking at your writing from outside your own head, and they can identify strengths and weaknesses that you may not have noticed. Keep track of the strengths and weaknesses your readers point out. Pay special attention to those that more than one reader identifies, and use this information to improve your report and later assignments.

As you analyze each response, be open to suggestions for improvement, and be willing to make significant revisions to improve your writing. Perhaps you need to revise your thesis statement to better reflect the content of your draft. Maybe you need to return to your sources to better understand a point you’re trying to make in order to develop a paragraph more fully. Perhaps you need to rethink the organization, move paragraphs around, and add transition sentences.

Below is an early draft of part of Trevor Garcia’s report with comments from a peer reviewer:

student sample text To truly understand what happened, it’s important first to look back to the years leading up to the pandemic. Epidemiologists and public health officials had long known that a global pandemic was possible. In 2016, the U.S. National Security Council (NSC) published a 69-page document with the intimidating title Playbook for Early Response to High-Consequence Emerging Infectious Disease Threats and Biological Incidents . The document’s two sections address responses to “emerging disease threats that start or are circulating in another country but not yet confirmed within U.S. territorial borders” and to “emerging disease threats within our nation’s borders.” On 13 January 2017, the joint Obama-Trump transition teams performed a pandemic preparedness exercise; however, the playbook was never adopted by the incoming administration. end student sample text

annotated text Peer Review Comment: Do the words in quotation marks need to be a direct quotation? It seems like a paraphrase would work here. end annotated text

annotated text Peer Review Comment: I’m getting lost in the details about the playbook. What’s the Obama-Trump transition team? end annotated text

student sample text In February 2018, the administration began to cut funding for the Prevention and Public Health Fund at the Centers for Disease Control and Prevention; cuts to other health agencies continued throughout 2018, with funds diverted to unrelated projects such as housing for detained immigrant children. end student sample text

annotated text Peer Review Comment: This paragraph has only one sentence, and it’s more like an example. It needs a topic sentence and more development. end annotated text

student sample text Three months later, Luciana Borio, director of medical and biodefense preparedness at the NSC, spoke at a symposium marking the centennial of the 1918 influenza pandemic. “The threat of pandemic flu is the number one health security concern,” she said. “Are we ready to respond? I fear the answer is no.” end student sample text

annotated text Peer Review Comment: This paragraph is very short and a lot like the previous paragraph in that it’s a single example. It needs a topic sentence. Maybe you can combine them? end annotated text

annotated text Peer Review Comment: Be sure to cite the quotation. end annotated text

Reading these comments and those of others, Trevor decided to combine the three short paragraphs into one paragraph focusing on the fact that the United States knew a pandemic was possible but was unprepared for it. He developed the paragraph, using the short paragraphs as evidence and connecting the sentences and evidence with transitional words and phrases. Finally, he added in-text citations in APA documentation style to credit his sources. The revised paragraph is below:

student sample text Epidemiologists and public health officials in the United States had long known that a global pandemic was possible. In 2016, the National Security Council (NSC) published Playbook for Early Response to High-Consequence Emerging Infectious Disease Threats and Biological Incidents , a 69-page document on responding to diseases spreading within and outside of the United States. On January 13, 2017, the joint transition teams of outgoing president Barack Obama and then president-elect Donald Trump performed a pandemic preparedness exercise based on the playbook; however, it was never adopted by the incoming administration (Goodman & Schulkin, 2020). A year later, in February 2018, the Trump administration began to cut funding for the Prevention and Public Health Fund at the Centers for Disease Control and Prevention, leaving key positions unfilled. Other individuals who were fired or resigned in 2018 were the homeland security adviser, whose portfolio included global pandemics; the director for medical and biodefense preparedness; and the top official in charge of a pandemic response. None of them were replaced, leaving the White House with no senior person who had experience in public health (Goodman & Schulkin, 2020). Experts voiced concerns, among them Luciana Borio, director of medical and biodefense preparedness at the NSC, who spoke at a symposium marking the centennial of the 1918 influenza pandemic in May 2018: “The threat of pandemic flu is the number one health security concern,” she said. “Are we ready to respond? I fear the answer is no” (Sun, 2018, final para.). end student sample text

A final word on working with reviewers’ comments: as you consider your readers’ suggestions, remember, too, that you remain the author. You are free to disregard suggestions that you think will not improve your writing. If you choose to disregard comments from your instructor, consider submitting a note explaining your reasons with the final draft of your report.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Section URL: https://openstax.org/books/writing-guide/pages/8-5-writing-process-creating-an-analytical-report

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License. The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Purdue Online Writing Lab (Purdue OWL®), College of Liberal Arts


Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved.

Analysis is a type of primary research that involves finding and interpreting patterns in data, classifying those patterns, and generalizing the results. It is useful when looking at actions, events, or occurrences in different texts, media, or publications. Analysis can usually be done without considering most of the ethical issues discussed in the overview, because you are not working with people but rather with publicly accessible documents. Analysis can be done on new documents or performed on raw data that you yourself have collected.

Here are several examples of analysis:

  • Recording commercials on three major television networks and analyzing how race and gender are portrayed within the commercials to draw conclusions about representation.
  • Analyzing the historical trends in public laws by looking at the records at a local courthouse.
  • Analyzing topics of discussion in chat rooms for patterns based on gender and age.

Analysis research involves several steps:

  • Finding and collecting documents.
  • Specifying criteria or patterns that you are looking for.
  • Analyzing documents for patterns, noting the number of occurrences or other factors.
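To make the counting step concrete, here is a minimal sketch in Python of how the last two steps might be automated. Everything in it is hypothetical: the folder of `.txt` transcripts and the `CRITERIA` word lists stand in for whatever documents and criteria your own study defines (here, a crude tally of gender- and age-related terms).

```python
import re
from collections import Counter
from pathlib import Path

# Step 2: specify criteria as named patterns (hypothetical word lists).
CRITERIA = {
    "gender_terms": re.compile(r"\b(he|she|his|her|man|woman)\b", re.IGNORECASE),
    "age_terms": re.compile(r"\b(young|old|teen|senior)\b", re.IGNORECASE),
}

def count_patterns(text: str) -> Counter:
    """Count how often each criterion occurs in one document."""
    counts = Counter()
    for name, pattern in CRITERIA.items():
        counts[name] = len(pattern.findall(text))
    return counts

def analyze_documents(folder: str) -> Counter:
    """Step 1: collect documents; step 3: tally occurrences across all of them."""
    totals = Counter()
    for doc in Path(folder).glob("*.txt"):
        totals += count_patterns(doc.read_text(encoding="utf-8"))
    return totals
```

A real study would use carefully justified categories rather than a quick word list, but the shape of the work is the same: defined criteria applied uniformly to every document, with occurrence counts you can then interpret.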

Research Paper Analysis: How to Analyze a Research Article + Example

Why might you need to analyze research? First of all, when you analyze a research article, you begin to understand your assigned reading better. It is also the first step toward learning how to write your own research articles and literature reviews. However, if you have never written a research paper before, it may be difficult for you to analyze one. After all, you may not know what criteria to use to evaluate it. But don’t panic! We will help you figure it out!

In this article, our team has explained how to analyze research papers quickly and effectively. At the end, you will also find a research analysis paper example to see how everything works in practice.

  • 🔤 Research Analysis Definition
  • 📊 How to Analyze a Research Article
  • ✍️ How to Write a Research Analysis
  • 📝 Analysis Example
  • 🔎 More Examples
  • 🔗 References

🔤 Research Paper Analysis: What Is It?

A research paper analysis is an academic writing assignment in which you analyze a scholarly article’s methodology, data, and findings. In essence, “to analyze” means to break something down into components and assess each of them individually and in relation to each other. The goal of an analysis is to gain a deeper understanding of a subject. So, when you analyze a research article, you dissect it into elements like data sources, research methods, and results and evaluate how they contribute to the study’s strengths and weaknesses.

📋 Research Analysis Format

A research analysis paper has a pretty straightforward structure. Check it out below!

Research articles usually include the following sections: introduction, methods, results, and discussion. In the following paragraphs, we will discuss how to analyze a scientific article with a focus on each of its parts.

This image shows the main sections of a research article.

How to Analyze a Research Paper: Purpose

The purpose of the study is usually outlined in the introductory section of the article. Analyzing the research paper’s objectives is critical to establish the context for the rest of your analysis.

When analyzing the research aim, you should evaluate whether it was justified for the researchers to conduct the study. In other words, you should assess whether their research question was significant and whether it arose from existing literature on the topic.

Here are some questions that may help you analyze a research paper’s purpose:

  • Why was the research carried out?
  • What gaps does it try to fill, or what controversies to settle?
  • How does the study contribute to its field?
  • Do you agree with the author’s justification for approaching this particular question in this way?

How to Analyze a Paper: Methods

When analyzing the methodology section, you should indicate the study’s research design (qualitative, quantitative, or mixed) and the methods used (for example, an experiment, case study, correlational research, or survey). After that, you should assess whether these methods suit the research purpose. In other words, do the chosen methods allow scholars to answer their research questions within the scope of their study?

For example, if scholars wanted to study US students’ average satisfaction with their higher education experience, they could conduct a quantitative survey. However, if they wanted to gain an in-depth understanding of the factors influencing US students’ satisfaction with higher education, qualitative interviews would be more appropriate.

When analyzing methods, you should also look at the research sample. Did the scholars use randomization to select study participants? Was the sample big enough for the results to be generalizable to a larger population?

You can also answer the following questions in your methodology analysis:

  • Is the methodology valid? In other words, did the researchers use methods that accurately measure the variables of interest?
  • Is the research methodology reliable? A research method is reliable if it can produce stable and consistent results under the same circumstances.
  • Is the study biased in any way?
  • What are the limitations of the chosen methodology?

How to Analyze Research Articles’ Results

You should start the analysis of the article results by carefully reading the tables, figures, and text. Check whether the findings correspond to the initial research purpose. See whether the results answered the author’s research questions or supported the hypotheses stated in the introduction.

To analyze the results section effectively, answer the following questions:

  • What are the major findings of the study?
  • Did the author present the results clearly and unambiguously?
  • Are the findings statistically significant?
  • Does the author provide sufficient information on the validity and reliability of the results?
  • Have you noticed any trends or patterns in the data that the author did not mention?

How to Analyze Research: Discussion

Finally, you should analyze the authors’ interpretation of results and its connection with research objectives. Examine what conclusions the authors drew from their study and whether these conclusions answer the original question.

You should also pay attention to how the authors used findings to support their conclusions. For example, you can reflect on why their findings support that particular inference and not another one. Moreover, more than one conclusion can sometimes be made based on the same set of results. If that’s the case with your article, you should analyze whether the authors addressed other interpretations of their findings.

Here are some useful questions you can use to analyze the discussion section:

  • What findings did the authors use to support their conclusions?
  • How do the researchers’ conclusions compare to other studies’ findings?
  • How does this study contribute to its field?
  • What future research directions do the authors suggest?
  • What additional insights can you share regarding this article? For example, do you agree with the results? What other questions could the researchers have answered?

This image shows how to analyze a research article.

Now, you know how to analyze an article that presents research findings. However, it’s just a part of the work you have to do to complete your paper. So, it’s time to learn how to write a research analysis! Check out the steps below!

1. Introduce the Article

As with most academic assignments, you should start your research article analysis with an introduction. Here’s what it should include:

  • The article’s publication details. Specify the title of the scholarly work you are analyzing, its authors, and publication date. Remember to enclose the article’s title in quotation marks and write it in title case.
  • The article’s main point. State what the paper is about. What did the authors study, and what was their major finding?
  • Your thesis statement. End your introduction with a strong claim summarizing your evaluation of the article. Consider briefly outlining the research paper’s strengths, weaknesses, and significance in your thesis.

Keep your introduction brief. Save the word count for the “meat” of your paper — that is, for the analysis.

2. Summarize the Article

Now, you should write a brief and focused summary of the scientific article. It should be shorter than your analysis section and contain all the relevant details about the research paper.

Here’s what you should include in your summary:

  • The research purpose. Briefly explain why the research was done. Identify the authors’ purpose and research questions or hypotheses.
  • Methods and results. Summarize what happened in the study. State only facts, without the authors’ interpretations of them. Avoid using too many numbers and details; instead, include only the information that will help readers understand what happened.
  • The authors’ conclusions. Outline what conclusions the researchers made from their study. In other words, describe how the authors explained the meaning of their findings.

If you need help summarizing an article, you can use our free summary generator.

3. Write Your Research Analysis

The analysis of the study is the most crucial part of this assignment type. Its key goal is to evaluate the article critically and demonstrate your understanding of it.

We’ve already covered how to analyze a research article in the section above. Here’s a quick recap:

  • Analyze whether the study’s purpose is significant and relevant.
  • Examine whether the chosen methodology allows for answering the research questions.
  • Evaluate how the authors presented the results.
  • Assess whether the authors’ conclusions are grounded in findings and answer the original research questions.

Although you should analyze the article critically, it doesn’t mean you should only criticize it. If the authors did a good job designing and conducting their study, be sure to explain why you think their work is well done. Also, it is a great idea to provide examples from the article to support your analysis.

4. Conclude Your Analysis of Research Paper

A conclusion is your chance to reflect on the study’s relevance and importance. Explain how the analyzed paper can contribute to the existing knowledge or lead to future research. Also, you need to summarize your thoughts on the article as a whole. Avoid making value judgments — saying that the paper is “good” or “bad.” Instead, use more descriptive words and phrases such as “This paper effectively showed…”

Need help writing a compelling conclusion? Try our free essay conclusion generator!

5. Revise and Proofread

Last but not least, you should carefully proofread your paper to find any punctuation, grammar, and spelling mistakes. Start by reading your work out loud to ensure that your sentences fit together and sound cohesive. Also, it can be helpful to ask your professor or peer to read your work and highlight possible weaknesses or typos.

This image shows how to write a research analysis.

📝 Research Paper Analysis Example

We have prepared an analysis of a research paper example to show how everything works in practice.

No Homework Policy: Research Article Analysis Example

This paper aims to analyze the research article entitled “No Assignment: A Boon or a Bane?” by Cordova, Pagtulon-an, and Tan (2019). This study examined the effects of having and not having assignments on weekends on high school students’ performance and transmuted mean scores. This article effectively shows the value of homework for students, but larger studies are needed to support its findings.

Cordova et al. (2019) conducted a descriptive quantitative study using a sample of 115 Grade 11 students of the Central Mindanao University Laboratory High School in the Philippines. The sample was divided into two groups: the first received homework on weekends, while the second didn’t. The researchers compared students’ performance records made by teachers and found that students who received assignments performed better than their counterparts without homework.

The purpose of this study is highly relevant and justified, as the research was conducted in response to the debates about the “No Homework Policy” in the Philippines. Although the descriptive research design used by the authors allows them to answer the research question, the study could have benefited from an experimental design, which would have given the authors firm control over variables. Additionally, the study’s sample size was not large enough for the findings to be generalized to a larger population.

The study results are presented clearly, logically, and comprehensively and correspond to the research objectives. The researchers found that students’ mean grades decreased in the group without homework and increased in the group with homework. Based on these findings, the authors concluded that homework positively affected students’ performance. This conclusion is logical and grounded in data.

This research effectively showed the importance of homework for students’ performance. Yet, since the sample size was relatively small, larger studies are needed to ensure the authors’ conclusions can be generalized to a larger population.

🔎 More Research Analysis Paper Examples

Do you want another research analysis example? Check out the best analysis research paper samples below:

  • Gracious Leadership Principles for Nurses: Article Analysis
  • Effective Mental Health Interventions: Analysis of an Article
  • Nursing Turnover: Article Analysis
  • Nursing Practice Issue: Qualitative Research Article Analysis
  • Quantitative Article Critique in Nursing
  • LIVE Program: Quantitative Article Critique
  • Evidence-Based Practice Beliefs and Implementation: Article Critique
  • “Differential Effectiveness of Placebo Treatments”: Research Paper Analysis
  • “Family-Based Childhood Obesity Prevention Interventions”: Analysis Research Paper Example
  • “Childhood Obesity Risk in Overweight Mothers”: Article Analysis
  • “Fostering Early Breast Cancer Detection” Article Analysis
  • Space and the Atom: Article Analysis
  • “Democracy and Collective Identity in the EU and the USA”: Article Analysis
  • China’s Hegemonic Prospects: Article Review
  • Article Analysis: Fear of Missing Out
  • Codependence, Narcissism, and Childhood Trauma: Analysis of the Article
  • Relationship Between Work Intensity, Workaholism, Burnout, and MSC: Article Review

We hope that our article on research paper analysis has been helpful.


Organizing Your Social Sciences Research Assignments


Definition and Introduction

Case analysis is a problem-based teaching and learning method that involves critically analyzing complex scenarios within an organizational setting for the purpose of placing the student in a “real world” situation and applying reflection and critical thinking skills to contemplate appropriate solutions, decisions, or recommended courses of action. It is considered a more effective teaching technique than in-class role playing or simulation activities. The analytical process is often guided by questions provided by the instructor that ask students to contemplate relationships between the facts and critical incidents described in the case.

Cases generally include both descriptive and statistical elements and rely on students applying abductive reasoning to develop and argue for preferred or best outcomes [i.e., case scenarios rarely have a single correct or perfect answer based on the evidence provided]. Rather than emphasizing theories or concepts, case analysis assignments emphasize building a bridge of relevancy between abstract thinking and practical application and, in so doing, teach the value of both within a specific area of professional practice.

Given this, the purpose of a case analysis paper is to present a structured and logically organized format for analyzing the case situation. It can be assigned to students individually or as a small group assignment, and it may include an in-class presentation component. Case analysis is predominantly taught in economics and business-related courses, but it is also a method of teaching and learning found in other applied social sciences disciplines, such as social work, public relations, education, journalism, and public administration.

Ellet, William. The Case Study Handbook: A Student's Guide . Revised Edition. Boston, MA: Harvard Business School Publishing, 2018; Christoph Rasche and Achim Seisreiner. Guidelines for Business Case Analysis . University of Potsdam; Writing a Case Analysis . Writing Center, Baruch College; Volpe, Guglielmo. "Case Teaching in Economics: History, Practice and Evidence." Cogent Economics and Finance 3 (December 2015). doi:https://doi.org/10.1080/23322039.2015.1120977.

How to Approach Writing a Case Analysis Paper

The organization and structure of a case analysis paper can vary depending on the organizational setting, the situation, and how your professor wants you to approach the assignment. Nevertheless, preparing to write a case analysis paper involves several important steps. As Hawes notes, a case analysis assignment “...is useful in developing the ability to get to the heart of a problem, analyze it thoroughly, and to indicate the appropriate solution as well as how it should be implemented” [p.48]. This statement encapsulates how you should approach preparing to write a case analysis paper.

Before you begin to write your paper, consider the following analytical procedures:

  • Review the case to get an overview of the situation. A case can be only a few pages in length; however, it is most often very lengthy and contains a significant amount of detailed background information and statistics, with multilayered descriptions of the scenario, the roles and behaviors of various stakeholder groups, and situational events. Therefore, a quick reading of the case will help you gain an overall sense of the situation and illuminate the types of issues and problems that you will need to address in your paper. If your professor has provided questions intended to help frame your analysis, use them to guide your initial reading of the case.
  • Read the case thoroughly. After gaining a general overview of the case, carefully read the content again with the purpose of understanding key circumstances, events, and behaviors among stakeholder groups. Look for information or data that appears contradictory, extraneous, or misleading. At this point, you should be taking notes as you read because this will help you develop a general outline of your paper. The aim is to obtain a complete understanding of the situation so that you can begin contemplating tentative answers to any questions your professor has provided or, if none were provided, developing answers to your own questions about the case scenario and its connection to the course readings, lectures, and class discussions.
  • Determine key stakeholder groups, issues, and events and the relationships they all have to each other. As you analyze the content, pay particular attention to identifying individuals, groups, or organizations described in the case and identify evidence of any problems or issues of concern that impact the situation in a negative way. Other things to look for include identifying any assumptions being made by or about each stakeholder, potential biased explanations or actions, explicit demands or ultimatums, and the underlying concerns that motivate these behaviors among stakeholders. The goal at this stage is to develop a comprehensive understanding of the situational and behavioral dynamics of the case and the explicit and implicit consequences of each of these actions.
  • Identify the core problems. The next step in most case analysis assignments is to discern what the core [i.e., most damaging, detrimental, injurious] problems are within the organizational setting and to determine their implications. The purpose at this stage of preparing to write your analysis paper is to distinguish between the symptoms of core problems and the core problems themselves and to decide which of these must be addressed immediately and which problems do not appear critical but may escalate over time. Identify evidence from the case to support your decisions by determining what information or data is essential to addressing the core problems and what information is not relevant or is misleading.
  • Explore alternative solutions. As noted, case analysis scenarios rarely have only one correct answer. Therefore, it is important to keep in mind that the process of analyzing the case and diagnosing core problems, while based on evidence, is a subjective process open to various avenues of interpretation. This means that you must consider alternative solutions or courses of action by critically examining strengths and weaknesses, risk factors, and the differences between short- and long-term solutions. For each possible solution or course of action, consider the consequences its implementation may have and how these recommendations might lead to new problems. Also consider thinking about your recommended solutions or courses of action in relation to issues of fairness, equity, and inclusion.
  • Decide on a final set of recommendations. The last stage in preparing to write a case analysis paper is to assert an opinion or viewpoint about the recommendations needed to help resolve the core problems as you see them and to make a persuasive argument for supporting this point of view. Prepare a clear rationale for your recommendations based on examining each element of your analysis. Anticipate possible obstacles that could derail their implementation. Consider any counter-arguments that could be made concerning the validity of your recommended actions. Finally, describe a set of criteria and measurable indicators that could be applied to evaluating the effectiveness of your implementation plan.

Use these steps as the framework for writing your paper. Remember that the more detailed you are in taking notes as you critically examine each element of the case, the more information you will have to draw from when you begin to write. This will save you time.

NOTE: If the process of preparing to write a case analysis paper is assigned as a student group project, consider having each member of the group analyze a specific element of the case, including drafting answers to the corresponding questions used by your professor to frame the analysis. This will help make the analytical process more efficient and ensure that the distribution of work is equitable. It can also help determine who is responsible for drafting each part of the final case analysis paper and, if applicable, the in-class presentation.

Framework for Case Analysis . College of Management. University of Massachusetts; Hawes, Jon M. "Teaching is Not Telling: The Case Method as a Form of Interactive Learning." Journal for Advancement of Marketing Education 5 (Winter 2004): 47-54; Rasche, Christoph and Achim Seisreiner. Guidelines for Business Case Analysis . University of Potsdam; Writing a Case Study Analysis . University of Arizona Global Campus Writing Center; Van Ness, Raymond K. A Guide to Case Analysis . School of Business. State University of New York, Albany; Writing a Case Analysis . Business School, University of New South Wales.

Structure and Writing Style

A case analysis paper should be detailed, concise, persuasive, clearly written, and professional in tone and in the use of language. As with other forms of college-level academic writing, declarative statements that convey information, provide a fact, or offer an explanation or any recommended courses of action should be based on evidence. If allowed by your professor, any external sources used to support your analysis, such as course readings, should be properly cited under a list of references. The organization and structure of case analysis papers can vary depending on your professor’s preferred format, but their structure generally follows the steps used for analyzing the case.

Introduction

The introduction should provide a succinct but thorough descriptive overview of the main facts, issues, and core problems of the case. The introduction should also include a brief summary of the most relevant details about the situation and organizational setting. This includes defining the theoretical framework or conceptual model on which any questions used to frame your analysis are based.

Following the rules of most college-level research papers, the introduction should then inform the reader how the paper will be organized. This includes describing the major sections of the paper and the order in which they will be presented. Unless you are told to do so by your professor, you do not need to preview your final recommendations in the introduction. Unlike most college-level research papers, the introduction does not include a statement about the significance of your findings because a case analysis assignment does not involve contributing new knowledge about a research problem.

Background Analysis

Background analysis can vary depending on any guiding questions provided by your professor and the underlying concept or theory that the case is based upon. In general, however, this section of your paper should focus on:

  • Providing an overarching analysis of problems identified from the case scenario, including identifying events that stakeholders find challenging or troublesome,
  • Identifying assumptions made by each stakeholder and any apparent biases they may exhibit,
  • Describing any demands or claims made by or forced upon key stakeholders, and
  • Highlighting any issues of concern or complaints expressed by stakeholders in response to those demands or claims.

These aspects of the case are often in the form of behavioral responses expressed by individuals or groups within the organizational setting. However, note that problems in a case situation can also be reflected in data [or the lack thereof] and in the decision-making, operational, cultural, or institutional structure of the organization. Additionally, demands or claims can be either internal or external to the organization [e.g., a case analysis involving a president considering arms sales to Saudi Arabia could include managing internal demands from White House advisors as well as demands from members of Congress].

Throughout this section, present all relevant evidence from the case that supports your analysis. Do not simply claim there is a problem, an assumption, a demand, or a concern; tell the reader what part of the case informed how you identified these background elements.

Identification of Problems

In most case analysis assignments, there are problems, and then there are problems. Each problem can reflect a multitude of underlying symptoms that are detrimental to the interests of the organization. The purpose of identifying problems is to teach students how to differentiate between problems that vary in severity, impact, and relative importance. Given this, problems can be described in three general forms: those that must be addressed immediately, those that should be addressed but whose impact is not severe, and those that do not require immediate attention and can be set aside for the time being.

All of the problems you identify in the case should be described in this section of your paper, with a description, based on evidence, explaining how the problems vary. If the assignment asks you to conduct research to further support your assessment of the problems, include this in your explanation, and remember to cite those sources in a list of references. Use specific evidence from the case and apply appropriate concepts, theories, and models discussed in class or in relevant course readings to highlight and explain the key problems [or problem] that you believe must be solved immediately, and describe the underlying symptoms and why they are so critical.

Alternative Solutions

This section is where you provide specific, realistic, and evidence-based solutions to the problems you have identified and make recommendations about how to alleviate the underlying symptomatic conditions impacting the organizational setting. For each solution, you must explain why it was chosen and provide clear evidence to support your reasoning. This can include, for example, course readings and class discussions as well as research resources, such as books, journal articles, research reports, or government documents. In some cases, your professor may encourage you to include personal, anecdotal experiences as evidence to support why you chose a particular solution or set of solutions. Using anecdotal evidence helps promote reflective thinking about the process of determining what qualifies as a core problem and relevant solution.

Throughout this part of the paper, keep in mind the entire array of problems that must be addressed and describe in detail the solutions that might be implemented to resolve these problems.

Recommended Courses of Action

In some case analysis assignments, your professor may ask you to combine the alternative solutions section with your recommended courses of action. However, it is important to know the difference between the two. A solution refers to the answer to a problem. A course of action refers to a procedure or deliberate sequence of activities adopted to proactively confront a situation, often in the context of accomplishing a goal. In this context, proposed courses of action are based on your analysis of alternative solutions. Your description and justification for pursuing each course of action should represent the overall plan for implementing your recommendations.

For each course of action, you need to explain the rationale for your recommendation in a way that confronts challenges, explains risks, and anticipates any counter-arguments from stakeholders. Do this by considering the strengths and weaknesses of each course of action framed in relation to how the action is expected to resolve the core problems presented, the possible ways the action may affect remaining problems, and how the recommended action will be perceived by each stakeholder.

In addition, you should describe the criteria needed to measure how well the implementation of these actions is working and explain which individuals or groups are responsible for ensuring your recommendations are successful. Finally, always consider the law of unintended consequences: outline difficulties that may arise in implementing each course of action and describe how implementing the proposed courses of action [either individually or collectively] may lead to new problems [both large and small].

Throughout this section, you must consider the costs and benefits of recommending your courses of action in relation to uncertainties or missing information and the negative consequences of success.

Conclusion

The conclusion should be brief and introspective. Unlike a research paper, the conclusion in a case analysis paper does not include a summary of key findings and their significance, a statement about how the study contributed to existing knowledge, or suggestions for future research.

Begin by synthesizing the core problems presented in the case and the relevance of your recommended solutions. This can include an explanation of what you have learned about the case in the context of your answers to the questions provided by your professor. The conclusion is also where you link what you learned from analyzing the case with the course readings or class discussions. This can further demonstrate your understanding of the relationships between the practical case situation and the theoretical and abstract content of assigned readings and other course content.

Problems to Avoid

The literature on case analysis assignments often includes examples of difficulties students have with applying methods of critical analysis and effectively reporting the results of their assessment of the situation. A common reason cited by scholars is that the application of this type of teaching and learning method is limited to applied fields of social and behavioral sciences and, as a result, writing a case analysis paper can be unfamiliar to most students entering college.

After you have drafted your paper, proofread the narrative flow and revise any of these common errors:

  • Unnecessary detail in the background section. The background section should highlight the essential elements of the case based on your analysis. Focus on summarizing the facts and highlighting the key factors that become relevant in the other sections of the paper by eliminating any unnecessary information.
  • Analysis relies too much on opinion. Your analysis is interpretive, but the narrative must be connected clearly to evidence from the case and any models and theories discussed in class or in course readings. Any positions or arguments you make should be supported by evidence.
  • Analysis does not focus on the most important elements of the case. Your paper should provide a thorough overview of the case. However, the analysis should focus on providing evidence about what you identify are the key events, stakeholders, issues, and problems. Emphasize what you identify as the most critical aspects of the case to be developed throughout your analysis. Be thorough but succinct.
  • Writing is too descriptive. A paper with too much descriptive information detracts from your analysis of the complexities of the case situation. Questions about what happened, where, when, and by whom should only be included as essential information leading to your examination of questions related to why, how, and for what purpose.
  • Inadequate definition of a core problem and associated symptoms. A common error found in case analysis papers is recommending a solution or course of action without adequately defining or demonstrating that you understand the problem. Make sure you have clearly described the problem and its impact and scope within the organizational setting. Ensure that you have adequately described the root causes when describing the symptoms of the problem.
  • Recommendations lack specificity. Identify any use of vague statements and indeterminate terminology, such as "a particular experience" or "a large increase to the budget." These statements cannot be measured and, as a result, there is no way to evaluate their successful implementation. Provide specific data and use direct language in describing recommended actions.
  • Unrealistic, exaggerated, or unattainable recommendations. Review your recommendations to ensure that they are based on the situational facts of the case. Your recommended solutions and courses of action must be based on realistic assumptions and fit within the constraints of the situation. Also note that the case scenario has already happened; therefore, any speculation or arguments about what could have occurred if the circumstances were different should be revised or eliminated.

Bee, Lian Song et al. "Business Students' Perspectives on Case Method Coaching for Problem-Based Learning: Impacts on Student Engagement and Learning Performance in Higher Education." Education & Training 64 (2022): 416-432; The Case Analysis. Fred Meijer Center for Writing and Michigan Authors. Grand Valley State University; Georgallis, Panikos and Kayleigh Bruijn. "Sustainability Teaching Using Case-Based Debates." Journal of International Education in Business 15 (2022): 147-163; Hawes, Jon M. "Teaching is Not Telling: The Case Method as a Form of Interactive Learning." Journal for Advancement of Marketing Education 5 (Winter 2004): 47-54; Dean, Kathy Lund and Charles J. Fornaciari. "How to Create and Use Experiential Case-Based Exercises in a Management Classroom." Journal of Management Education 26 (October 2002): 586-603; Klebba, Joanne M. and Janet G. Hamilton. "Structured Case Analysis: Developing Critical Thinking Skills in a Marketing Case Course." Journal of Marketing Education 29 (August 2007): 132-137, 139; Klein, Norman. "The Case Discussion Method Revisited: Some Questions about Student Skills." Exchange: The Organizational Behavior Teaching Journal 6 (November 1981): 30-32; Mukherjee, Arup. "Effective Use of In-Class Mini Case Analysis for Discovery Learning in an Undergraduate MIS Course." The Journal of Computer Information Systems 40 (Spring 2000): 15-23; Pessoa, Silvia et al. "Scaffolding the Case Analysis in an Organizational Behavior Course: Making Analytical Language Explicit." Journal of Management Education 46 (2022): 226-251; Ramsey, V. J. and L. D. Dodge. "Case Analysis: A Structured Approach." Exchange: The Organizational Behavior Teaching Journal 6 (November 1981): 27-29; Schweitzer, Karen. "How to Write and Format a Business Case Study." ThoughtCo. https://www.thoughtco.com/how-to-write-and-format-a-business-case-study-466324 (accessed December 5, 2022); Reddy, C. D. "Teaching Research Methodology: Everything's a Case." Electronic Journal of Business Research Methods 18 (December 2020): 178-188; Volpe, Guglielmo. "Case Teaching in Economics: History, Practice and Evidence." Cogent Economics and Finance 3 (December 2015). doi:https://doi.org/10.1080/23322039.2015.1120977.

Writing Tip

Case Study and Case Analysis Are Not the Same!

Confusion often exists between what it means to write a paper that uses a case study research design and writing a paper that analyzes a case; they are two different types of approaches to learning in the social and behavioral sciences. Professors as well as educational researchers contribute to this confusion because they often use the term "case study" when describing the subject of analysis for a case analysis paper. But you are not studying a case for the purpose of generating a comprehensive, multi-faceted understanding of a research problem. Rather, you are critically analyzing a specific scenario to argue logically for recommended solutions and courses of action that lead to optimal outcomes applicable to professional practice.

To avoid any confusion, here are twelve characteristics that delineate the differences between writing a paper using the case study research method and writing a case analysis paper:

  • Case study is a method of in-depth research and rigorous inquiry ; case analysis is a reliable method of teaching and learning . A case study is a modality of research that investigates a phenomenon for the purpose of creating new knowledge, solving a problem, or testing a hypothesis using empirical evidence derived from the case being studied. Often, the results are used to generalize about a larger population or within a wider context. The writing adheres to the traditional standards of a scholarly research study. A case analysis is a pedagogical tool used to teach students how to reflect and think critically about a practical, real-life problem in an organizational setting.
  • The researcher is responsible for identifying the case to study; a case analysis is assigned by your professor . As the researcher, you choose the case study to investigate in support of obtaining new knowledge and understanding about the research problem. The case in a case analysis assignment is almost always provided, and sometimes written, by your professor and either given to every student in class to analyze individually or to a small group of students, or students select a case to analyze from a predetermined list.
  • A case study is indeterminate and boundless; a case analysis is predetermined and confined . A case study can be almost anything [see item 9 below] as long as it relates directly to examining the research problem. This relationship is the only limit to what a researcher can choose as the subject of their case study. The content of a case analysis is determined by your professor and its parameters are well-defined and limited to elucidating insights of practical value applied to practice.
  • Case study is fact-based and describes actual events or situations; case analysis can be entirely fictional or adapted from an actual situation . The entire content of a case study must be grounded in reality to be a valid subject of investigation in an empirical research study. A case analysis only needs to set the stage for critically examining a situation in practice and, therefore, can be entirely fictional or adapted, all or in-part, from an actual situation.
  • Research using a case study method must adhere to principles of intellectual honesty and academic integrity; a case analysis scenario can include misleading or false information . A case study paper must report research objectively and factually to ensure that any findings are understood to be logically correct and trustworthy. A case analysis scenario may include misleading or false information intended to deliberately distract from the central issues of the case. The purpose is to teach students how to sort through conflicting or useless information in order to come up with the preferred solution. Any use of misleading or false information in academic research is considered unethical.
  • Case study is linked to a research problem; case analysis is linked to a practical situation or scenario . In the social sciences, the subject of an investigation is most often framed as a problem that must be researched in order to generate new knowledge leading to a solution. Case analysis narratives are grounded in real life scenarios for the purpose of examining the realities of decision-making behavior and processes within organizational settings. A case analysis assignments include a problem or set of problems to be analyzed. However, the goal is centered around the act of identifying and evaluating courses of action leading to best possible outcomes.
  • The purpose of a case study is to create new knowledge through research; the purpose of a case analysis is to teach new understanding . Case studies are a choice of methodological design intended to create new knowledge about resolving a research problem. A case analysis is a mode of teaching and learning intended to create new understanding and an awareness of uncertainty applied to practice through acts of critical thinking and reflection.
  • A case study seeks to identify the best possible solution to a research problem; case analysis can have an indeterminate set of solutions or outcomes . Your role in studying a case is to discover the most logical, evidence-based ways to address a research problem. A case analysis assignment rarely has a single correct answer because one of the goals is to force students to confront the real life dynamics of uncertainty, ambiguity, and missing or conflicting information within professional practice. Under these conditions, a perfect outcome or solution almost never exists.
  • Case study is unbounded and relies on gathering external information; case analysis is a self-contained subject of analysis . The scope of a case study chosen as a method of research is bounded. However, the researcher is free to gather whatever information and data is necessary to investigate its relevance to understanding the research problem. For a case analysis assignment, your professor will often ask you to examine solutions or recommended courses of action based solely on facts and information from the case.
  • Case study can be a person, place, object, issue, event, condition, or phenomenon; a case analysis is a carefully constructed synopsis of events, situations, and behaviors . The research problem dictates the type of case being studied and, therefore, the design can encompass almost anything tangible as long as it fulfills the objective of generating new knowledge and understanding. A case analysis is in the form of a narrative containing descriptions of facts, situations, processes, rules, and behaviors within a particular setting and under a specific set of circumstances.
  • Case study can represent an open-ended subject of inquiry; a case analysis is a narrative about something that has happened in the past . A case study is not restricted by time and can encompass an event or issue with no temporal limit or end. For example, the current war in Ukraine can be used as a case study of how medical personnel help civilians during a large military conflict, even though circumstances around this event are still evolving. A case analysis can be used to elicit critical thinking about current or future situations in practice, but the case itself is a narrative about something finite that has already taken place.
  • Multiple case studies can be used in a research study; case analysis involves examining a single scenario . Case study research can use two or more cases to examine a problem, often for the purpose of conducting a comparative investigation intended to discover hidden relationships, document emerging trends, or determine variations among different examples. A case analysis assignment typically describes a stand-alone, self-contained situation and any comparisons among cases are conducted during in-class discussions and/or student presentations.

The Case Analysis . Fred Meijer Center for Writing and Michigan Authors. Grand Valley State University; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Ramsey, V. J. and L. D. Dodge. "Case Analysis: A Structured Approach." Exchange: The Organizational Behavior Teaching Journal 6 (November 1981): 27-29; Yin, Robert K. Case Study Research and Applications: Design and Methods . 6th edition. Thousand Oaks, CA: Sage, 2017; Crowe, Sarah et al. “The Case Study Approach.” BMC Medical Research Methodology 11 (2011):  doi: 10.1186/1471-2288-11-100; Yin, Robert K. Case Study Research: Design and Methods . 4th edition. Thousand Oaks, CA: Sage Publishing; 1994.

  • Last Updated: May 7, 2024 9:45 AM
  • URL: https://libguides.usc.edu/writingguide/assignments


Research Results Section – Writing Guide and Examples


Research Results

Research results refer to the findings and conclusions derived from a systematic investigation or study conducted to answer a specific question or hypothesis. These results are typically presented in a written report or paper and can include various forms of data such as numerical data, qualitative data, statistics, charts, graphs, and visual aids.

Results Section in Research

The results section of the research paper presents the findings of the study. It is the part of the paper where the researcher reports the data collected during the study and analyzes it to draw conclusions.

In the results section, the researcher should describe the data that was collected, the statistical analysis performed, and the findings of the study. It is important to be objective and not interpret the data in this section. Instead, the researcher should report the data as accurately and objectively as possible.

Structure of Research Results Section

The structure of the research results section can vary depending on the type of research conducted, but in general, it should contain the following components:

  • Introduction: The introduction should provide an overview of the study, its aims, and its research questions. It should also briefly explain the methodology used to conduct the study.
  • Data presentation: This section presents the data collected during the study. It may include tables, graphs, or other visual aids to help readers better understand the data. The data presented should be organized in a logical and coherent way, with headings and subheadings used to help guide the reader.
  • Data analysis: In this section, the data presented in the previous section are analyzed and interpreted. The statistical tests used to analyze the data should be clearly explained, and the results of the tests should be presented in a way that is easy to understand.
  • Discussion of results: This section should provide an interpretation of the results of the study, including a discussion of any unexpected findings. The discussion should also address the study’s research questions and explain how the results contribute to the field of study.
  • Limitations: This section should acknowledge any limitations of the study, such as sample size, data collection methods, or other factors that may have influenced the results.
  • Conclusions: The conclusions should summarize the main findings of the study and provide a final interpretation of the results. The conclusions should also address the study’s research questions and explain how the results contribute to the field of study.
  • Recommendations: This section may provide recommendations for future research based on the study’s findings. It may also suggest practical applications for the study’s results in real-world settings.

Outline of Research Results Section

The following is an outline of the key components typically included in the Results section:

I. Introduction

  • A brief overview of the research objectives and hypotheses
  • A statement of the research question

II. Descriptive statistics

  • Summary statistics (e.g., mean, standard deviation) for each variable analyzed
  • Frequencies and percentages for categorical variables

III. Inferential statistics

  • Results of statistical analyses, including tests of hypotheses
  • Tables or figures to display statistical results

IV. Effect sizes and confidence intervals

  • Effect sizes (e.g., Cohen’s d, odds ratio) to quantify the strength of the relationship between variables
  • Confidence intervals to estimate the range of plausible values for the effect size

V. Subgroup analyses

  • Results of analyses that examined differences between subgroups (e.g., by gender, age, treatment group)

VI. Limitations and assumptions

  • Discussion of any limitations of the study and potential sources of bias
  • Assumptions made in the statistical analyses

VII. Conclusions

  • A summary of the key findings and their implications
  • A statement of whether the hypotheses were supported or not
  • Suggestions for future research
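The descriptive and inferential quantities named in the outline above (group means and standard deviations, an effect size such as Cohen's d, and a confidence interval) can be computed with a short, self-contained sketch. The sample data and group labels below are hypothetical, purely for illustration:

```python
import math
from statistics import mean, stdev

# Hypothetical scores for two groups (illustrative data only)
group_a = [3.6, 3.2, 3.8, 3.4, 3.5, 3.7, 3.3, 3.5]
group_b = [2.9, 3.1, 2.7, 3.0, 2.8, 3.2, 2.6, 2.9]

# II. Descriptive statistics: mean and (sample) standard deviation per group
m_a, s_a = mean(group_a), stdev(group_a)
m_b, s_b = mean(group_b), stdev(group_b)

# IV. Effect size: Cohen's d using a pooled standard deviation
n_a, n_b = len(group_a), len(group_b)
pooled_sd = math.sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2))
cohens_d = (m_a - m_b) / pooled_sd

# IV. Approximate 95% confidence interval for the mean difference (z of 1.96)
diff = m_a - m_b
se_diff = math.sqrt(s_a**2 / n_a + s_b**2 / n_b)
ci_low, ci_high = diff - 1.96 * se_diff, diff + 1.96 * se_diff

print(f"Group A: M={m_a:.2f}, SD={s_a:.2f}; Group B: M={m_b:.2f}, SD={s_b:.2f}")
print(f"Cohen's d = {cohens_d:.2f}; 95% CI for difference = [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting these values in the "M=…, SD=…" form, with the effect size and interval alongside the test statistic, follows the structure of sections II through IV of the outline.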

Example of Research Results Section

An example of a research results section could be:

I. Introduction

  • This study sought to examine the relationship between sleep quality and academic performance in college students.
  • Hypothesis: College students who report better sleep quality will have higher GPAs than those who report poor sleep quality.
  • Methodology: Participants completed a survey about their sleep habits and academic performance.

II. Participants

  • Participants were college students (N=200) from a mid-sized public university in the United States.
  • The sample was evenly split by gender (50% female, 50% male) and predominantly white (85%).
  • Participants were recruited through flyers and online advertisements.

III. Results

  • Participants who reported better sleep quality had significantly higher GPAs (M=3.5, SD=0.5) than those who reported poor sleep quality (M=2.9, SD=0.6).
  • See Table 1 for a summary of the results.
  • Participants who reported consistent sleep schedules had higher GPAs than those with irregular sleep schedules.

IV. Discussion

  • The results support the hypothesis that better sleep quality is associated with higher academic performance in college students.
  • These findings have implications for college students, as prioritizing sleep could lead to better academic outcomes.
  • Limitations of the study include self-reported data and the lack of control for other variables that could impact academic performance.

V. Conclusion

  • College students who prioritize sleep may see a positive impact on their academic performance.
  • These findings highlight the importance of sleep in academic success.
  • Future research could explore interventions to improve sleep quality in college students.
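The group comparison reported above (M=3.5, SD=0.5 versus M=2.9, SD=0.6) can be checked from the summary statistics alone. The sketch below computes Welch's t statistic; the split of the N=200 sample into two groups of 100 is an assumption, since the example does not state the group sizes:

```python
import math

# Summary statistics from the example; n=100 per group is an assumption
m1, sd1, n1 = 3.5, 0.5, 100  # better sleep quality group
m2, sd2, n2 = 2.9, 0.6, 100  # poor sleep quality group

# Welch's t statistic for two independent samples with unequal variances
se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
t = (m1 - m2) / se
print(f"Welch's t = {t:.2f}")
```

A t statistic of this magnitude would be significant at conventional thresholds, consistent with the "significantly higher GPAs" claim in section III.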

Example of Research Results in a Research Paper:

Our study aimed to compare the performance of three different machine learning algorithms (Random Forest, Support Vector Machine, and Neural Network) in predicting customer churn in a telecommunications company. We collected a dataset of 10,000 customer records, with 20 predictor variables and a binary churn outcome variable.

Our analysis revealed that all three algorithms performed well in predicting customer churn, with an overall accuracy of 85%. However, the Random Forest algorithm showed the highest accuracy (88%), followed by the Support Vector Machine (86%) and the Neural Network (84%).

Furthermore, we found that the most important predictor variables for customer churn were monthly charges, contract type, and tenure. Random Forest identified monthly charges as the most important variable, while Support Vector Machine and Neural Network identified contract type as the most important.

Overall, our results suggest that machine learning algorithms can be effective in predicting customer churn in a telecommunications company, and that Random Forest is the most accurate algorithm for this task.
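A comparison like the one described above can be sketched with scikit-learn. The original dataset is not available, so the code below generates a synthetic stand-in with the same shape (10,000 records, 20 predictors, binary outcome); accuracies on synthetic data will not match the reported figures:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the churn dataset (10,000 rows, 20 predictors)
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The three model families compared in the example
models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Support Vector Machine": SVC(random_state=0),
    "Neural Network": MLPClassifier(max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

For the "most important predictor" comparison, variable importances can be read from `feature_importances_` on the fitted random forest.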

Example 3:

Title: The Impact of Social Media on Body Image and Self-Esteem

Abstract: This study aimed to investigate the relationship between social media use, body image, and self-esteem among young adults. A total of 200 participants were recruited from a university and completed self-report measures of social media use, body image satisfaction, and self-esteem.

Results: The results showed that social media use was significantly associated with body image dissatisfaction and lower self-esteem. Specifically, participants who reported spending more time on social media platforms had lower levels of body image satisfaction and self-esteem compared to those who reported less social media use. Moreover, the study found that comparing oneself to others on social media was a significant predictor of body image dissatisfaction and lower self-esteem.

Conclusion: These results suggest that social media use can have negative effects on body image satisfaction and self-esteem among young adults. It is important for individuals to be mindful of their social media use and to recognize the potential negative impact it can have on their mental health. Furthermore, interventions aimed at promoting positive body image and self-esteem should take into account the role of social media in shaping these attitudes and behaviors.

Importance of Research Results

Research results are important for several reasons, including:

  • Advancing knowledge: Research results can contribute to the advancement of knowledge in a particular field, whether it be in science, technology, medicine, social sciences, or humanities.
  • Developing theories: Research results can help to develop or modify existing theories and create new ones.
  • Improving practices: Research results can inform and improve practices in various fields, such as education, healthcare, business, and public policy.
  • Identifying problems and solutions: Research results can identify problems and provide solutions to complex issues in society, including issues related to health, environment, social justice, and economics.
  • Validating claims: Research results can validate or refute claims made by individuals or groups in society, such as politicians, corporations, or activists.
  • Providing evidence: Research results can provide evidence to support decision-making, policy-making, and resource allocation in various fields.

How to Write Results in A Research Paper

Here are some general guidelines on how to write results in a research paper:

  • Organize the results section: Start by organizing the results section in a logical and coherent manner. Divide the section into subsections if necessary, based on the research questions or hypotheses.
  • Present the findings: Present the findings in a clear and concise manner. Use tables, graphs, and figures to illustrate the data and make the presentation more engaging.
  • Describe the data: Describe the data in detail, including the sample size, response rate, and any missing data. Provide relevant descriptive statistics such as means, standard deviations, and ranges.
  • Interpret the findings: Interpret the findings in light of the research questions or hypotheses. Discuss the implications of the findings and the extent to which they support or contradict existing theories or previous research.
  • Discuss the limitations: Discuss the limitations of the study, including any potential sources of bias or confounding factors that may have affected the results.
  • Compare the results: Compare the results with those of previous studies or theoretical predictions. Discuss any similarities, differences, or inconsistencies.
  • Avoid redundancy: Avoid repeating information that has already been presented in the introduction or methods sections. Instead, focus on presenting new and relevant information.
  • Be objective: Be objective in presenting the results, avoiding any personal biases or interpretations.

When to Write Research Results

Here are guidelines on when to write research results:

  • After conducting research on the chosen topic and obtaining relevant data, organize the findings in a structured format that accurately represents the information gathered.
  • Once the data has been analyzed and interpreted, and conclusions have been drawn, begin the writing process.
  • Before starting to write, ensure that the research results adhere to the guidelines and requirements of the intended audience, such as a scientific journal or academic conference.
  • Begin by writing an abstract that briefly summarizes the research question, methodology, findings, and conclusions.
  • Follow the abstract with an introduction that provides context for the research, explains its significance, and outlines the research question and objectives.
  • The next section should be a literature review that provides an overview of existing research on the topic and highlights the gaps in knowledge that the current research seeks to address.
  • The methodology section should provide a detailed explanation of the research design, including the sample size, data collection methods, and analytical techniques used.
  • Present the research results in a clear and concise manner, using graphs, tables, and figures to illustrate the findings.
  • Discuss the implications of the research results, including how they contribute to the existing body of knowledge on the topic and what further research is needed.
  • Conclude the paper by summarizing the main findings, reiterating the significance of the research, and offering suggestions for future research.

Purpose of Research Results

The purposes of Research Results are as follows:

  • Informing policy and practice: Research results can provide evidence-based information to inform policy decisions, such as in the fields of healthcare, education, and environmental regulation. They can also inform best practices in fields such as business, engineering, and social work.
  • Addressing societal problems: Research results can be used to help address societal problems, such as reducing poverty, improving public health, and promoting social justice.
  • Generating economic benefits: Research results can lead to the development of new products, services, and technologies that can create economic value and improve quality of life.
  • Supporting academic and professional development: Research results can be used to support academic and professional development by providing opportunities for students, researchers, and practitioners to learn about new findings and methodologies in their field.
  • Enhancing public understanding: Research results can help to educate the public about important issues and promote scientific literacy, leading to more informed decision-making and better public policy.
  • Evaluating interventions: Research results can be used to evaluate the effectiveness of interventions, such as treatments, educational programs, and social policies. This can help to identify areas where improvements are needed and guide future interventions.
  • Contributing to scientific progress: Research results can contribute to the advancement of science by providing new insights and discoveries that can lead to new theories, methods, and techniques.
  • Informing decision-making: Research results can provide decision-makers with the information they need to make informed decisions. This can include decision-making at the individual, organizational, or governmental levels.
  • Fostering collaboration: Research results can facilitate collaboration between researchers and practitioners, leading to new partnerships, interdisciplinary approaches, and innovative solutions to complex problems.

Advantages of Research Results

Some Advantages of Research Results are as follows:

  • Improved decision-making: Research results can help inform decision-making in various fields, including medicine, business, and government. For example, research on the effectiveness of different treatments for a particular disease can help doctors make informed decisions about the best course of treatment for their patients.
  • Innovation : Research results can lead to the development of new technologies, products, and services. For example, research on renewable energy sources can lead to the development of new and more efficient ways to harness renewable energy.
  • Economic benefits: Research results can stimulate economic growth by providing new opportunities for businesses and entrepreneurs. For example, research on new materials or manufacturing techniques can lead to the development of new products and processes that can create new jobs and boost economic activity.
  • Improved quality of life: Research results can contribute to improving the quality of life for individuals and society as a whole. For example, research on the causes of a particular disease can lead to the development of new treatments and cures, improving the health and well-being of millions of people.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • The Scientist University

How to Write a Good Results Section

Effective results sections need to be much more than a list of data points given without context.


Nathan Ni holds a PhD from Queens University. He is a science editor for The Scientist’s Creative Services Team who strives to better understand and communicate the relationships between health and disease.



The results section details the findings of a given study. The primary difference between the results section and the discussion section is that the results section does not delve into hypothetical interpretation. However, people are often taught in school that a results section should only present data and include nothing else. This goes too far—a results section that is only a list of numbers and facts is confusing, boring, and difficult to read. When presenting their results, authors need to exercise discretion and nuance. Most importantly, they need to provide context for their numbers and comparative reference points for their data.

Why Did the Authors Want This Data?

Before jumping right into the dataset, authors should explain the rationale behind why they chose to generate the dataset. While there is no need to overly rehash the introduction, the reader still benefits from a brief primer on what the authors sought to examine through this particular experimentation and the resulting data.

Here are some examples of what this means in practice. Look at the following passage:

“In order to test the plausibility of this model, we implement a Brownian dynamics simulation based on prior modeling of meiotic chromosome movement and pairing.” 1  

The authors use the first clause—“In order to test the plausibility of this model”—to explain why the second clause—“we implement a Brownian dynamics simulation”—took place. 

Similarly, consider another example : 

“MRGPRX4 engages intracellular G q to induce calcium flux. Using calcium imaging as a readout, we screened 3808 drugs for activity against human embryonic kidney (HEK) 293 cells expressing MRGPRX4 (the Ser83, rs2445179 variant).” 2  

Here, the first sentence clearly sets up why the authors employed calcium imaging to study drug activity against HEK293 cells.

Why Did the Authors Choose These Parameters?

In addition to explaining why they chose to perform a certain experiment, it is also important for scientists to tell their audience why they selected specific parameters or variables in their experiments. Too often, authors will highlight or emphasize numbers in a sentence without contextualizing them. Based on the syntax, the reader recognizes that these numbers are significant, but does not immediately understand why. 

Biologist Gary T. ZeRuth from Murray State University, in a recent article in Islets , provides an example of how to contextualize experimental parameters and results:

“Given that INS1 cells are normally maintained in 11.1 mM glucose, expression of  Ins2, MafA , and  Glis3  was measured in INS1 cells cultured in 3 mM glucose (low glucose), 11.1 mM glucose, and 25 mM glucose (high glucose). Graded levels of expression were observed with expression at 11.1 mM glucose being more similar to low glucose conditions than chronically elevated glucose for all three genes.” 3  

Here, ZeRuth and his colleagues annotate the three parameters—3mM, 11.1mM, and 25mM glucose—as low, normal, and high concentrations. The authors then present their results within this framework: Gene expression at 11.1mM was more similar to that found at low glucose concentrations than high ones. In this way, they show the effect of high glucose versus low glucose and examine the validity of 11.1mM as a baseline. 

The results section should provide context for data, bringing all of the datasets together to form a cohesive body. Authors should provide the reasons that drove them to generate the dataset. Authors should explain why they looked at specific parameters or variables in their experiments. Authors have to use a level of detail that provides sufficient evidence but is not overwhelming. The audience should be able to understand the core evidence without referring to the figures.

What Is the Right Level of Detail for the Data? 

It is important that data is not just dumped en masse onto the reader, but presented in a curated and meaningful way. To do this, researchers have to decide on an appropriate level of detail that provides sufficient evidence without being overwhelming. In the prior example, ZeRuth and his colleagues did not report gene expression as an empirical value, but rather as a relative one. In this circumstance, it was more important to emphasize gene expression changes across the different glucose conditions than to say that gene A expression was 2.3 in high glucose and 1.2 in low glucose. 3  

One good way of determining the right level of detail is to keep the figures in mind when writing the results section. Many times, authors will use the text only as a vehicle to introduce the figures. However, the proper way is actually the opposite, where the figures provide additional depth and detail for the text. It is important that the text is able to stand alone from a narrative and argumentation perspective, while the figures present information that does not translate well to text format, such as high volumes of numbers, multi-parameter comparisons, and more complex statistical analyses.

As an example, consider the following passage:

“Several phosphomonoester compounds including fospropofol {EC50: 3.78 nM [95% confidence interval (CI): 1.82 to 6.78]}, fosphenytoin [an antiepileptic drug, EC50: 77.01 nM (95% CI: 52.63 to 115.10)], and dexamethasone phosphate [steroid-derived phosphate, EC50: 14.68 nM (95% CI: 5.44 to 22.10)] showed high agonist potencies for MRGPRX4 (Fig. 1, C and D, and table S1).” 2

The core statement in this sentence is: “Several phosphomonoester compounds including fospropofol, fosphenytoin, and dexamethasone phosphate showed high agonist potencies for MRGPRX4.” The specific EC50 values are provided as immediate direct evidence for this claim, as well as for reference, while the figure is referenced only at the end, almost as an “if more information is needed, look here” prompt.

Applying Principles Throughout the Whole Results Section

These considerations should be applied on both a micro level, when presenting the results of each discrete experiment, and on a macro level, across the results section as a whole. Each paragraph should offer a transition to the next. Each presented piece of data should likewise offer some insights as to why the researchers sought the next piece of data. Finally, all of the data together must form a cohesive body that serves as evidence for the interpretations that readers will find in the discussion.

In their work, ZeRuth and his colleagues conclude most paragraphs in the results section with a summary statement that begins with “these data suggest/indicate”. 3 Readers who collate these statements together are rewarded with a de facto abstract for the results section, giving them an accessible and digestible primer on what the authors believe their data shows. 


  • Marshall WF, Fung JC. Modeling homologous chromosome recognition via nonspecific interactions . PNAS . 2024;121(20):e2317373121.
  • Chien DC, et al. MRGPRX4 mediates phospho-drug-associated pruritus in a humanized mouse model . Sci Transl Med . 2024;16(746):eadk8198. 
  • Grieve LM, et al. Downregulation of Glis3 in INS1 cells exposed to chronically elevated glucose contributes to glucotoxicity-associated β cell dysfunction . Islets . 2024;16(1):2344622.

Case Study Research Method in Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Case studies are in-depth investigations of a person, group, event, or community. Typically, data is gathered from various sources using several methods (e.g., observations & interviews).

The case study research method originated in clinical medicine (the case history, i.e., the patient’s personal history). In psychology, case studies are often confined to the study of a particular individual.

The information is mainly biographical and relates to events in the individual’s past (i.e., retrospective), as well as to significant events that are currently occurring in his or her everyday life.

The case study is not itself a research method; rather, researchers select methods of data collection and analysis that will generate material suitable for case studies.

Freud (1909a, 1909b) conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

This makes it clear that the case study is a method that should only be used by a psychologist, therapist, or psychiatrist, i.e., someone with a professional qualification.

There is an ethical issue of competence. Only someone qualified to diagnose and treat a person can conduct a formal case study relating to atypical (i.e., abnormal) behavior or atypical development.


 Famous Case Studies

  • Anna O – One of the most famous case studies, documenting psychoanalyst Josef Breuer’s treatment of “Anna O” (real name Bertha Pappenheim) for hysteria in the late 1800s using early psychoanalytic theory.
  • Little Hans – A child psychoanalysis case study published by Sigmund Freud in 1909 analyzing his five-year-old patient Herbert Graf’s house phobia as related to the Oedipus complex.
  • Bruce/Brenda – Gender identity case of the boy (Bruce) whose botched circumcision led psychologist John Money to advise gender reassignment and raise him as a girl (Brenda) in the 1960s.
  • Genie Wiley – Linguistics/psychological development case of the victim of extreme isolation abuse who was studied in 1970s California for effects of early language deprivation on acquiring speech later in life.
  • Phineas Gage – One of the most famous neuropsychology case studies analyzes personality changes in railroad worker Phineas Gage after an 1848 brain injury involving a tamping iron piercing his skull.

Clinical Case Studies

  • Studying the effectiveness of psychotherapy approaches with an individual patient
  • Assessing and treating mental illnesses like depression, anxiety disorders, PTSD
  • Neuropsychological cases investigating brain injuries or disorders

Child Psychology Case Studies

  • Studying psychological development from birth through adolescence
  • Cases of learning disabilities, autism spectrum disorders, ADHD
  • Effects of trauma, abuse, deprivation on development

Types of Case Studies

  • Explanatory case studies: Used to explore causation in order to find underlying principles. Helpful for doing qualitative analysis to explain presumed causal links.
  • Exploratory case studies: Used to explore situations where an intervention being evaluated has no clear set of outcomes. Helpful for defining questions and hypotheses for future research.
  • Descriptive case studies: Describe an intervention or phenomenon and the real-life context in which it occurred. Helpful for illustrating certain topics within an evaluation.
  • Multiple-case studies: Used to explore differences between cases and replicate findings across cases. Helpful for comparing and contrasting specific cases.
  • Intrinsic: Used to gain a better understanding of a particular case. Helpful for capturing the complexity of a single case.
  • Collective: Used to explore a general phenomenon using multiple case studies. Helpful for jointly studying a group of cases in order to inquire into the phenomenon.

Where Do You Find Data for a Case Study?

There are several places to find data for a case study. The key is to gather data from multiple sources to get a complete picture of the case and corroborate facts or findings through triangulation of evidence. Most of this information is likely qualitative (i.e., verbal description rather than measurement), but the psychologist might also collect numerical data.

1. Primary sources

  • Interviews – Interviewing key people related to the case to get their perspectives and insights. The interview is an extremely effective procedure for obtaining information about an individual, and it may be used to collect comments from the person’s friends, parents, employer, workmates, and others who have a good knowledge of the person, as well as to obtain facts from the person him or herself.
  • Observations – Observing behaviors, interactions, processes, etc., related to the case as they unfold in real-time.
  • Documents & Records – Reviewing private documents, diaries, public records, correspondence, meeting minutes, etc., relevant to the case.

2. Secondary sources

  • News/Media – News coverage of events related to the case study.
  • Academic articles – Journal articles, dissertations etc. that discuss the case.
  • Government reports – Official data and records related to the case context.
  • Books/films – Books, documentaries or films discussing the case.

3. Archival records

Searching historical archives, museum collections and databases to find relevant documents, visual/audio records related to the case history and context.

Public archives like newspapers, organizational records, photographic collections could all include potentially relevant pieces of information to shed light on attitudes, cultural perspectives, common practices and historical contexts related to psychology.

4. Organizational records

Organizational records offer the advantage of often having large datasets collected over time that can reveal or confirm psychological insights.

Of course, privacy and ethical concerns regarding confidential data must be navigated carefully.

However, with proper protocols, organizational records can provide invaluable context and empirical depth to qualitative case studies exploring the intersection of psychology and organizations.

  • Organizational/industrial psychology research: Organizational records like employee surveys, turnover/retention data, policies, and incident reports may provide insight into topics like job satisfaction, workplace culture and dynamics, leadership issues, and employee behaviors.
  • Clinical psychology: Therapists and hospitals may grant access to anonymized medical records to study aspects like assessments, diagnoses, and treatment plans. This could shed light on clinical practices.
  • School psychology: Studies could utilize anonymized student records like test scores, grades, disciplinary issues, and counseling referrals to study child development, learning barriers, effectiveness of support programs, and more.

How do I Write a Case Study in Psychology?

Follow specified case study guidelines provided by a journal or your psychology tutor. General components of clinical case studies include: background, symptoms, assessments, diagnosis, treatment, and outcomes. Interpreting the information means the researcher decides what to include or leave out. A good case study should always clarify which information is the factual description and which is an inference or the researcher’s opinion.

1. Introduction

  • Provide background on the case context and why it is of interest, presenting background information like demographics, relevant history, and presenting problem.
  • Compare briefly to similar published cases if applicable. Clearly state the focus/importance of the case.

2. Case Presentation

  • Describe the presenting problem in detail, including symptoms, duration, and impact on daily life.
  • Include client demographics like age and gender, information about social relationships, and mental health history.
  • Describe all physical, emotional, and/or sensory symptoms reported by the client.
  • Use patient quotes to describe the initial complaint verbatim. Follow with full-sentence summaries of relevant history details gathered, including key components that led to a working diagnosis.
  • Summarize clinical exam results, namely orthopedic/neurological tests, imaging, lab tests, etc. Note actual results rather than subjective conclusions. Provide images if clearly reproducible/anonymized.
  • Clearly state the working diagnosis or clinical impression before transitioning to management.

3. Management and Outcome

  • Indicate the total duration of care and number of treatments given over what timeframe. Use specific names/descriptions for any therapies/interventions applied.
  • Present the results of the intervention, including any quantitative or qualitative data collected.
  • For outcomes, utilize visual analog scales for pain, medication usage logs, etc., if possible. Include patient self-reports of improvement/worsening of symptoms. Note the reason for discharge/end of care.

4. Discussion

  • Analyze the case, exploring contributing factors, limitations of the study, and connections to existing research.
  • Analyze the effectiveness of the intervention, considering factors like participant adherence, limitations of the study, and potential alternative explanations for the results.
  • Identify any questions raised in the case analysis and relate insights to established theories and current research if applicable. Avoid definitive claims about physiological explanations.
  • Offer clinical implications, and suggest future research directions.

5. Additional Items

  • Thank specific assistants for writing support only. No patient acknowledgments.
  • References should directly support any key claims or quotes included.
  • Use tables/figures/images only if substantially informative. Include permissions and legends/explanatory notes.

Strengths

  • Provides detailed (rich qualitative) information.
  • Provides insight for further research.
  • Permits investigation of otherwise impractical (or unethical) situations.

Case studies allow a researcher to investigate a topic in far more detail than might be possible if they were trying to deal with a large number of research participants (nomothetic approach) with the aim of ‘averaging’.

Because of their in-depth, multi-sided approach, case studies often shed light on aspects of human thinking and behavior that would be unethical or impractical to study in other ways.

Research that only looks into the measurable aspects of human behavior is not likely to give us insights into the subjective dimension of experience, which is important to psychoanalytic and humanistic psychologists.

Case studies are often used in exploratory research. They can help us generate new ideas (that might be tested by other methods). They are an important way of illustrating theories and can help show how different aspects of a person’s life are related to each other.

The method is, therefore, important for psychologists who adopt a holistic point of view (i.e., humanistic psychologists ).

Limitations

  • Lacking scientific rigor and providing little basis for generalization of results to the wider population.
  • Researchers’ own subjective feelings may influence the case study (researcher bias).
  • Difficult to replicate.
  • Time-consuming and expensive.
  • The sheer volume of qualitative data, combined with time and resource constraints, can limit the depth of analysis that is possible.

Because a case study deals with only one person/event/group, we can never be sure if the case study investigated is representative of the wider body of “similar” instances. This means the conclusions drawn from a particular case may not be transferable to other settings.

Because case studies are based on the analysis of qualitative (i.e., descriptive) data , a lot depends on the psychologist’s interpretation of the information she has acquired.

This means that there is a lot of scope for observer bias, and it could be that the subjective opinions of the psychologist intrude in the assessment of what the data means.

For example, Freud has been criticized for producing case studies in which the information was sometimes distorted to fit particular behavioral theories (e.g., Little Hans ).

This is also true of Money’s interpretation of the Bruce/Brenda case study (Diamond, 1997) when he ignored evidence that went against his theory.

References

Breuer, J., & Freud, S. (1895). Studies on hysteria. Standard Edition 2: London.

Curtiss, S. (1981). Genie: The case of a modern wild child.

Diamond, M., & Sigmundson, K. (1997). Sex reassignment at birth: Long-term review and clinical implications. Archives of Pediatrics & Adolescent Medicine, 151(3), 298-304.

Freud, S. (1909a). Analysis of a phobia in a five-year-old boy. In The Pelican Freud Library (1977), Vol. 8, Case Histories 1, pp. 169-306.

Freud, S. (1909b). Bemerkungen über einen Fall von Zwangsneurose (Der “Rattenmann”). Jb. psychoanal. psychopathol. Forsch., I, pp. 357-421; GW, VII, pp. 379-463; Notes upon a case of obsessional neurosis, SE, 10: 151-318.

Harlow, J. M. (1848). Passage of an iron rod through the head. Boston Medical and Surgical Journal, 39, 389-393.

Harlow, J. M. (1868). Recovery from the passage of an iron bar through the head. Publications of the Massachusetts Medical Society, 2(3), 327-347.

Money, J., & Ehrhardt, A. A. (1972). Man & Woman, Boy & Girl: The differentiation and dimorphism of gender identity from conception to maturity. Baltimore, MD: Johns Hopkins University Press.

Money, J., & Tucker, P. (1975). Sexual signatures: On being a man or a woman.

Further Information

  • Case Study Approach
  • Case Study Method
  • Enhancing the Quality of Case Studies in Health Services Research
  • “We do things together” A case study of “couplehood” in dementia
  • Using mixed methods for evaluating an integrative approach to cancer care: a case study

How to Write Data Analysis Reports

Reports on data analysis are essential for communicating data-driven insights to decision-makers, stakeholders, and other relevant parties. These reports provide an organized format for presenting the conclusions, analyses, and recommendations derived from a data set.

In this guide, we will learn how to make an interactive Data Analysis Report.


What is a Data Analysis Report?

A data analysis report is a comprehensive document that presents the findings, insights, and interpretations derived from analyzing a dataset or datasets. It serves as a means of communicating the results of a data analysis process to stakeholders, decision-makers, or other interested parties.

These reports are crucial in various fields such as business, science, healthcare, finance, and government, where data-driven decision-making is essential. It combines quantitative and qualitative data to evaluate past performance, understand current trends, and make informed recommendations for the future. Think of it as a translator, taking the language of numbers and transforming it into a clear and concise story that guides decision-making.

Why is Data Analysis Reporting Important?

Data analysis reporting is critical for various reasons:

  • Making Decisions: Data analysis reports give decision-makers insightful information that helps them make well-informed choices. By summarizing and analyzing data, these reports help stakeholders understand trends, patterns, and linkages that can guide strategic planning and decision-making.
  • Performance Evaluation: Organizations use data analysis reports to assess how well procedures, products, or services are working. Through the examination of relevant metrics and key performance indicators (KPIs), enterprises can pinpoint opportunities for improvement and maximize productivity.
  • Risk Management: Data analysis reports can be used to detect potential dangers, difficulties, or opportunities within a company. Businesses can reduce risks and take advantage of new possibilities by examining past data and predicting future patterns.
  • Communication and Transparency: By providing a concise and impartial summary of findings, data analysis reports promote communication and transparency within enterprises. These reports help stakeholders understand complicated data insights so they can cooperate effectively to solve problems and accomplish goals.

How to Write a Data Analysis Report?

Writing a data analysis report comprises many critical processes, each of which adds to the clarity, coherence, and effectiveness of the final product. Let’s discuss each stage:

1. Map Your Report with an Outline

Creating a well-structured outline is like drawing a roadmap for your report. It acts as a guide for organizing your thoughts and content logically. Begin by identifying the key sections of the report, such as the introduction, methodology, findings, analysis, conclusions, and recommendations. Within each section, break down the specific points or subtopics you want to address. This step-by-step approach not only streamlines the writing process but also ensures that you cover all essential elements of your analysis. Moreover, an outline helps you maintain focus and prevents you from veering off track, ensuring that your report remains coherent and easy to follow for your audience.

2. Prioritize Key Performance Indicators (KPIs)

In a data analysis report, it’s crucial to prioritize the most relevant Key Performance Indicators (KPIs) to avoid overwhelming your audience with unnecessary information. Start by identifying the KPIs that directly impact your business objectives and overall performance. These could include metrics like revenue growth, customer retention rates, conversion rates, or website traffic. By focusing on these key metrics, you give your audience actionable insights that drive strategic decision-making. Additionally, consider contextualizing these KPIs within your industry or market landscape to provide a comprehensive understanding of your performance relative to competitors or benchmarks.
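The KPIs named above can be computed directly from raw figures before they ever reach the report. The sketch below is illustrative only: the metric definitions are common conventions, and the visitor and revenue numbers are made up for the example.

```python
# Illustrative sketch: computing two common KPIs from raw figures.
# All input values below are hypothetical.

def conversion_rate(visitors, purchasers):
    """Share of visitors who completed a purchase."""
    return purchasers / visitors if visitors else 0.0

def revenue_growth(previous, current):
    """Period-over-period revenue growth as a fraction."""
    return (current - previous) / previous

# Hypothetical monthly figures
visitors, purchasers = 12_000, 540
rev_last, rev_this = 85_000.0, 92_650.0

kpis = {
    "conversion_rate": round(conversion_rate(visitors, purchasers), 4),
    "revenue_growth": round(revenue_growth(rev_last, rev_this), 4),
}
print(kpis)  # {'conversion_rate': 0.045, 'revenue_growth': 0.09}
```

Reporting the computed ratios, rather than the raw counts, keeps the reader focused on the small set of metrics that matter.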

3. Visualize Data with Impact

Data visualization plays a pivotal role in conveying complex information in a clear and engaging manner. When selecting visualization tools, consider the nature of the data and the story you want to tell. For instance, if you’re illustrating historical trends, timelines or line graphs can effectively showcase patterns over time. On the other hand, if you’re comparing categorical data, pie charts or bar graphs might be more suitable. The key is to choose visualization methods that accurately represent your findings and facilitate comprehension for your audience. Additionally, pay attention to design principles such as color contrast, labeling, and scale to ensure that your visuals are both informative and visually appealing.
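The chart-selection guidance above can be captured as a small lookup. This is a hedged sketch of one reasonable convention, not a definitive rule set; the category names are assumptions made for the example.

```python
# Minimal sketch of the chart-selection logic described above:
# time series -> line chart, categorical comparison -> bar chart,
# part-to-whole -> pie chart, distribution -> histogram.

def suggest_chart(data_kind: str) -> str:
    """Map a rough description of the data to a suitable chart type."""
    mapping = {
        "time_series": "line chart",
        "categorical_comparison": "bar chart",
        "part_to_whole": "pie chart",
        "distribution": "histogram",
    }
    # Fall back to a plain table when no chart clearly fits.
    return mapping.get(data_kind, "table")

print(suggest_chart("time_series"))  # line chart
print(suggest_chart("survey_text"))  # table
```

Encoding the choice this way makes the reporting team's conventions explicit and easy to revise as new data types appear.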

4. Craft a Compelling Data Narrative

Transforming your data into a compelling narrative is essential for engaging your audience and highlighting key insights. Instead of presenting raw data, strive to tell a story that contextualizes the numbers and unveils their significance.

Start by identifying specific events or trends in the data and explore the underlying reasons behind them. For example, if you notice a sudden spike in sales, investigate the marketing campaign or external factors that may have contributed to this increase. By weaving these insights into a cohesive narrative, you can guide your audience through your analysis and make your findings more memorable and impactful. Remember to keep your language clear and concise, avoiding jargon or technical terms that may confuse your audience.
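Spotting the kind of spike worth narrating can be automated. The sketch below uses one simple convention (a value more than two standard deviations above the mean of the preceding points); both the rule and the sales figures are assumptions made for illustration.

```python
# Hedged sketch: flagging a sudden spike in monthly sales so it can be
# investigated and woven into the data narrative.
from statistics import mean, stdev

def find_spikes(values, k=2.0):
    """Return indices whose value exceeds mean + k*stdev of the earlier points."""
    spikes = []
    for i in range(2, len(values)):  # need at least 2 prior points for stdev
        prior = values[:i]
        if values[i] > mean(prior) + k * stdev(prior):
            spikes.append(i)
    return spikes

monthly_sales = [100, 104, 98, 102, 101, 160, 103]
print(find_spikes(monthly_sales))  # [5] -- the month with 160
```

Each flagged index is a prompt for the analyst: what campaign, seasonality, or external event explains that month?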

5. Organize for Clarity

Establishing a clear information hierarchy is essential for ensuring that your report is easy to navigate and understand. Start by outlining the main points or sections of your report and consider the logical flow of information. Typically, it’s best to start with broader, more general information and gradually delve into specifics as needed. This approach helps orient your audience and provides them with a framework for understanding the rest of the report.

Additionally, use headings, subheadings, and bullet points to break up dense text and make your content more scannable. By organizing your report for clarity, you can enhance comprehension and ensure that your audience grasps the key takeaways of your analysis.

6. Summarize Key Findings

A concise summary at the beginning of your report serves as a roadmap for your audience, providing them with a quick overview of the report’s objectives and key findings. However, it’s important to write this summary after completing the report, as it requires a comprehensive understanding of the data and analysis.

To create an effective summary, distill the main points of the report into a few succinct paragraphs. Focus on highlighting the most significant insights and outcomes, avoiding unnecessary details or technical language. Consider the needs of your audience and tailor the summary to address their interests and priorities. By providing a clear and concise summary upfront, you set the stage for the rest of the report and help busy readers grasp the essence of your analysis quickly.

7. Offer Actionable Recommendations

Effective communication of data analysis findings goes beyond simply reporting the numbers; it involves providing actionable recommendations that drive decision-making and facilitate improvements. When offering recommendations, remain objective and avoid assigning blame for any negative outcomes. Instead, focus on identifying solutions and suggesting practical steps for addressing challenges or leveraging opportunities.

Consider the implications of your findings for the broader business strategy and provide specific guidance on how to implement changes or initiatives. Moreover, prioritize recommendations that are realistic, achievable, and aligned with the organization’s goals and resources. By offering actionable recommendations, you demonstrate the value of your analysis and empower stakeholders to take proactive steps towards improvement.

8. Leverage Interactive Dashboards for Enhanced Presentation

The presentation format of the report is as crucial as its content, as it directly impacts the effectiveness of your communication and engagement with your audience. Interactive dashboards offer a dynamic and visually appealing way to present data, allowing users to explore and interact with the information in real-time.

When selecting a reporting tool, prioritize those that offer customizable dashboards with interactive features such as filters, drill-downs, and hover-over tooltips. These functionalities enable users to customize their viewing experience and extract insights tailored to their specific needs and interests. Moreover, look for reporting tools that support automatic data updates, ensuring that your dashboards always reflect the most current information. By leveraging interactive dashboards for enhanced presentation, you create a more engaging and immersive experience for your audience, fostering deeper understanding and retention of your analysis.
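Under the hood, the filter interaction a dashboard offers amounts to subsetting records and recomputing a summary on demand. The toy sketch below shows that idea in plain Python; the field names and records are hypothetical, and a real dashboard tool would add the interactive UI on top.

```python
# Toy sketch of a dashboard "filter" interaction: subset the records
# by user-chosen field values, then recompute the summary metric.

records = [
    {"region": "North", "revenue": 1200},
    {"region": "South", "revenue": 800},
    {"region": "North", "revenue": 1500},
]

def filtered_total(rows, **filters):
    """Sum revenue over rows matching every key=value filter."""
    matches = [r for r in rows
               if all(r.get(k) == v for k, v in filters.items())]
    return sum(r["revenue"] for r in matches)

print(filtered_total(records, region="North"))  # 2700
print(filtered_total(records))                  # 3500 (no filter applied)
```

A drill-down or tooltip is the same pattern with a narrower filter; the value of a dashboard tool is re-running this computation instantly as the user changes the selection.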

Best Practices for Writing Data Analysis Reports

  • Understand Your Audience: It’s important to know who will be reading the report before you start writing. Make sure that the language, substance, and degree of technical information are appropriate for the audience’s expertise and interests.
  • Clarity and Precision: Communicate your results in a clear, succinct manner by using appropriate terminology. Steer clear of technical phrases and jargon that stakeholders who aren’t technical may not understand. Clarify terminology and ideas as needed to maintain understanding.
  • Stay Objective: Avoid bias or subjective interpretation when presenting your analysis and results. Let the data speak for itself and back up your findings with facts.
  • Focus on Key Insights: Summarize the most important conclusions and revelations that came from the examination of the data. Sort material into categories according to the audience’s relevancy and significance.
  • Provide Context: Put your analysis in perspective by outlining the importance of the data, how it relates to the larger aims or objectives, and any relevant prior knowledge. Assist the reader in realizing the significance of the analysis.
  • Use Visuals Wisely: Employ graphs, charts, and other visualizations to highlight important patterns, correlations, and trends in the data. Select visual forms that make sense for the kind of data and the point you’re trying to make. Make sure the images are simple to understand, educational, and clear.

Conclusion – How to Write Data Analysis Reports

It takes a combination of analytical abilities, attention to detail, and clear communication to write data analysis reports. These guidelines and best practices will help you produce reports that effectively convey insights and facilitate well-informed decision-making. Keep in mind to adjust your strategy to the demands of your audience and to remain impartial and transparent at all times. You can become skilled at creating compelling data analysis reports that benefit your company with practice and improvement.


Anticipating impacts: using large-scale scenario-writing to explore diverse implications of generative AI in the news environment

  • Original Research
  • Open access
  • Published: 27 May 2024


  • Kimon Kieslich   ORCID: orcid.org/0000-0002-6305-2997 1 ,
  • Nicholas Diakopoulos   ORCID: orcid.org/0000-0001-5005-6123 2 &
  • Natali Helberger   ORCID: orcid.org/0000-0003-1652-0580 1  

The tremendous rise of generative AI has reached every part of society—including the news environment. There are many concerns about the individual and societal impact of the increasing use of generative AI, including issues such as disinformation and misinformation, discrimination, and the promotion of social tensions. However, research on anticipating the impact of generative AI is still in its infancy and mostly limited to the views of technology developers and/or researchers. In this paper, we aim to broaden the perspective and capture the expectations of three stakeholder groups (news consumers; technology developers; content creators) about the potential negative impacts of generative AI, as well as mitigation strategies to address these. Methodologically, we apply scenario-writing and use participatory foresight in the context of a survey (n = 119) to delve into cognitively diverse imaginations of the future. We qualitatively analyze the scenarios using thematic analysis to systematically map potential impacts of generative AI on the news environment, potential mitigation strategies, and the role of stakeholders in causing and mitigating these impacts. In addition, we measure respondents' opinions on a specific mitigation strategy, namely transparency obligations as suggested in Article 52 of the draft EU AI Act. We compare the results across different stakeholder groups and elaborate on different expected impacts across these groups. We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.


1 Introduction

Whether overhyped or truly transformative, the growth of generative AI over the last years has been palpable as it washes over a range of domains from entertainment and law to marketing and news media. The technology leverages a powerful approach to training models from vast quantities of data that can then be prompted to create new pieces of media, whether that’s images, text, video, audio, 3D content, and so on, or extract bits of information from input media. The capabilities (and limitations) of generative AI are forcing broad rethinking on how information and media are created and how knowledge work itself is done.

Of particular interest in this work is how generative AI is increasingly used by news organizations and impacts the media environment. News organizations have already started experimenting with it, for example for summarizing content [ 58 ], supporting article writing [ 55 ], or moderating online content [ 30 , 33 ]. According to a survey conducted among news and media organizations in 2023, 85 percent of the respondents indicated they have experimented with generative AI [ 6 ]. The potential for the deployment of generative AI and its use in newsrooms is likely to continue to rise sharply, making discussions of the adverse consequences of large-scale adoption of generative AI, such as job losses, the spread of disinformation through deepfakes [ 60 ], accuracy issues (e.g., false source attributions) [ 22 ] or an increased offensive use of AI for manipulation or cyberattacks [ 49 ], essential. News media companies face the difficult task of navigating the possibilities and limitations of generative AI while maintaining their market position and upholding journalistic quality standards. In light of these potentially detrimental consequences of generative AI use [ 22 , 49 , 60 ], a systematic assessment of the impacts is a necessary element of any AI strategy. To account for the technological dynamics and societal complexities, such an impact assessment cannot be limited to analyzing the status quo but must be able to anticipate plausible future impacts as well.

The limitations and negative impacts of generative AI aren’t only an issue for the journalistic field though. Political and legal decision-makers have started to work on regulatory approaches to govern the impacts of AI systems in general, including the use of generative AI in the media sector. Recent regulatory initiatives such as the European Union’s draft AI Act or the Digital Services Act adopt a risk-based approach, thereby relying heavily on the ability of policy makers, regulators and regulated parties to be able to anticipate risks to fundamental rights and society [ 24 ]. The repeated references to “reasonably foreseeable risks” in the AI Act are a case in point, as are calls by the AI ethics community for more anticipatory studies on AI [ 17 , 51 ].

The responsible development, deployment and governance of AI requires new approaches to prospective research, i.e., research that can anticipate how AI will plausibly develop, what the ethical and societal consequences of this development might be, and how this can be proactively addressed by various stakeholders [ 12 , 50 , 72 ]. However, anticipating impact is a difficult task as there are an inevitable number of uncertainties that technology development and future prospection bring [ 13 , 50 , 53 ]. We do not know for certain how technology will develop, how consumers will use it, and what impacts it will have on society in light of potentially complex dynamics and feedback effects. For research in this area, the task is to make estimated projections of plausible future developments that justify AI’s implementation in practice [ 50 ].

To address these needs, in this paper we develop and refine an approach to study the potential impacts (and their mitigation) of the use of generative AI in the media environment. In particular, we utilize a scenario-writing method in an online survey among EU-member-state residents with a variety of different stakeholders of generative AI technology ( n  = 119), reflecting the roles and expertise sets of broad sub-groups of content creators, technology developers, and news consumers to collect cognitively diverse expectations of generative AI’s impact on the news environment. Furthermore, we also gauge respondents' ideas about mitigating the outlined harms as well as their opinion on the effectiveness of a specific policy proposal, namely transparency obligations as proposed in the EU AI Act.

In applying qualitative thematic analysis to the scenarios and the additional questions, we demonstrate how a scenario-based method can be a promising approach to conducting impact assessments, particularly due to its ability to create vivid projections and engage a diversity of perspectives. We highlight the alternating perspectives that different stakeholder groups bring in, stressing the need to ensure cognitive diversity in AI impact assessment. Further, we not only systematically map potential negative impacts, but also leverage the diversely sampled survey responses to help illuminate mitigation strategies to counter adverse consequences of generative AI. As such, our study contributes to the scientific literature on assessing AI impact, but also contains practical implications for policy makers or technology developers who are tasked with mitigating harms of such systems.

2 Related work

In this work, we address the aforementioned need for approaches to anticipate AI impacts. In the following subsections we examine the related work on anticipatory governance which motivates our study, the range of existing AI impact assessment approaches, the need for participatory approaches, and the background on the use of scenario-writing as the specific approach we leverage.

2.1 Anticipatory governance

The anticipation of the impact of technologies has been studied under the theoretical approach of anticipatory governance, which examines social impacts of technology in the early development phase [ 21 , 29 , 34 ]. One aim of anticipatory governance approaches is to help mitigate the uncertainty that inevitably surrounds emerging technologies [ 13 ]. In doing so, anticipatory approaches can illuminate the positive and negative aspects of the technology at hand [ 13 ] and are as such deeply connected to normative values concerning how society wants to deal with the respective technology [ 50 ]. Anticipatory approaches aim to reach a point where technology development acknowledges potential risks and mitigates them beforehand [ 34 , 63 ]. Furthermore, anticipatory governance recognizes that the future cannot be predicted per se and thus aims to show possible future scenarios [ 56 ].

Methodologically, anticipatory governance deals with navigating the choice between different policy options and enabling the formation of a joint vision of how society wants to engage with technology [ 34 ]. Practically, anticipating the impact of emerging technology should be initiated when a technology “is sufficiently developed for meaningful discourse to be possible about the nature of the technology and its initial uses, but where there is still uncertainty about its future implications” [ 50 ]. The deliberation process is not about debating specific future events, but about discussing how future scenarios align with (public) values. This constitutes a translation process in which underlying values are articulated as reactions to exemplary scenarios that, in turn, stand in for plausible futures and can subsequently inform governance processes [ 50 ]. In making socio-technical consequences salient and agreeing on shared values for how the future ought to be, anticipatory governance enables empirical and value-oriented decision-making for stakeholders [ 34 , 50 ].

2.2 AI impact assessment

In a similar vein, scholars in the field of AI have proposed AI impact assessment to identify and anticipate the impact of AI technology [ 3 , 52 ]. Or, in the words of Selbst [ 63 ]: “An Algorithmic Impact Assessment is a process in which the developer of an algorithmic system aims to anticipate, test, and investigate potential harms of the system before implementation; document those findings; and then either publicize them or report them to a regulator”. In recent years, numerous proposals on how to assess AI impact have been published (for an overview, see [ 68 ]). Impact assessments are particularly needed for novel technologies, where societal consequences have yet to be determined or are generally hard to measure [ 63 ]. That also means that impact assessments aim to identify risks that are not purely technical but rather related to the sociotechnical nature of AI [ 52 ]. In conducting impact assessments, the approach can potentially detect “errors that would otherwise arise at unpredictable times and characterize performance in the long-tail of errors that is currently opaque” [ 3 ] (p. 134). Impact assessments refer to impacts that might affect individuals or groups of society [ 48 , 52 ]. In understanding AI as a sociotechnical system, impact classification should not be limited to the technical components of AI systems, but should include the interplay between humans, society, and machines.

Existing AI impact assessment studies apply various methods to identify and categorize these impacts. One way to approach this is by conducting a literature review of existing studies focusing on AI impacts. Shelby et al. [ 65 ] performed a scoping review of the computer science literature to identify five groups of AI harms (representational, allocation, quality of service, interpersonal, social system). Hoffman and Frase [ 37 ], who distinguish between tangible and intangible harms, issues and events, and AI and non-AI harms, develop their impact framework based on a review of AI incident reports and subsequent discussions with stakeholder organizations. Explicitly analyzing LLMs, Weidinger et al. [ 71 ] performed a multidisciplinary literature review of scientific papers, civil society reports and newspaper articles to identify 21 risks that can be grouped into six categories: discrimination, exclusion and toxicity (e.g., representational harm), information hazards (e.g., data leaks), misinformation harms (e.g., disseminating misinformation), malicious uses (e.g., users actively engaging in harmful activity), human–computer interaction harms (e.g., manipulation), and automation, access, and environmental harms (e.g., environmental costs). In regard to text-to-image (TTI) technology (e.g., Dall-E, Midjourney), Bird et al. [ 8 ], relying on a literature review, distinguish between three broad risk categories (discrimination and exclusion, harmful misuse, misinformation and disinformation) and six stakeholder groups (system developers, data sources, data subjects, users, affected parties, and regulators) that together form a framework for analyzing harms of TTI use.

Another way to apply impact assessment is to let users decide on the scenarios and impacts they want to focus on and provide them with a tool that helps them in mapping potential impacts. The AHA! (Anticipating Harms of AI) framework of Buçinca et al. [ 14 ] serves as a toolbox to create negative AI impact scenarios for different stakeholder groups. It requires the input of a user (e.g., developer) to first describe a deployment scenario and definition of potential risks to then systematically map out potential stakeholders and concrete impacts.

Another approach commonly taken in AI impact assessment is to collect the opinions of experts, sometimes referred to as the Delphi method. For instance, Solaiman et al. [ 67 ] conducted workshops with experts from different backgrounds, including researchers, government and civil society stakeholders as well as industry experts. As a result, they list trustworthiness and autonomy; inequality, marginalization and violence; concentration of authority; labor and creativity; and ecosystem and environment as potential evaluation categories for generative AI's impact on society and people.

However, all of these outlined approaches are top-down in the sense that they rely on domain experts or scientific viewpoints, a practice that has been acknowledged as potentially limiting the range of perspectives considered [ 10 ]. What is currently missing are approaches that engage individual members of society and invite them to reflect on impacts without presenting them with a potential impact framework beforehand. In our study, using scenario-writing in a participatory foresight approach, we therefore pursue a bottom-up approach to help enrich the field of AI impact assessments.

2.3 Participatory foresight

In considering an anticipatory governance approach and in light of the prior work on impact assessment it is crucial to ask: whose prospections should guide anticipatory governance? An approach to leverage cognitive diversity in developing prospections is participatory foresight [ 12 , 54 ]. The idea of participatory foresight is to engage a diverse set of participants in anticipating future impact as studies have shown that expert foresight is prone to be biased [ 5 , 10 , 12 ]. Or, in the words of Metcalf and colleagues [ 48 ]: “The tools developed to identify and evaluate impacts will shape what harms are detected. Like all research questions, what is uncovered is a function of what is asked, and what is asked is a function of who is doing the work of asking." (p. 743) By establishing a deliberative social dialogue, a discussion about desirable futures can evolve that depict alternative visions of the future [ 54 ]. This also includes the perspectives of laypersons as they “bring the specialists’ knowledge down to earth and foresee its possible side-effects in everyday life. Thus they would illustrate the whole complexity of the social and cultural consequences, caused by the triumph of advanced techno-knowledge” [ 54 ]. In this way potential blind spots can be filled and pluralistic visions of the future can emerge that reflect the realities and circumstances of all involved—and not only one dominating group. Thus, participatory approaches can be seen as a way to democratize AI impact assessment [ 62 ].

Ultimately, participatory foresight aims to establish pluralistic perspectives from cognitively diverse individuals with different backgrounds, experiences, and expertise. This results in a far more nuanced and relatable picture of future AI impact. We stress that impacts of generative AI are defined not only in terms of technological issues, but also societal consequences like social inequalities, and human and infrastructural social impacts [ 67 ].

But how can society be included in anticipatory studies? In this work we develop an approach based on scenario-writing methods, which is outlined in the next section.

2.4 Scenario-writing

Scenario-writing can be defined as a method of anticipatory thinking, which can be used to outline a variety of different futures [ 2 , 3 , 11 , 15 , 64 ]. Scenarios describe in a creative way (e.g., through stories or motion pictures) future developments that are based on plausible and logical trajectories [ 2 , 11 , 32 ]. Importantly, scenario-writing enables practitioners and researchers to envision a broad scope of future alternatives [ 2 , 61 ]. As such, scenario-writing explicitly does not try to predict future events (as that is deemed impossible), but acknowledges the uncertain character of the future and instead focuses on plausibility [ 57 ]. Further, scenarios can encompass a holistic, sociotechnical view that illuminates the relationship between different actors. Scenarios are placed in a real-world environment [ 2 ] and help “to reduce the overabundance of available knowledge to the most critical elements, and then blend combinations of those elements to create possible futures.” [ 15 ] (p. 49). Moreover, scenarios are especially suitable for identifying novel issues and impacts [ 2 ].

Börjeson et al. [ 11 ] distinguish between three categories of scenarios, namely predictive ( what will happen ?), explorative ( what can happen ?) and normative ( how can a specific target be reached ?) scenarios. In regard to anticipatory governance and AI impact assessment, explorative scenarios are most suitable for extrapolating future prospections from a starting point in the present and can serve as a “framework for the development and assessment of policies and strategies” [ 11 ] (p. 727). Burnam-Fink [ 15 ] highlights that narratives are a common technique to make scenarios accessible to a broader audience. Thereby, ‘good’ scenarios, i.e. scenarios that are trusted and taken seriously by the audience (e.g., decision-makers), are characterized by a plausible story with compelling characters that makes future developments easy to understand [ 64 ]. This method has proven fruitful: Diakopoulos and Johnson, for example, used scenario-writing to explore the potential impact of deepfakes on the US presidential elections in 2020 [ 21 ]. Moreover, Meßmer and Degeling [ 47 ], in the use case of auditing recommender systems, discuss scenario definition as a crucial step to anticipate systemic risks.

3.1 Procedure and measurement

We conducted an online survey with an integrated scenario-writing exercise in which we recruited targeted sub-samples of EU residents, content creators, and technology developers (total N  = 156; n  = 52 participants for each group) living in member states of the European Union (EU). Crowdsourcing has proven to be a valuable method to identify diverse social impacts of AI systems [ 5 ], and the targeted sub-samples are intended to draw in different types of expertise and experience. Scenario-writing is an established method as well, which is used to capture prospections about the future [ 21 ] and can be implemented in surveys [ 11 ].

After receiving institutional ethics approval for our study, we created an online survey using the survey tool Qualtrics to assess the scenarios. The questionnaire was constructed in English and structured as follows: First, respondents were introduced to the study’s objective and were informed about data collection and storage. After giving their informed consent to participate in the survey, respondents had to indicate some demographic information (gender, age, educational level, ethnicity, employment sector) as well as some information about AI-related attitudes. Following this, participants were introduced to the scenario-writing exercise. Figure  1 shows the information we presented. Note that the survey instructions were the same for each respondent group, except for the main characters that respondents were asked to imagine: news consumers, content creators, and technology developers were each tasked to write from the perspective of a main character matching their own role and expertise. Following that, we offered participants some additional information on generative AI, outlining some of its capabilities (tasks generative AI can fulfill; media formats; examples), limitations (accuracy; attribution; biases) and trends. The full description of the technological information we gave to the respondents can be found in the “ Appendix ”.

Fig. 1: Task description

In a last introduction step to the task, we provided respondents with information about the evaluation criteria for the scenarios. We highlighted that scenarios should be creative, specific, believable, and plausible, as suggested by Diakopoulos and Johnson [ 21 ]. In addition to their base pay for completing the survey, we offered a bonus payment of £2 for the top 10% of scenarios as assessed by the authors according to these quality criteria. We asked respondents to confirm that they understood the presented information and measured the time that respondents engaged with the introductory material.

On the following survey page, respondents typed in their scenario. To make the instructions clear, we repeated the instructions shown in Fig.  1 on the page. Additionally, respondents had the opportunity to click the “back” button and read the information about the technology and the evaluation criteria again. To discourage people from using generative AI tools to complete the task for them (a growing issue for certain kinds of online tasks [ 70 ]), we disabled pasting text into the open text field, so respondents needed to type the story directly into the text box. Additionally, to ensure adequate length, we set a minimum of 1,000 characters. Participants could choose to write their scenario in English or their native language.

To gather respondents' thoughts about impact mitigation, we asked participants the following after they submitted their scenarios: What could be done to mitigate the risks outlined in the scenario? Please also indicate who you think should be responsible for mitigating the harm. This could be the characters in your story, but it could also be other people/organizations. Please write at least 50 words.

Additionally, we gathered respondents’ ideas about the potential influence of a policy approach that could be introduced to mitigate negative impacts, namely transparency obligations. We based this mitigation strategy on Article 52 of the draft EU AI Act [ 26 ]. We asked: Organization/persons that use generative AI systems have to fulfill the following legal obligation: Transparency/Disclosure: The use of generative AI must be made transparent, i.e. it must be clearly and visibly disclosed that generative AI was used in the creation of content. This can be done, for example through labeling. With the information provided, would the transparency requirement change your scenario? Please write at least 50 words.

Lastly, respondents were asked to evaluate the scenario-writing task with regard to difficulties in the writing process and the comprehensiveness of the introductory material. Afterwards, respondents were thanked and debriefed. We paid £11 for participation in the study, based on an estimate of the expected completion time and a reasonable wage. Footnote 1

3.2 Pre-test

To pilot our method and get feedback on the scenario task we conducted two in-person workshops with more than 30 participants. Based on workshop feedback we fine-tuned our research instrument in terms of task clarity as well as adjusting the content and the framing of the information presented. We then set up the online survey.

We pre-tested the online survey with 30 participants (news consumer sample) utilizing the survey panel provider Prolific. Data collection was conducted on July 21, 2023 with participants from one EU country (the Netherlands) and with a balanced sex distribution. Based on a close reading of the scenarios as well as the feedback of the respondents, we substantially revised the instructions for the scenarios. In particular, we decided to specify the role of the main character for each group (news consumer, content creator, technology developer), as respondents indicated having trouble imagining characters they were not familiar with. Moreover, we restructured the presentation of the introductory material to make it more accessible. The changes resulted in the task as presented in Fig.  1 .

3.3 Study sample

In deploying the refined version of our survey, we collected 156 scenarios via Prolific, 52 for each stakeholder group, between August 3 and September 10, 2023. For news consumers, we chose to sample EU residents, as a large majority of people in the EU consume news information Footnote 2 and even those who do not formally consume it are still exposed to related information in the media environment. Content creators and technology developers were specifically selected using Prolific's sampling criteria. Content creators were defined as people who indicated at least one of their (current or former) employment roles as journalist , copywriter/marketing/communications role , and/or creative writing . For the survey of technology developers, we used the employment sector “ information technology ” provided by the survey platform as a selection criterion.

We decided to survey residents of all EU states since the EU, with the EU AI Act [ 24 ], on the one hand offers a joint approach to the development and use of AI and, on the other hand, still leaves room for diversity given the cultural, political and socio-economic differences of the EU member states. Thus, we aimed to recruit a variety of people living in a broad range of EU member states. To ensure diversity of EU member states, we grouped all EU countries into three groups based on the annual average salaries in the respective countries [ 27 ] and surveyed 17 respondents Footnote 3 per country group within each stakeholder group. Footnote 4 We chose this indicator because a soft launch (conducting the survey with a small subset of respondents to test its functioning) showed that a disproportionate number of respondents from low-income countries participated in the survey. This is plausible given that we paid a fixed remuneration for taking part in the survey despite income differences across EU member states that make that remuneration comparatively more or less valuable. To promote gender diversity, we used Prolific’s built-in feature to help balance the gender distribution of the sample for each sub-group. Footnote 5
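The country-tier quota described above can be expressed as a small sketch. This is an illustrative Python fragment only: the tier assignments for the specific countries listed are hypothetical assumptions for demonstration, not the authors' actual salary-based grouping.

```python
# Sketch of the stratified quota logic: accept respondents only while their
# country's salary tier is under quota (17 per tier, per stakeholder group).
from collections import Counter

# Hypothetical tier assignments (illustrative, not the paper's actual grouping)
SALARY_TIER = {
    "Luxembourg": "high", "Denmark": "high", "Ireland": "high",
    "Germany": "mid", "France": "mid", "Spain": "mid",
    "Bulgaria": "low", "Romania": "low", "Hungary": "low",
}

QUOTA_PER_TIER = 17  # respondents per salary tier, per stakeholder group

def accept(respondent_country: str, tier_counts: Counter) -> bool:
    """Accept a respondent only while their salary tier is under quota."""
    tier = SALARY_TIER.get(respondent_country)
    if tier is None or tier_counts[tier] >= QUOTA_PER_TIER:
        return False
    tier_counts[tier] += 1
    return True

counts = Counter()
for country in ["Germany", "Romania", "Denmark"]:
    accept(country, counts)
# Three tiers of 17 respondents each fill the per-group quota described above.
```

This quota-per-stratum pattern counters the over-representation from low-income countries that the soft launch revealed.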

Further, we only invited respondents who indicated they speak English fluently. As in the pretest, the survey was conducted in English. However, participants had the opportunity to write their scenarios in their native language, as we expected higher-quality scenarios if participants felt comfortable with the language. We translated all scenarios using DeepL, a high-performing machine translation product. Automated translation has recently been established as a viable method in communication research [ 19 , 45 ] and we reason that any minor translation issues would not threaten the validity of our subsequent thematic analysis.

3.4 Data filtering

For data cleaning, we first identified scenarios that were thematically out of the scope of the task, i.e. that did not refer to the news environment or generative AI technology. Following this procedure, we removed twelve scenarios from the news consumer group, four from the content creator group and four from the technology developer group.

Second, although we took steps to prevent the use of LLMs in the scenario-writing exercise, we cannot completely rule out the possibility of respondents using them. Thus, we flagged scenarios using four criteria that could indicate such use and filtered out the cases flagged by at least two of them. First, we checked all scenarios for their likelihood of being written by an AI tool with GPTZero [ 69 ] Footnote 6 and flagged scenarios with a likelihood of being AI-generated of 50 percent or higher. Second, we flagged all scenarios that were written in under 20 min, as this was determined in our in-person workshops to be the minimum amount of time respondents needed to compose a reasonable scenario. Footnote 7 Third, we flagged scenarios in which respondents read the instructions in under two minutes; again, this time flag was based on our experiences at the pre-test workshops. Fourth, we manually flagged scenarios as potentially written by ChatGPT after reading them in detail. These manual flags were based on our own experiences prompting ChatGPT to write scenarios and the patterns its outputs exhibit; for example, when prompted with the instructions for our task, ChatGPT often generates output text with a very similar structure. Applying these four flags and filtering out scenarios with two or more of them eliminated 17 scenarios, leaving us with 119 scenarios as the final sample for analysis.
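The two-or-more-flags rule lends itself to a compact sketch. The following illustrative Python mirrors the thresholds reported above (50 percent GPTZero likelihood, 20 minutes writing time, 2 minutes reading time, plus a manual close-reading judgment); the field names and sample values are hypothetical.

```python
# Illustrative sketch of the two-or-more-flags filtering rule.
# Thresholds follow the text; per-scenario values are hypothetical inputs.

AI_LIKELIHOOD_THRESHOLD = 0.50   # GPTZero likelihood of being AI-generated
MIN_WRITING_MINUTES = 20         # minimum plausible scenario-writing time
MIN_READING_MINUTES = 2          # minimum plausible instruction-reading time

def flags(scenario: dict) -> int:
    """Count how many of the four LLM-use indicators a scenario triggers."""
    return sum([
        scenario["ai_likelihood"] >= AI_LIKELIHOOD_THRESHOLD,
        scenario["writing_minutes"] < MIN_WRITING_MINUTES,
        scenario["reading_minutes"] < MIN_READING_MINUTES,
        scenario["manual_llm_suspicion"],  # close-reading judgment by the authors
    ])

def keep(scenarios: list[dict]) -> list[dict]:
    """Retain scenarios flagged by fewer than two of the four criteria."""
    return [s for s in scenarios if flags(s) < 2]

sample = [
    {"ai_likelihood": 0.9, "writing_minutes": 12, "reading_minutes": 5,
     "manual_llm_suspicion": False},  # two flags -> removed
    {"ai_likelihood": 0.1, "writing_minutes": 35, "reading_minutes": 4,
     "manual_llm_suspicion": False},  # zero flags -> kept
]
# keep(sample) retains only the second scenario
```

Requiring two independent flags rather than one reduces the risk of excluding a genuine scenario on the basis of a single noisy signal.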

Table 1 describes the sample statistics for each sub-sample. The samples predominantly consisted of white and well-educated respondents, a limitation we return to in our discussion. Furthermore, men are slightly overrepresented in the content creator and technology developer samples. In addition, we managed to sample respondents from a broad range of countries.

3.5 Analysis methods

We analyzed the final set of scenarios using a qualitative thematic analysis approach including open and axial coding of themes. We iteratively applied open coding techniques to gather excerpts from the scenarios, structured and typologized them, and used constant comparison to reevaluate the emerging codes and discuss them among the authors [ 20 , 31 , 46 ]. We additionally wrote memos to structure our interpretation of the findings.

In detail, the lead and second author of this paper independently read and coded 30 scenarios (10 for each stakeholder group) and derived the first set of open codes and impact classifications. Both authors then compared and discussed their classification schemes and created an adapted, joint version that included the codes of both authors as well as the dimensions of “impact scope” and “agency”. Afterwards, the first author coded all remaining scenarios and extended the impact categorization scheme with new, emerging codes (impact themes and specific impacts). All quotes used for identifying a code were collected in a structured document containing all impact themes and specific impacts. After coding all scenarios, the author team discussed and edited the categorization scheme. All authors read all excerpts and quotes derived from the scenarios and discussed the impact classification scheme and the mitigation strategies classification scheme. After three extensive, consecutive meetings, agreement on the classification schemes was reached.

We structure our analysis along two themes. First, we elaborate on the impacts enumerated in the scenarios. Second, we outline the mitigation strategies addressed by the respondents, and additionally discuss the effectiveness of a specific mitigation strategy to counter the negative impacts of the scenarios: transparency obligations. In each section, we compare the codes of the different stakeholder groups and highlight similarities as well as differences.

4.1 Enumerating impacts

The scenarios express a multitude of different impacts. Table 2 lists the various impact themes and the respective specific impacts that we observed, together with illustrative examples. In the following, we will elaborate on each of the impact themes. We also highlight differences and similarities between the stakeholder groups (i.e. news consumers, content creators, and technology developers) in how they raise awareness of the specific impacts.

In addition, we identified two dimensions that help capture variance across the impact themes and are used to elaborate the descriptions below. First, we observed the impact scope of each specific impact, defined at three levels: impacts can occur at an individual (e.g., main characters), an organizational (e.g., newsrooms), or a societal (e.g., political system) level. Where applicable, we discuss which impact scopes are (mainly) addressed within each theme.

Second, we identified agency as another dimension of impacts. We distinguish between causal agency, intentional agency, and triadic agency as proposed by Johnson and Verdicchio [ 38 ]. In their terminology, causal agency can be ascribed to AI systems as they can cause impacts, i.e. shape the world. However, AI systems, as technological artifacts, cannot have intentions; intentional agency can only be ascribed to humans. Intentional agency includes causal agency, but adds a layer that refers to a human mental state, and intentional behavior can likewise cause impacts. In terms of ethical responsibility, intentionality can be traced back to various mental states: stakeholders can act out of malicious intent, can be negligent in failing to anticipate negative impacts, or can act in well-intentioned ways and still cause negative impacts. Intentional agency of human stakeholders can also be combined with the causal agency of AI systems. This interplay is called triadic agency, which Johnson and Verdicchio describe as follows: “Humans and artifacts work together, with humans contributing both intentionality and causal efficacy and artifacts supplying additional causal efficacy.” [ 38 ] In the following, where applicable to the impact themes, we outline how agency plays a role in causing the impacts identified. We note that the agency of characters can also be well-intentioned and relate to the mitigation strategies discussed in Sect.  4.2 .
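The two coding dimensions can be made concrete as a small data structure. This is a minimal sketch for illustration, not the authors' actual codebook; the coded impact example at the bottom is hypothetical.

```python
# Minimal sketch of the two coding dimensions (scope and agency) as enums.
from enum import Enum

class ImpactScope(Enum):
    INDIVIDUAL = "individual"      # e.g., main characters
    ORGANIZATION = "organization"  # e.g., newsrooms
    SOCIETAL = "societal"          # e.g., the political system

class Agency(Enum):
    CAUSAL = "causal"              # AI systems: cause impacts, but lack intentions
    INTENTIONAL = "intentional"    # humans: malice, negligence, or good intent
    TRIADIC = "triadic"            # humans and AI artifacts acting together

# Hypothetical coded impact combining theme, specific impact, and both dimensions
coded_impact = {
    "theme": "well-being",
    "specific_impact": "reputational damage",
    "scope": ImpactScope.INDIVIDUAL,
    "agency": Agency.TRIADIC,
}
```

Treating scope and agency as orthogonal attributes of each coded impact mirrors how the classification scheme is applied across themes in the analysis.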

4.1.1 Well-being

The scenarios described four different forms of impact on individual well-being. The mental impacts mentioned ranged from negative emotions to severe mental illnesses (e.g., depression) caused by generative AI, for example through online harassment based on fake news/images. Strongly connected to mental harm is the impact theme of addiction (e.g., to social media apps powered by generative AI). Reputational damage, for example caused by a fake news campaign against a politician/journalist or a (news) organization, was also outlined in some scenarios and was oftentimes connected to mental harm. Lastly, some scenarios even pointed out physical harm (e.g., suicide) rooted in the mental harms caused by generative AI. All in all, well-being impacts were prevalent among the news consumer scenarios, whereas content creators and technology developers addressed them only marginally.

Well-being impacts mostly occur on an individual level as they are related to personal consequences for characters. However, some impacts also address the organizational level, for instance, if the reputation of a news company is damaged as a result of generative AI use.

4.1.2 Labor

Labor impacts are addressed frequently in the scenarios of all stakeholder groups. We found five sub-codes within the labor impact theme, namely competition , job loss , unemployment , loss of revenue , and changing job roles .

Some scenarios mention stronger competition due to the introduction of generative AI in the newsroom. Generative AI is expected to be a competitor as it replaces more and more tasks previously performed by humans. Competition, then, is highly interrelated with the notion of changing job roles: some scenario writers elaborate on the economic pressure on content creators to learn new skills in order to adjust to changes in the media environment. Strong competition is also connected to the more severe impact of job loss, one of the most prevalent codes among all scenarios in the sample. Scenarios frequently describe the fate of individual content creators who are pushed out of their jobs because of the use of generative AI. These individual job losses can also scale up to the organizational or societal level, resulting in loss of revenue for (traditional) news organizations or potential unemployment. While job loss is connected to the individual level, the code unemployment addresses job losses at a macro scale, in organizations (e.g., reduction of jobs in the newsroom and/or revenue loss) or societally in terms of the labor market (e.g., mass unemployment). Many scenario writers were worried about a loss of jobs in the media sector and described a future where fewer jobs are available as many are replaced by generative AI.

The labor impacts are highly interrelated with the loss of media quality. An often-outlined connection is the economic pressure on journalists and/or journalistic organizations that leads to frequent, and oftentimes unchecked, use of generative AI. This economic pressure, coupled with the loss of human intervention in generating news, resulted in the various journalistic quality issues the scenarios described, such as the loss of human touch, the influence of tech corporations, or the lack of credibility. Additionally, the economic impact on individuals (e.g., job loss) is also frequently connected to the well-being theme, as characters suffer from the increased pressure or from financial shortages.

4.1.3 Autonomy

Another dimension, predominantly found at the individual level but infrequently mentioned, concerned the relationship between humans and machines and the resulting impact on human autonomy, or independence to act. Specific instances of this sub-code were present in only a few scenarios, with news consumers mentioning loss of orientation and loss of human autonomy as potential negative impacts, while technology developers highlighted loss of control over AI. Impacts regarding the relation of humans and machines could not be found in the content creator scenarios.

4.1.4 Legal rights

Some scenarios described negative impacts in the legal domain. This theme contains impacts related to copyright issues, legal actions (e.g., lawsuits against characters), freedom of expression, and the lack of regulation. The sub-codes address either the individual or the societal dimension. On the individual level, for example, content creators face the danger of their material being used for generative AI without their consent, which leads to copyright issues. Other scenarios described legal actions resulting from the misuse of generative AI by some characters. On the societal level, a few scenarios outlined a lack of regulation that leads to the uncontrolled use of generative AI and to associated impacts like the spread of fake news. Altogether, legal rights impacts were present in only a few scenarios, but were mentioned in all stakeholder groups.

4.1.5 Media quality

One of the most mentioned impacts concerned the (loss of) media quality. Here, a plethora of different sub-codes emerged, namely accuracy issues, loss of human touch, sensationalism, credibility/authenticity, lack of diversity/bias, clickbait, journalistic integrity, reframing of narratives, attribution, distinction between journalism and ads, lack of fact-checking, explainability, superficiality, over-personalization, ethics, and accountability. Not all sub-codes were present in all stakeholder groups (see Table 2), but the concern that media quality is endangered through the use of generative AI was present in nearly every scenario. Especially prevalent are concerns related to the inaccuracy of generative AI, which can lead to the dissemination of misinformation (connecting also to the impact themes of political impact and/or social cohesion). Another frequently mentioned topic was the prioritization of easy and clickable news pushed with the use of AI; as such, sensationalism was a common concern amongst all stakeholder groups. Furthermore, biased news was thematized in many scenarios, relating also to impacts on a societal dimension such as discrimination.

We also found that media quality issues emerge for different reasons. Some scenario writers describe media quality issues as a consequence of the negligence of actors, i.e., characters failing to anticipate the negative impacts of generative AI. The causal agency of the AI system can then take various forms, for instance, through a lack of accuracy or sensationalism reinforced by the system's training. Other scenarios see media quality issues as a result of economic competition (e.g., “Emily's struggle began with news organizations, seeking efficiency and cost reduction increasingly relied on AI-generated news articles.” [CC S13]) or as a consequence of intentional agency, specifically malicious use, by characters. Some characters are aware of potential quality issues but decide to take the risk nevertheless, due to fear of job loss, economic competition, or the personal ambition to work more efficiently. In this regard, some scenario writers also sketch moral dilemmas that characters face as they weigh the pros and cons of using generative AI (e.g., “In fact, its [the generative AI; added by the authors] reliability could be very low, which makes Alejandro face a moral dilemma: should he make use of this technology to write his article or should he do his own research even if it takes more time and effort?” [CC S33]).

4.1.6 Security

Security impacts were seldom addressed in the scenarios, and when they were mentioned, it was by the technology developers. We identified hacking and cybersecurity as sub-codes of this theme. Here, scenario writers outline potential security gaps in media organizations that could be exploited by malicious actors. For instance, some scenarios describe hacker attacks on news organizations that, consequently, result in other impacts like the spread of misinformation.

4.1.7 Trustworthiness

On the level of the media system, the central consequence is a discussion about the trustworthiness of the media environment. Most of the scenarios outline that the trustworthiness of the media is reduced through the use of generative AI, expressed through the difficulty of discerning between factual and fictional news. Other consequences are that people turn away from the news (media fatigue), consume low-quality news, or show a tendency to distrust news altogether. Trustworthiness issues are most prominent in the news consumer scenarios, whereas the content creator scenarios focus more on media quality aspects. Furthermore, trustworthiness issues are also highly connected to political impacts, especially to the spread of misinformation.

Trustworthiness issues occur either on an individual or on a societal level. Some scenarios describe how characters (e.g., news consumers) lose their trust in media content due to the spread of disinformation, or elaborate on the struggle of characters to discern between generated and human-written news articles. Some scenario writers also outline trustworthiness issues at a societal level and speak of an untrustworthy media environment, which is then also connected to political issues like manipulation. Impacts in the trustworthiness theme can, thus, be a result of intentional behavior by specific actors (e.g., political parties), or a consequence of the widespread use or malfunctioning of generative AI (causal agency).

4.1.8 Political

One of the most mentioned impact themes relates to the potential for political impact. The spread of fake news and misinformation was a central topic of the scenarios across all stakeholder groups. Fake news, in the form of deepfakes or factually false news, is perceived as a prevalent harm and is oftentimes embedded in a specific setting, for example, in the context of elections or news reporting about crises and wars. The spread of fake news is frequently connected to the malicious intentions of specific actors who utilize generative AI for their purposes (e.g., “Pedro is cunning and embodies the bitterness of someone who doesn't look at the means to achieve his ends.” [TD S25]). While fake news was mostly connected to political issues, we note that some fake news scenarios also pick up on other topics like personal harassment, which is then connected to well-being. The dimension of fake news is also connected to purposeful manipulation (on a political level). In these scenarios, generative AI, mostly in the form of generating fake news, is frequently used to manipulate citizens’ behavior according to the will of a (mostly political) actor. Frequently, scenario writers tie the political misuse of generative AI to right-wing and/or populist political parties and/or campaigns. Further, gaining an opinion monopoly was identified as another negative consequence of the accelerated use of generative AI.

Again, political impacts can be found on both the individual and societal dimension. On the individual dimension, this describes the susceptibility of news consumers to fake news or manipulation. Some scenarios, for example, point out how news consumers are misled by disinformation campaigns. Interestingly, a few scenarios also elaborate on specific population groups (e.g., old/illiterate people) that are highly susceptible to political misuse. Scenario writers also addressed the societal dimension by scaling up individual impacts, speculating, for instance, that political impacts can have consequences for election outcomes.

4.1.9 Social cohesion

Another central theme in the scenarios concerns social cohesion. The widespread use of generative AI, according to the fears of many scenario writers, will lead to stronger polarization among the public. This polarization is also caused by the spread of fake news and misinformation on the political level. Polarization itself can also lead to real-world conflicts between societal groups. Related societal consequences outlined in the scenarios are a deepening of the social divide, mistrust among societal groups, discrimination against minority groups, as well as stronger dissatisfaction within the population. These concerns are present in the scenarios of all stakeholder groups and are frequently addressed by scenario writers.

The social cohesion theme is mostly addressed at the societal dimension. Scenario writers, for example, describe high-level impacts like fractured societies, polarization, hatred among communities, or international tensions. Interestingly, these impacts are mostly described as the result of other impacts, such as political and labor impacts, whose elements combine and, in the end, lead to social cohesion impacts.

Again, these impacts can be a consequence of characters’ failure to anticipate negative impacts (negligence); for instance, the introduction of novel generative AI technology that promises to deliver news more efficiently and in a more personalized way leads to further polarization. On the other hand, malicious actors can use generative AI to pursue their goals and actively strive for societal tensions. Again, it is the interplay between the intentions of humans and the affordances of generative AI that causes the negative impact.

4.1.10 Education

Some scenarios also refer to educational impacts; while not frequently mentioned in total, they were articulated by every stakeholder group. Scenario writers speak of a loss of critical engagement as well as a lack of general literacy to deal with the negative impacts of generative AI. Again, this dimension is related to several other impacts. For example, a lack of literacy leads to people's inability to discern factual from fake news, or to their vulnerability to bad journalistic output. A general loss of literacy within the population can also be an outcome of the use of generative AI: the concern is that the quality of journalistic content could decline on a large scale, leaving news consumers without the possibility to engage with high-quality journalistic content.

4.2 Mitigation strategies

Many respondents, while not explicitly asked to do so, mentioned mitigation strategies for the negative impacts of generative AI on the news environment. These mitigation strategies are often linked to well-intentioned characters. A frequent theme among the scenarios, for example, was the brave (investigative) journalist who fought against the spread of misinformation (e.g., “Pablo is deeply motivated by the pursuit of truth and the belief of journalism is a matter of democracy. They strive daily to serve the public interest and hold those in power [accountable].” [TD S19]). Oftentimes, characters also joined forces to mitigate the negative impacts of generative AI, for example, through collaboration. Another theme connected to well-intended mitigation was the invention of technological solutions to strengthen high-quality journalism or to prevent harm. Some characters are also internally motivated by normative values: they want to be on the good side, stand up for their beliefs, or protect their families and friends. In addition, some characters act out of democratic or public interest, for example, wanting to make news more accessible.

For the analysis in this section, we also included the answers to the open questions following the scenarios (see Sect. 3.1) to develop the codes for mitigation strategies and transparency obligations.

4.2.1 Enumerating mitigation strategies

We identified four main codes for the mitigation strategies outlined in the scenarios: technological approaches, collective action to mitigate the negative impact of the use of generative AI, legal actions, and strategies that aim to restore journalistic quality.

The technological approaches suggested to mitigate (technological) shortcomings of generative AI encompassed a wide variety of techniques like updates and patches, oversight programs, automated fact-checking tools, fine-tuning of inaccurate models (e.g., self-correcting models), banning of hateful content and prompts, fake-news scanners, rigorous testing, and identity verification. Technological approaches to mitigate negative outcomes were by far most prevalent in the scenarios written by the technology developers. Some scenarios outlined in detail how the proposed tools would help in reducing the negative impacts posed by generative AI.

Collective action strategies were mentioned in all stakeholder groups and comprise the sub-codes of protest/social movement, public attention, public deliberation, and education. The most common theme among the scenarios was some form of social movement or protest that emerged as a reaction to malfunctioning generative AI or to negative consequences of its use (e.g., the spread of fake news). Protests can take the form of raising public attention and involving citizens to exert pressure on technology companies and/or journalistic organizations. But not all scenarios involved public protest; some just referenced a rise in public attention or public deliberation as more subtle forms of public inclusion. Public deliberation, for example, was outlined as a mitigation strategy highlighting people’s discussions about how generative AI should (not) be used. Gaining public attention, as a related code, refers to the attempts of actors to provide information about the negative impacts of generative AI in the public sphere; however, not all of these attempts, as outlined in the scenarios, succeed. Public education was further discussed as a strategy to enable citizens and/or stakeholders to critically assess the changing media environment. In this context, some scenario writers also ascribe responsibility to news consumers: according to some respondents, the audience also has the obligation to act and think critically and not blindly trust the news.

The legal actions outlined in the scenarios can be differentiated into the sub-codes of regulation and lawsuits. This mitigation strategy was mentioned with similar frequency in all three stakeholder groups. In the scenarios, regulation was mostly a result of the efforts of well-intentioned characters to create the conditions for good-quality journalism. Connected with societal action, some scenarios described regulators stepping in and enforcing policies that, for example, foster transparency and accountability standards, user welfare, or stricter oversight of organizations. Other scenarios described lawsuits as a measure to counteract the consequences of some of the impacts outlined; for example, lawsuits can be targeted against specific people or groups that used generative AI with malicious intent.

Another frequently mentioned mitigation strategy concerns directly restoring journalistic quality. In this category, we identified sub-codes for responsible AI use, re-focusing on traditional journalism, fact-checking, accountability measures, investing in diversity, human oversight, and collaborations. Responsible AI use describes ways to ensure a thoughtful use of AI, for example, by following ethical guidelines. Several related themes occur in this regard, like the need for journalists to continuously fact-check their results, ensuring diversity in the workforce, or enforcing human oversight in the production of news. Collaboration between technology developers and people working in journalism is oftentimes proposed in the scenarios as a way to restore journalistic quality. Interestingly, collaboration is mentioned foremost in the scenarios of the technology developers, who think of it as a fruitful way to mitigate negative AI impacts. In addition, some scenarios go one step further and propose a return to traditional journalism without the use of generative AI to avoid the negative impacts.

We further analyzed the answers to the open question that we asked after respondents submitted their scenario (What could be done to mitigate the risks outlined in the scenario?). We detected substantial similarity between the codes that had already emerged in the scenarios and the open answers (see the superscripts in Table 3 that distinguish where codes were mentioned), but also found that some themes were more accentuated in the open answers. The most prevalent mitigation strategies mentioned in the open questions were regulation, the development of ethical guidelines connected to a responsible use of generative AI, the need to strengthen public education, and refraining from the use of generative AI altogether. Additionally, some new codes emerged: we found more sub-codes in the areas of legal actions and restoring journalistic quality. Interestingly, new suggestions for governance interventions were also made that are not, to our knowledge, part of the present regulatory discourse around the European AI Act, such as restricting access to the technology to vetted personnel only, not using generative AI for particular forms of journalism (e.g., news coverage about political topics and/or war), but also the creation of “an organization with representatives of all the players involved, from companies and civil society, which would define and create a standard for conduct and behavior for AI technology to be developed, trying the best they can to eliminate biases and discrimination” (TD S43).

For legal actions, we identified various calls for independent oversight. Scenarios suggested, for example, an independent oversight organization and/or NGOs. This regulatory body should see to it that generative AI does not lead to detrimental consequences like the spread of fake news. Additionally, some respondents go one step further and plead for restricting the use of generative AI, either by (a) controlling access, i.e., making only some actors eligible to use the technology, or (b) completely banning its use in specific application areas like the news environment. On a structural level, a few respondents propose shifting the power balance so that profit-oriented corporations are limited in their ability to push their agenda, i.e., are constrained by regulatory boundaries. Lastly, copyright protection was mentioned by some respondents; this was especially prevalent in the content creator group, where the question of ownership of training data and/or output is most vital to their daily work.

On the level of restoring journalistic quality, the sub-codes transparency mechanisms, training, and creation of new (specialized) jobs emerged. Transparency was often mentioned as a strategy to ensure the responsible use of generative AI; this code was also often connected to independent oversight and the need for human oversight. Transparency could, for example, be achieved in practice by labeling AI-generated content (e.g., with watermarks) or by clearly communicating the sources and data used for training generative AI. In addition, training of people working in the media environment and the creation of new (fact-checking) jobs were described by a few participants as suitable measures to enhance journalistic integrity.

4.2.2 Evaluation of transparency obligations

While the open question invited participants to deliberate relatively freely on potential mitigation strategies, we were also interested in respondents' opinions on a specific policy proposal, namely the transparency obligations proposed in Article 52 of the draft EU AI Act [26]. According to the proposed Article 52, consumers have a right to know whether they are interacting with an AI as well as a right to know if content has been artificially generated or manipulated.

We evaluated the open answers regarding the expected effectiveness of the transparency obligations in mitigating the risks outlined in the respondents' scenarios. We detected seven categories: high effectiveness; partial/conditional effectiveness; change of the scenario, but similar outcome; unsureness about the effectiveness; small effectiveness, but overall not really important; no effectiveness at all; and not applicable for the proposed scenario.

Overall, in each group the major share of respondents welcomed transparency obligations and expected some level of effectiveness. In fact, high effectiveness was the most frequent answer in every stakeholder group, with 13 (34%) answers in the content creator group, 17 (44%) in the technology developer group, and 13 (39%) in the news consumer group. The high effectiveness of the transparency obligations was mostly linked to the prevention of the spread of fake news, as such an obligation would help to clearly label automatically generated content and offer people the opportunity to evaluate content more thoroughly. Strengthening news consumers' awareness was thus frequently mentioned in combination with high effectiveness. It is also connected to enhancing journalistic quality in a way that makes it more trustworthy and helps in distinguishing real from fake content. Some respondents also outline that the scenario they described would not have taken place in the first place. For example, ensuring transparency would effectively combat the spread of disinformation, as content would be clearly identified as artificially generated and, thus, news consumers would not be misled easily.

Additionally, seven respondents (18%) of the content creator group, seven respondents (18%) of the technology developer group, and nine respondents (27%) of the news consumer group rated transparency obligations as partially effective. The common theme of the answers in this category is that transparency is a step in the right direction, but would not solve all harms caused by generative AI. For example, some respondents believe that transparency labeling is not very effective as news consumers get used to it; some of the answers compare it with warnings on cigarette packs, which are judged as useful for only part of the population. Others also indicate that this would not keep content creators away from using generative AI, as it is still much cheaper and less time-intensive than writing their own stories. Furthermore, some respondents, while generally welcoming transparency obligations, question their practical enforcement, doubting that such an obligation can be usefully implemented and enforced. Other respondents doubt that news consumers perceive labeling as important, as they believe there is a tendency in the public to trust the output of algorithms anyway.

However, we also found a sizable portion of respondents who indicated that transparency obligations would not be effective in counteracting the harms of generative AI. This view is most prevalent in the content creator group with eleven mentions (29%), while only seven technology developers (18%) and five news consumers (15%) expressed it. The transparency obligation is judged as ineffective for several reasons. First, some respondents do not think that news consumers actually notice the labeling of AI-generated content, either because they skip it or because they pay no attention to it; this relates to the argument about getting used to warning labels that was also mentioned in the partially effective answers. Second, respondents doubt that transparency obligations can be practically enforced. Third, some respondents outline that news consumers would simply not care about content created by generative AI at all; they portray news consumers as mostly passive and without the ability to critically assess news (anymore). Fourth, some respondents remark that malicious actors simply would not care about transparency obligations and will continue to exploit generative AI for spreading fake news or misinformation, which is also connected to the lack of enforcement that some respondents mention.
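As a side note on the counts reported above: the stakeholder group sizes are not stated in this excerpt, but they can be inferred from the paired counts and percentages (roughly 38 content creators, 39 technology developers, and 33 news consumers). A short, hypothetical Python sketch, assuming those inferred group sizes, checks that every reported percentage in Sect. 4.2.2 is internally consistent:

```python
# Hypothetical consistency check for the percentages reported in Sect. 4.2.2.
# The group sizes below are NOT stated in the text; they are inferred from the
# paired counts and percentages (e.g., 13 answers being 34% implies n ~ 38).
group_n = {"content creators": 38, "technology developers": 39, "news consumers": 33}

# (count, reported %) pairs per group: high / partial / no effectiveness.
reported = {
    "content creators":      [(13, 34), (7, 18), (11, 29)],
    "technology developers": [(17, 44), (7, 18), (7, 18)],
    "news consumers":        [(13, 39), (9, 27), (5, 15)],
}

for group, pairs in reported.items():
    n = group_n[group]
    for count, pct in pairs:
        # Each reported percentage should equal the rounded count/n ratio.
        assert round(100 * count / n) == pct, (group, count, pct)

print("all reported percentages match the inferred group sizes")
```

Every pair passes under these assumed n values, so the three effectiveness categories are reported on a consistent per-group basis.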

Besides the former answers, some respondents were not sure about the effectiveness of the transparency obligation or reported that it would not matter for their scenario, for example, because it was already transparent that generative AI was used. In addition, some respondents believe that the transparency requirement would change the scenario but nevertheless lead to the same outcome, for example, because malicious actors act the way they do regardless. Finally, a few respondents ascribe only a small effect to the transparency obligation and doubt that it will have a positive long-term effect.

5 Discussion

In this paper, we developed and refined a method to anticipate impacts as well as mitigation strategies for the use of generative AI in the news environment. By inviting different groups of stakeholders to anticipate future impacts of a particular technology, the impact assessment benefits from the insights, expertise, and situated experiences of different groups in society. In so doing, our more participatory method provides an alternative to predominant methods of impact assessment that are expert-driven or grounded in literature reviews of established impacts. In this context, scenario writing is a tool to trigger engagement and reflection as well as the sharing of participants' own perspectives. Besides identifying new themes of impacts and differences in the kinds of impacts and mitigation strategies between the different stakeholder groups, the method also produces information about the causes of negative impacts in the form of character agencies, i.e., elaborating on stakeholders' intentions that may lead to a specific impact. By helping to map the space of action and agency in scenarios, the method further sets the stage for future ethics work such as downstream responsibility analysis [28] or the discussion of varying mitigation strategies per stakeholder group. In the following, we discuss how scenario writing can serve as an impact assessment tool as well as a tool in current governance approaches or policy development. In addition, we discuss the limitations and propose further research that could build on our study.

5.1 Scenario-writing as an impact assessment tool

Our study provides rich insights into individual stakeholders’ anticipations of the negative impacts of generative AI on the news environment. Generative AI already has far-reaching impacts on individuals and society, which will increase even further in the future. Identifying potential negative impacts, and with that informing anticipatory governance, can help in developing strategies to prevent this harm, as it is easier and more affordable to implement changes earlier in the development and implementation process of emerging technologies. What is more, our method taps into the perceptions and anticipations of the public and thus engages a unique perspective that is currently underrepresented in the AI impact assessment literature, where the focus is predominantly on expert- or literature-led approaches [68]. As such, our findings provide valuable insights into a broader societal perspective on generative AI [34]. Especially due to the different roles of the respondents in regard to their interaction with the news environment, whether as news consumer, technology developer, or content creator, the findings revealed a variety of perspectives and identified impacts as well as mitigation strategies, indicating that neither risks nor mitigation strategies are one-size-fits-all. As aimed for in the AI impact assessment literature [48, 52], the findings also illustrate that the use of generative AI is clearly not only, and maybe not even predominantly, about individual harms, but also about societal harms, which are far less subject to regulation [66]. Furthermore, scenario writing enables us to tap into the socio-technical interplay [52] and identify impacts that would otherwise be opaque [3]. Thus, an important contribution of our approach is that we expand from a predominantly technical focus on the technology itself to anticipations of how this technology could actually be used in real-world settings by various stakeholder groups.

Compared to other existing AI impact assessment frameworks, we were able to identify some similarities (e.g., references to (mis)information harms, the role of different forms of agency including intentional misuse, security issues, etc.), but also some new aspects that had not been identified beforehand. For example, current AI impact assessments of LLMs [71] and text-to-image technology [8] have a strong focus on societal and consumer impacts, but by including the perspectives of content creators and technology developers, we were also able to identify negative impacts concerning the economic situation and the corresponding moral trade-offs of people actively working in this sector. In addition, in contrast to the related work on impact assessment, the impacts apparent in the scenarios were far more contextually meaningful, getting at quite specific ways that generative AI could impact media, from personal well-being implications and the need to think about education, to trustworthiness, political implications, and broader concerns of social cohesion. Rather than examining only what impacts technology causes on people, the method allows for a fuller exploration of the sociotechnical interactions where impacts can arise (e.g., labor considerations, social cohesion). We suggest that the method can be tuned both through the instructions and scope of the scenario writing and through the specific sample of participants, in order to obtain far more fine-grained and contextually meaningful impacts than is apparent from methods relying only on experts.

While existing impact assessment methods tend to focus on mapping potential risks, our method allowed us to illuminate the rationale and motivations of actors using and responding to generative AI, again highlighting the socio-technical perspective that this method brings to the table [52]. This offers the potential to also inform an eventual analysis of agency or responsibility, whether ethical or for informing formal regulatory approaches. As shown in the findings, characters in the media environment exhibited different types of agency. In the triadic agency sense, generative AI is only the tool (causal agency) that contributes to a negative impact; impacts always trace back to the intentions of characters. When discussing ethical responsibility, then, the different forms of intention make a difference. For example, we found that characters in the scenarios acted out of malice (e.g., purposefully spreading fake news), while other impacts emerged mainly due to negligence (e.g., characters failed to anticipate technological malfunctioning) or from the complex interplay between characters and generative AI (e.g., economic pressure on organizations/journalists leads to the use of generative AI that causes negative impacts). Some scenarios also made apparent the moral struggles that the use of generative AI can bring with it. The scenario-writing method therefore offers specific examples that can help fuel an (ethical) discussion about agency and responsibility.

The scenarios offer vivid examples of unique perspectives that are currently missing in high-level description and categorization schemes of AI impact assessments. These unique perceptions are also based on the concrete role that individuals take in the news environment. This can be traced back to the different roles these groups take in regard to generative AI. For news consumers, impacts regarding well-being are more prevalent as they imagine their characters in a user-setting, in which they describe how individuals interact with the emerging technology. Likewise, news consumers are concerned about the trustworthiness of the media content they consume. Technology developers bring their unique perspective when it comes to the articulation of safety issues. This is a rather technical perspective, which relates to their profession on how to build generative AI systems that are trustworthy and safe to use. This impact category is therefore also role-specific. Content creators also frequently highlight impacts that are related to their profession. We showed that content creators were overall concerned about generative AI’s impact on the media quality, highlighting diverse specific issues that could be endangered by the use of generative AI. As professionals, their scenarios describe unique and diverse possible impacts that could not be found (in such richness) within the scenarios of the other stakeholder groups. As this diverging presence of codes in each stakeholder group shows, news consumers, technology developers, and content creators raise awareness on different impacts, thus highlighting the value of sampling for cognitive diversity and a range of expertise and thus supporting the participatory foresight approach utilized in this study [ 12 , 54 ]. At the same time, we also acknowledge that some impact categories were equally mentioned in all stakeholder groups. 
These impact categories can be seen as issues of common concern that respondents are aware of regardless of their role in relation to the use of generative AI in the news environment. These impact themes are labor, politics, legal rights, education, and social cohesion. This can be explained by the specific topics these categories entail. As, for instance, disinformation and the potential threat to jobs are highly discussed in the public debate, it is not surprising that all stakeholder groups thematize these impacts. In addition, impact themes such as legal rights, education, and social cohesion are not specific to stakeholders’ roles and are thus mentioned equally across the groups.

While we see benefits to this method, we also suggest there is still value in expert-driven impact assessment tools, as some impacts identified by experts were not found in the scenarios in our sample. For example, the environmental costs of generative AI [ 9 , 18 , 35 , 67 , 71 ] are not mentioned in any of the scenarios, which corresponds to studies highlighting the lack of awareness of the environmental costs of AI among citizens and in the public debate [ 1 , 42 , 44 ]. This also exposes a limitation of the method insofar as our task description specifically oriented respondents’ attention towards impacts on the media ecosystem, and in this case no respondent saw the connection to environmental concerns. Our approach can thus be seen as complementary to existing expert-led approaches, and its results are subject to how the task is framed and presented to respondents.

5.2 Scenario-writing for impact assessments and policy development

Our study also demonstrated that scenario-writing can help in identifying not only individual and societal impacts but also mitigation strategies—a crucial component of anticipatory governance research [ 34 , 63 ] and AI impact assessment [ 3 , 52 ]. The fact that our respondents discussed mitigation strategies even without being actively asked to do so demonstrates how the scenario-writing exercise resulted in active engagement and stimulated critical thinking in the respondents. This also suggests the potential of using scenario-writing not only for negative impact identification, but also to help develop mitigation strategies that are grounded in the experience of different affected stakeholder groups and that can in turn inform and inspire policy options. Any policy intervention is only as good as its enforceability, and information on which mitigation strategies stakeholders themselves consider effective can inform policy makers about which strategies are more likely to be supported ‘on the ground’. This could serve as a valuable addition to the usually expert-driven and top-down methods for developing mitigation strategies. We identified technological approaches, collective action, legal action, and restoring journalistic quality as overarching mitigation strategies. Again, we found some differences related to the respondents’ role in regard to generative AI. Technology developers brought in their professional experience and highlighted technological fixes for malfunctioning generative AI systems as well as a strengthening of collaboration between journalists and technological experts. Content creators, on the other hand, emphasized copyright protection as an important mitigation strategy. As these findings show, personal and professional experiences also informed the proposal of mitigation strategies. Thus, sampling for cognitive diversity provided useful insights not only for the identification of impacts, but also for how these impacts could be mitigated.

The findings in our study also revealed novel ideas to mitigate harms that are currently not discussed prominently in existing policy debates around the AI Act. Examples of such policy options include restricting access to generative AI to vetted personnel and creating an oversight organization for generative AI use in the news environment consisting of representatives of different stakeholder groups. The responses also made clear that there is no single mitigation strategy; rather, mitigating the potential negative impacts of AI requires a combination of different strategies, reflecting the societal complexity in which generative AI functions. Our approach and findings thus contribute to the identification of “reasonably foreseeable risks”, as called for in the draft EU AI Act. Policy-makers have already deployed scenario-writing as a method to anticipate the impacts of new technologies on society and/or specific domains like journalism [ 25 , 39 ]. We add an academic perspective to the current approaches and utilize diverse sampling to provide even more information for political decision-makers. The scenarios and the mitigation strategies can serve as a starting point for impact assessments, and for engaging actively with policy-makers about possible mitigation measures. The added contribution of this particular paper is testing scenario-writing as a form of bottom-up impact mapping, and as the basis for future work on a critical analysis of existing regulatory approaches. The work is also useful in identifying the breadth of possible mitigation strategies and how these may differ between stakeholder groups. Future work could follow up with (selected) scenarios and/or mitigation strategies and discuss to what extent current governance approaches already address the concerns and proposed mitigation strategies, or where doing so could be a viable route for future policy development.
For instance, our findings hint at policy areas that are important to many respondents. Particularly notable is the strong emphasis on economic consequences for affected stakeholders: an issue that, while recently addressed by some political decision-makers and policy white papers (e.g., [ 7 ]), is still less prominent in current policy debates and in scholarly research on generative AI ethics than other issues (e.g., fairness, safety, or toxicity) [ 36 ]. Consequently, this seems to be a crucial dimension of ethical impacts that needs to be addressed in future policy discussions.

It is interesting that individuals in the scenarios focus on collective social action as a mitigation strategy, whereas regulatory approaches such as the AI Act focus much more on individual rights and protected interests, and far less on how to enable collective action or the involvement of civil society. Insights like these raise the question of how AI governance approaches could support collective action as a counterweight to the power and potential of AI in society. Utilizing scenario-writing to tap into the lived realities of affected individuals as a means of revisiting existing policy debates might thus be a useful practical addition to inform those debates.

The transparency questions yielded further insights into how a specific policy action could help (or not) in mitigating impacts caused by AI. While there is a general tendency to approve of and welcome the transparency obligation, content creators are somewhat more skeptical about its effectiveness than respondents of the other groups. This skepticism is attributed to a lack of enforcement mechanisms, or a general disbelief in the capability of the public to actually pay attention to transparency measures. All in all, the open answers to this question provide additional insights that can help decision makers deliberate about implementing and operationalizing transparency mechanisms. Furthermore, the variety of governance approaches that respondents hinted at when discussing mitigation strategies could, in future work, be subjected to the same kind of targeted evaluation as we conducted here for transparency.

5.3 Limitations and outlook

There are some limitations that have to be acknowledged. First, our sampling strategy aimed to ensure diversity in regard to respondents’ country of residence in the EU and gender distribution. However, our sample ultimately consisted predominantly of people who identify as White and well-educated. As frequently pointed out in the scholarly debate about access and participation, voices from marginalized communities are direly needed, as they bring in unique perspectives and raise awareness of aspects that are often overlooked or neglected by industry, developers, or some scholars [ 18 , 52 , 59 ]. However, public opinion research also reports that it is difficult to reach these groups, as they are particularly uninvolved in the public discourse on AI [ 4 , 43 ]. Future research should develop approaches that particularly aim to include the voices of those communities that are usually not well represented in the public debate as well as in the scholarly debate about AI impacts. This could be achieved through targeted sampling in surveys, with additional attention given to sampling on dimensions such as gender identity and race, or through workshops developed in collaboration with NGOs and interest groups.

Additionally, further research should develop approaches to validate and evaluate the findings of the scenarios. As of now, scenarios are tied to the imagination of the respondents and are not necessarily fully plausible from a technical, legal, or societal viewpoint. For example, a total ban of generative AI tools may conflict with citizens’ fundamental right to freedom of expression or with economic freedoms; banning generative AI altogether is thus not a viable suggestion. Consequently, validation and synthesis are a crucial next step to make scenarios useful for the policy debate about impact mitigation. Besides validation, the scenarios can also be evaluated in terms of different variables of interest. For instance, an insightful further dimension for impact assessment would be to rate the severity of the impacts or the likelihood of their occurrence. This could be achieved through (expert) workshops or Delphi studies that synthesize multiple expertise profiles to assess viability. Another option would be to conduct a quantitative survey among news consumers (or stakeholder groups) and let them evaluate other people’s scenarios, or refined scenarios carefully constructed based on our findings. Such a quantitative validation could address different aspects such as the plausibility of the scenarios, the severity of impacts, and the likelihood of an impact materializing, but also the (individual/societal) desirability of scenarios or their ease of understanding. Further, the policy options that emerged as a result of this study could also be evaluated for their feasibility and effectiveness. This approach was recently used by Dobber et al. for the use case of veracity labeling in political advertising [ 23 ].

Further research can build on our groundwork and dig deeper into the notion of cognitive diversity. Studies of AI narratives and imaginaries lead to the assumption that socio-demographic and AI-related factors influence the emerging scenarios [ 16 , 18 , 40 , 59 ]. Tapping deeper into the underlying factors that influence the creation of the scenarios can further a better understanding of possible risk dynamics, produce more diversity in the results, and better position the voices of particular stakeholder or minority groups. Anecdotal findings from our scenarios indeed point out that these factors matter in scenario-writing: for example, we found that respondents from Poland thematized Russia’s war of aggression against Ukraine, in violation of international law, and in light of this connected AI impacts to the topic, e.g., the rise and spread of disinformation in a political setting. However, our sample size is not suitable for a systematic and thorough comparison between respondents with different socio-demographic characteristics (e.g., country of residence). Exploring those differences is a promising research avenue for future scholarly work.

In a connected world, the anticipation of generative AI’s impact in the news ecosystem is a global challenge. Especially in light of the potential negative impacts of generative AI on elections (like the upcoming US and EU elections in 2024), we need knowledge on how potential impacts can unfold and, even more importantly, how they could be addressed. Thus, there is a need to update and expand our approach to the global scale. Including residents of countries beyond the EU might result in the identification of different impact classifications, since political, cultural, and socio-economic factors influence the perspectives and imaginations of generative AI’s impact in these countries, as well as perceptions regarding AI technology [ 41 ]. Anticipating impacts is also relevant in light of emerging regulatory frameworks like the EU AI Act and Biden’s Executive Order on AI, but also for countries that are in the process of developing legal frameworks on AI. Identifying potential detrimental impacts of generative AI such as those presented in this study could inform policy-making processes and shed light on issues that need to be addressed by legislators. These strategies, however, should also be informed by studies with residents of the respective countries. Our study offers an approach that can be applied by researchers as well as by political decision makers tasked with developing governance strategies.

6 Conclusion

In this work we systematically anticipated and mapped the impacts of generative AI as well as corresponding mitigation strategies and a concrete policy proposal currently under discussion, namely the transparency obligations outlined in Article 52 of the draft EU AI Act. In applying scenario-writing, we delved into the cognitively diverse future imaginations of news consumers, technology developers, and content creators. Our findings show that scenario-writing via diverse sampling on a survey platform is a promising approach for anticipating the impact of generative AI and related mitigation strategies. Further, different stakeholder groups raise awareness of a variety of potential impacts based on their own unique perspectives and expertise. Specifically, we identified ten impact themes with fifty specific impacts, among which the negative impacts of generative AI on media quality as well as economic impacts dominated. In regard to mitigation strategies, we identified four main categories with twenty specific strategies, including some that are novel relative to existing governance strategies. In addition, transparency obligations are seen as a viable measure to address some of the potential harms of generative AI.

We based our remuneration on the minimum wage in the Netherlands where the pre-test was conducted.

https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20220824-1 .

We approved three additional scenarios from respondents that, due to a technical error, were not counted as completed by Prolific. We paid the respective respondents and added the scenarios to the corpus, thus resulting in a sample of 156 scenarios.

Low income group: Bulgaria, Croatia, Czech Republic, Greece, Hungary, Latvia, Poland, Romania, Slovakia; Medium income group: Cyprus, Estonia, Italy, Lithuania, Malta, Portugal, Slovenia, Spain; High income group: Austria, Belgium, Denmark, Finland, France, Germany, Ireland, Luxembourg, Netherlands, Sweden.

Prolific only offers “sex” as a filter variable for equal distributions among subsamples, using the response to the question: ‘What is your sex, as recorded on legal/official documents?’ Participants answer this question with one of two options: [Male/Female]. https://researcher-help.prolific.com/hc/en-gb/articles/360009221213-How-do-I-balance-my-sample-within-demographics . Although Prolific’s system only operates according to binary sex, in order to ensure representation of non-binary people, we included a more inclusive query for respondents’ gender, including the options “non-binary”, “prefer to self-describe” and “prefer not to answer”. We used this measure to describe our sample (see Table  1 ).

GPTZero is mostly trained on English-language text and is thus less accurate for other languages. However, we assume that, if respondents decided to use LLMs to write their scenarios, the output text would most likely be in English, as (a) copying our task description into a prompt and running it produces English output, and (b) even prepending their own instructions before our task description still produces English output. Thus, participants would need to either specify that they want the LLM output in their native language or translate our task description before inserting it into a prompt. We believe that both ways are too time-intensive, especially for respondents who aim to reduce their workload by using LLMs.

We also applied a statistical approach to determining outliers. However, given the skew in the data (many high outliers above the average time score), applying a statistical cutoff like the mean minus two standard deviations resulted in negative time values and was thus not usable for our study.
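Why such a cutoff fails on skewed data can be illustrated with a small sketch (the completion times below are invented for illustration, not taken from the study's data): a few extreme high outliers inflate the standard deviation so much that the lower bound drops below zero, which is meaningless for a time measurement.

```python
import statistics

# Hypothetical completion times in seconds: most respondents finish
# within a couple of minutes, but a few extreme outliers skew the
# distribution heavily to the right.
times = [60, 70, 80, 90, 100, 2000]

mean = statistics.mean(times)    # 400.0
sd = statistics.pstdev(times)    # ~715.7, inflated by the outlier

# The conventional "mean - 2*SD" lower cutoff goes negative here,
# so it cannot flag suspiciously fast completions.
lower_cutoff = mean - 2 * sd
print(lower_cutoff)  # ~-1031.3, i.e., below zero and unusable
```

Robust alternatives such as median-based cutoffs would avoid this inflation, though the note above only reports why the mean-based approach was abandoned.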

Akyürek, S., Kieslich, K., Dosenovic, P., Marcinkowski, F., Laukötter, E.: Environmental sustainability of artificial intelligence (2022). https://doi.org/10.13140/RG.2.2.33348.09600

Amer, M., Daim, T.U., Jetter, A.: A review of scenario planning. Fut. J. Policy Plan. Fut. Stud. 46 , 23–40 (2013). https://doi.org/10.1016/j.futures.2012.10.003


Amos-Binks, A., Dannenhauer, D., Gilpin, L.H.: The anticipatory paradigm. AI Mag. 44 (2), 133–143 (2023). https://doi.org/10.1002/aaai.12098

Bao, L., Krause, N.M., Calice, M.N., Scheufele, D.A., Wirz, C.D., Brossard, D., Newman, T.P., Xenos, M.A.: Whose AI? How different publics think about AI and its social impacts. Comput Human Behav 130 , 107182 (2022). https://doi.org/10.1016/j.chb.2022.107182

Barnett, J., Diakopoulos, N.: Crowdsourcing impacts: exploring the utility of crowds for anticipating societal impacts of algorithmic decision making. In: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, July 26, 2022, Oxford United Kingdom, pp. 56–67. ACM, Oxford (2022). https://doi.org/10.1145/3514094.3534145

Beckett, C., Yaseen, M.: Generating change. A global survey of what news organisations are doing with artificial intelligence (2023). Retrieved from https://static1.squarespace.com/static/64d60527c01ae7106f2646e9/t/6509b9a39a5ca70df9148eac/1695136164679/Generating+Change+_+The+Journalism+AI+report+_+English.pdf

Berg, J., Graham, M., Havrda, M., Peissner, M., Savage, S., Shadrach, B., Schapachnik, F., Shee, A., Velasco, L., Yoshinaga, K.: Policy brief: generative AI, jobs, and policy response. The Global Partnership on Artificial Intelligence (2023). Retrieved from https://media.licdn.com/dms/document/media/D4E1FAQGPh3WfCMxQWw/feedshare-document-pdf-analyzed/0/1696184236735?e=1697673600&v=beta&t=Wl-xE3w2RWez20YBgRA4je5vdHd5oY5oHRtS-Nyv6ZY

Bird, C., Ungless, E.L., Kasirzadeh, A.: Typology of risks of generative text-to-image models (2023). Retrieved August 10, 2023 from http://arxiv.org/abs/2307.05543

Bommasani, R., Hudson, D.A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M.S., Bohg, J., Bosselut, A., Brunskill, E., Brynjolfsson, E., Buch, S., Card, D., Castellon, R., Chatterji, N., Chen, A., Creel, K., Davis, J.Q., Demszky, D., Donahue, C., Doumbouya, M., Durmus, E., Ermon, S., Etchemendy, J., Ethayarajh, K., Fei-Fei, L., Finn, C., Gale, T., Gillespie, L., Goel, K., Goodman, N., Grossman, S., Guha, N., Hashimoto, T., Henderson, P, Hewitt, J., Ho, D.E., Hong, J., Hsu, K, Huang, J., Icard, T., Jain, S, Jurafsky, D., Kalluri, P., Karamcheti, S., Keeling, G., Khani, F., Khattab, O., Koh, P.W., Krass, M., Krishna, R., Kuditipudi, R., Kumar, A., Ladhak, A., Lee, M., Lee, T., Leskovec, J., Levent, I., Li, X.L., Li, X., Ma, T., Malik, A., Manning, C.D., Mirchandani, S., Mitchell, E., Munyikwa, Z., Nair, S., Narayan, A., Narayanan, D., Newman, B., Nie, A., Niebles, J.C., Nilforoshan, H., Nyarko, J., Ogut, G., Orr, L., Papadimitriou, I., Park, J.S., Piech, C., Portelance, E., Potts, C., Raghunathan, A., Reich, R., Ren, H., Rong, F., Roohani, Y., Ruiz, C., Ryan, J., Ré, C., Sadigh, D., Sagawa, S., Santhanam, K., Shih, A., Srinivasan, K., Tamkin, A., Taori, R., Thomas, A.W., Tramèr, F., Wang, R.E., Wang, W., Wu, B., Wu, J., Wu, Y., Xie, S.M., Yasunaga, M., You, J., Zaharia, M., Zhang, M., Zhang, T., Zhang, X., Zhang, Y., Zheng, L., Zhou, K., Liang, P.: On the opportunities and risks of foundation models (2021). https://doi.org/10.48550/ARXIV.2108.07258

Bonaccorsi, A., Apreda, R., Fantoni, G.: Expert biases in technology foresight. Why they are a problem and how to mitigate them. Technol. Forecast. Soc. Change (2020). https://doi.org/10.1016/j.techfore.2019.119855

Börjeson, L., Höjer, M., Dreborg, K.-H., Ekvall, T., Finnveden, G.: Scenario types and techniques: towards a user’s guide. Futures 38 (7), 723–739 (2006). https://doi.org/10.1016/j.futures.2005.12.002

Brey, P.: Ethics of emerging technology. Ethics Technol. Methods Approach. 2017 , 175–191 (2017)


Brey, P.A.E.: Anticipatory ethics for emerging technologies. NanoEthics 6 (1), 1–13 (2012). https://doi.org/10.1007/s11569-012-0141-7

Buçinca, Z., Pham, C.M., Jakesch, M., Ribeiro, M.T., Olteanu, A., Amershi, S.: AHA! facilitating AI impact assessment by generating examples of harms (2023). Retrieved June 8, 2023 from http://arxiv.org/abs/2306.03280

Burnam-Fink, M.: Creating narrative scenarios: science fiction prototyping at emerge. https://doi.org/10.1016/j.futures.2014.12.005

Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B., Taylor, L.: Portrayals and perceptions of AI and why they matter. Apollo-University of Cambridge Repository (2018). https://doi.org/10.17863/cam.34502

Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., Langosco, L., He, Z., Duan, Y., Carroll, M., Lin, M., Mayhew, A., Collins, K., Molamohammadi, M., Burden, J., Zhao, W., Rismani, S., Voudouris, K., Bhatt, U., Weller, A., Krueger, D., Maharaj, T.: Harms from increasingly agentic algorithmic systems (2023). https://doi.org/10.48550/ARXIV.2302.10329

Crawford, K.: The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, Yale (2021)


De Vries, E., Schoonvelde, M., Schumacher, G.: No longer lost in translation: evidence that google translate works for comparative bag-of-words text applications. Polit. Anal. 26 (4), 417–430 (2018). https://doi.org/10.1017/pan.2018.26

Diakopoulos, N.: Computational news discovery: towards design considerations for editorial orientation algorithms in journalism. Dig. J. 8 (7), 945–967 (2020). https://doi.org/10.1080/21670811.2020.1736946

Diakopoulos, N., Johnson, D.: Anticipating and addressing the ethical implications of deepfakes in the context of elections. New Med. Soc. 23 (7), 2072–2098 (2021). https://doi.org/10.1177/1461444820925811

Diakopoulos, N.: The state of AI in media: from hype to reality. Medium (2023). Retrieved August 21, 2023 from https://generative-ai-newsroom.com/the-state-of-ai-in-media-from-hype-to-reality-37b250541752

Dobber, T., Kruikemeier, S., Votta, F., Helberger, N., Goodman, E.P.: The effect of traffic light veracity labels on perceptions of political advertising source and message credibility on social media. J. Inform. Technol. Polit. (2023). https://doi.org/10.1080/19331681.2023.2224316

European Commission: Proposal for a regulation of the European Parliament and of the Council of laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain union legislative acts (2021)

European Commission. Joint Research Centre: Reference foresight scenarios on the global standing of the EU in 2040 (2023). Publications Office, LU. Retrieved October 18, 2023 from https://doi.org/10.2760/490501

European Parliament: Texts adopted—artificial intelligence act—Wednesday, 14 June 2023. Retrieved August 9, 2023 from https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html

Eurostat: New indicator on annual average salaries in the EU (2022). Retrieved from https://ec.europa.eu/eurostat/web/products-eurostat-news/w/ddn-20221219-3

Fahlquist, J.N.: Responsibility analysis. Ethics Technol. Methods Approach. 2017 , 129–143 (2017)

Fuerth, L.: Operationalizing anticipatory governance. PRism 2 (4), 31–46 (2011)

Gillespie, T.: Content moderation, AI, and the question of scale. Big Data Soc. 7 (2), 205395172094323 (2020). https://doi.org/10.1177/2053951720943234


Glaser, B., Strauss, A.: Discovery of Grounded Theory: Strategies for Qualitative Research. Routledge, London (2017)

Godet, M.: How to be Rigorous with Scenario Planning. Foresight 2 (1), 5–9 (2000). https://doi.org/10.1108/14636680010802438

Gorwa, R., Binns, R., Katzenbach, C.: Algorithmic content moderation: technical and political challenges in the automation of platform governance. Big Data Soc. 7 (1), 205395171989794 (2020). https://doi.org/10.1177/2053951719897945

Guston, D.H.: Understanding ‘anticipatory governance.’ Soc. Stud. Sci. 44 (2), 218–242 (2013). https://doi.org/10.1177/0306312713508669

Hacker, P.: Sustainable AI regulation. SSRN J. (2023). https://doi.org/10.2139/ssrn.4467684

Hagendorff, T.: Mapping the ethics of generative AI: a comprehensive scoping review (2024). Retrieved February 20, 2024 from http://arxiv.org/abs/2402.08323

Hoffmann, M., Frase, H.: Adding structure to AI harm. Center for Security and Emerging Technology (2023). Retrieved July 31, 2023 from https://cset.georgetown.edu/publication/adding-structure-to-ai-harm/

Johnson, D.G., Verdicchio, M.: AI, agency and responsibility: the VW fraud case and beyond. AI Soc. 34 (3), 639–647 (2019). https://doi.org/10.1007/s00146-017-0781-9

Kasem, I., van Waes, M., Wannet, K.: What’s new(s)? scenarios for the future of journalism. Stimuleringsfonds voor de Journalistiek (2015). Retrieved from https://www.journalism2025.com/bundles/svdjui/documents/Scenarios-for-the-future-of-journalism.pdf

Katzenbach, C.: “AI will fix this”—the technical, discursive, and political turn to AI in governing communication. Big Data Soc. 8 (2), 205395172110461 (2021). https://doi.org/10.1177/20539517211046182

Kelley, P.G., Yang, Y., Heldreth, C., Moessner, C., Sedley, A., Kramm, A., Newman, D.T., Woodruff, A.: Exciting, useful, worrying, futuristic: public perception of artificial intelligence in 8 countries. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, July 21, 2021, Virtual Event USA. ACM, Virtual Event USA, 627–637 (2021). https://doi.org/10.1145/3461702.3462605

Kieslich, K., Došenović, P., Marcinkowski, F.: Everything, but hardly any science fiction. Meinungsmonitor Künstliche Intelligenz (2022). Retrieved from https://www.researchgate.net/profile/Kimon-Kieslich/publication/365033703_Everything_but_hardly_any_science_fiction/links/63638442431b1f5300685b2d/Everything-but-hardly-any-science-fiction.pdf

Kieslich, K., Lünich, M., Došenović, P.: Ever heard of ethical AI? Investigating the salience of ethical AI issues among the German population. Int. J. Hum. Comput. Interact. 2023 , 1–14 (2023). https://doi.org/10.1080/10447318.2023.2178612

König, P.D., Wurster, S., Siewert, M.B.: Consumers are willing to pay a price for explainable, but not for green AI. Evidence from a choice-based conjoint analysis. Big Data Soc. 9 (1), 205395172110696 (2022). https://doi.org/10.1177/20539517211069632

Lind, F., Eberl, J.-M., Eisele, O., Heidenreich, T., Galyga, S., Boomgaarden, H.G.: Building the bridge: topic modeling for comparative research. Commun. Methods Meas. 16 (2), 96–114 (2022). https://doi.org/10.1080/19312458.2021.1965973

Lofland, J., Snow, D., Anderson, L., Lofland, L.H.: Analyzing Social Settings: A Guide to Qualitative Observation and Analysis. Waveland Press, London (2022)

Meßmer, A.-K., Degeling, M.: Auditing recommender systems. Putting the DSA into practice with a risk-scenario-based approach. Stiftung Neue Verantwortung (2023). Retrieved from https://www.stiftung-nv.de/sites/default/files/auditing.recommender.systems.pdf

Metcalf, J., Moss, E., Watkins, E.A., Singh, R., Elish, M.C.: Algorithmic impact assessments and accountability: the co-construction of impacts. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021). https://doi.org/10.1145/3442188.3445935

Mirsky, Y., Demontis, A., Kotak, J., Shankar, R., Gelei, D., Yang, L., Zhang, X., Pintor, M., Lee, W., Elovici, Y., Biggio, B.: The threat of offensive AI to organizations. Comput. Secur. 124 , 103006 (2023). https://doi.org/10.1016/j.cose.2022.103006

Mittelstadt, B.D., Stahl, B.C., Fairweather, N.B.: How to shape a better future? Epistemic difficulties for ethical assessment and anticipatory governance of emerging technologies. Ethic. Theory Moral Prac. 18 (5), 1027–1047 (2015). https://doi.org/10.1007/s10677-015-9582-8

Mohamed, S., Png, M.-T., Isaac, W.: Decolonial AI: decolonial theory as sociotechnical foresight in artificial intelligence. Philos. Technol. 33 (4), 659–684 (2020). https://doi.org/10.1007/s13347-020-00405-8

Moss, E., Watkins, E., Singh, R., Elish, M.C., Metcalf, J.: Assembling accountability: algorithmic impact assessment for the public interest. SSRN J (2021). https://doi.org/10.2139/ssrn.3877437

Nanayakkara, P., Diakopoulos, N., Hullman, J.: Anticipatory ethics and the role of uncertainty. Preprint arXiv:2011.13170 (2020)

Nikolova, B.: The rise and promise of participatory foresight. Eur. J. Fut. Res. 2 , 1 (2014). https://doi.org/10.1007/s40309-013-0033-2

Nishal, S., Diakopoulos, N.: Envisioning the applications and implications of generative AI for news media (2023)

Quay, R.: Anticipatory governance: a tool for climate change adaptation. J. Am. Plan. Assoc. 76 (4), 496–511 (2010). https://doi.org/10.1080/01944363.2010.508428

Ramírez, R., Selin, C.: Plausibility and probability in scenario planning. Foresight (Cambridge) 16 (1), 54–74 (2014). https://doi.org/10.1108/FS-08-2012-0061


The funding for this research was provided by UL Research Institutes through the Center for Advancing Safety of Machine Intelligence.

Author information

Authors and affiliations

Institute for Information Law, University of Amsterdam, Amsterdam, The Netherlands

Kimon Kieslich & Natali Helberger

Communication Studies and Computer Science, Northwestern University, Evanston, IL, USA

Nicholas Diakopoulos


Corresponding author

Correspondence to Kimon Kieslich.

Ethics declarations

Conflict of interest.

The authors declare no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1.1 Introduction to generative AI

Generative AI is a technology that can create new content (e.g. text, images, audio, video) based on the content it was trained on.

1.2 Capabilities

Generative AI for text can be used to rewrite, summarize, personalize, translate, or extract data based on input texts. It can also be set up as a chatbot that end-users can interactively communicate with, and can be incorporated into other technologies like search engines.

Generative AI can also create images or videos.

These AI systems can be controlled using text-based prompts which provide task instructions and input data. For instance, you could prompt it with:

“Write three distinct headlines for the following news article: <article text>”

“Summarize the following text: <article text>”

“Translate the following text into English: <article text>”

“Explain <issue> in easy to understand language”

“Create an image/video showing <description of image>”
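The prompt patterns above amount to filling task templates with input text. A minimal sketch in Python (the template wording mirrors the examples above; the dictionary and helper names are ours, not part of any particular API):

```python
# Illustrative prompt templates mirroring the patterns listed above.
# A real deployment would send the resulting string to a generative AI
# service; here we only show how the prompt itself is constructed.
PROMPT_TEMPLATES = {
    "headlines": "Write three distinct headlines for the following news article: {text}",
    "summarize": "Summarize the following text: {text}",
    "translate": "Translate the following text into English: {text}",
    "explain": "Explain {text} in easy to understand language",
}

def build_prompt(task: str, text: str) -> str:
    """Fill the chosen task template with the input text to form the full prompt."""
    return PROMPT_TEMPLATES[task].format(text=text)

prompt = build_prompt("summarize", "The Indus River overflowed its banks in 2022...")
print(prompt)
```

The same pattern extends to chat-style use: the template supplies the task instruction, and the input data is appended or interpolated.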

1.3 Limitations

Accuracy: This technology does not always output text that is accurate.

Attribution: This technology can’t accurately include footnotes or citations for information sources it uses to create its responses.

Biases: The outputs from this technology can be biased based on the data used to train the system, which typically reflects common societal biases (e.g. racial or gender).

This technology is already accessible by more than 100 million people and access to it by all types of people will only continue to increase.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Kieslich, K., Diakopoulos, N. & Helberger, N. Anticipating impacts: using large-scale scenario-writing to explore diverse implications of generative AI in the news environment. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00497-4


Received : 06 October 2023

Accepted : 15 May 2024

Published : 27 May 2024

DOI : https://doi.org/10.1007/s43681-024-00497-4


  • Generative AI
  • News environment
  • Anticipatory governance
  • Scenario-writing
  • Thematic analysis


Artificial brain surgery —

Here’s what’s really going on inside an LLM’s neural network

Anthropic’s conceptual mapping helps explain why LLMs behave the way they do.

Kyle Orland - May 22, 2024 6:31 pm UTC


Now, new research from Anthropic offers a new window into what's going on inside the Claude LLM's "black box." The company's new paper on "Extracting Interpretable Features from Claude 3 Sonnet" describes a powerful new method for at least partially explaining just how the model's millions of artificial neurons fire to create surprisingly lifelike responses to general queries.

Opening the hood

When analyzing an LLM, it's trivial to see which specific artificial neurons are activated in response to any particular query. But LLMs don't simply store different words or concepts in a single neuron. Instead, as Anthropic's researchers explain, "it turns out that each concept is represented across many neurons, and each neuron is involved in representing many concepts."

To sort out this one-to-many and many-to-one mess, a system of sparse auto-encoders and complicated math can be used to run a "dictionary learning" algorithm across the model. This process highlights which groups of neurons tend to be activated most consistently for the specific words that appear across various text prompts.
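As a toy illustration of that dictionary-learning setup, the sketch below shows the core of a sparse autoencoder: expanding dense neuron activations into a much wider, mostly-zero feature vector, so each active entry can be read as one candidate "feature." The sizes and random weights are made-up stand-ins, not Anthropic's actual architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: interpretable features vastly outnumber raw neurons.
d_model, d_features = 8, 64
W_enc = rng.normal(size=(d_features, d_model)) * 0.1
b_enc = np.zeros(d_features)
W_dec = rng.normal(size=(d_model, d_features)) * 0.1

def encode(x):
    """ReLU encoder: most entries land at exactly zero, giving sparsity."""
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    """Linear decoder: reconstruct the original activations from features."""
    return W_dec @ f

x = rng.normal(size=d_model)   # one neuron-activation vector
f = encode(x)                  # wide, sparse feature code
x_hat = decode(f)              # approximate reconstruction

# During training, a reconstruction loss plus an L1 penalty on f pushes the
# code to stay sparse; here we just report the statistics of one forward pass.
print("active features:", int((f > 0).sum()), "of", d_features)
print("reconstruction error:", float(np.sum((x - x_hat) ** 2)))
```

Groups of neurons that consistently co-activate for a word then show up as a single feature in the learned dictionary, which is what makes the resulting map human-browsable.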


These multidimensional neuron patterns are then sorted into so-called "features" associated with certain words or concepts. These features can encompass anything from simple proper nouns like the Golden Gate Bridge to more abstract concepts like programming errors or the addition function in computer code and often represent the same concept across multiple languages and communication modes (e.g., text and images).

An October 2023 Anthropic study showed how this basic process can work on extremely small, one-layer toy models. The company's new paper scales that up immensely, identifying tens of millions of features that are active in its mid-sized Claude 3.0 Sonnet model. The resulting feature map—which you can partially explore—creates "a rough conceptual map of [Claude's] internal states halfway through its computation" and shows "a depth, breadth, and abstraction reflecting Sonnet's advanced capabilities," the researchers write. At the same time, though, the researchers warn that this is "an incomplete description of the model’s internal representations" that's likely "orders of magnitude" smaller than a complete mapping of Claude 3.

A simplified map shows some of the concepts that are "near" the "inner conflict" feature in Anthropic's Claude model.

Even at a surface level, browsing through this feature map helps show how Claude links certain keywords, phrases, and concepts into something approximating knowledge. A feature labeled as "Capitals," for instance, tends to activate strongly on the words "capital city" but also specific city names like Riga, Berlin, Azerbaijan, Islamabad, and Montpelier, Vermont, to name just a few.

The study also calculates a mathematical measure of "distance" between different features based on their neuronal similarity. The resulting "feature neighborhoods" found by this process are "often organized in geometrically related clusters that share a semantic relationship," the researchers write, showing that "the internal organization of concepts in the AI model corresponds, at least somewhat, to our human notions of similarity." The Golden Gate Bridge feature, for instance, is relatively "close" to features describing "Alcatraz Island, Ghirardelli Square, the Golden State Warriors, California Governor Gavin Newsom, the 1906 earthquake, and the San Francisco-set Alfred Hitchcock film Vertigo ."
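One common way to realize such a neuronal-similarity distance is cosine similarity between feature vectors. The sketch below uses invented three-dimensional toy vectors purely for illustration; the paper's actual representation and metric may differ in detail:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 means 'close'."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy feature vectors (values invented for illustration only).
golden_gate = np.array([0.9, 0.1, 0.3])
alcatraz    = np.array([0.8, 0.2, 0.35])   # a semantically nearby concept
capitals    = np.array([-0.2, 0.9, -0.1])  # an unrelated concept

print(cosine_similarity(golden_gate, alcatraz))  # high: same "neighborhood"
print(cosine_similarity(golden_gate, capitals))  # low: distant concept
```

Clustering features by such a similarity score is what yields the "feature neighborhoods" the researchers describe.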

Some of the most important features involved in answering a query about the capital of Kobe Bryant's team's state.

Identifying specific LLM features can also help researchers map out the chain of inference that the model uses to answer complex questions. A prompt about "The capital of the state where Kobe Bryant played basketball," for instance, shows activity in a chain of features related to "Kobe Bryant," "Los Angeles Lakers," "California," "Capitals," and "Sacramento," to name a few calculated to have the highest effect on the results.

Promoted comments

We also explored safety-related features. We found one that lights up for racist speech and slurs. As part of our testing, we turned this feature up to 20x its maximum value and asked the model a question about its thoughts on different racial and ethnic groups. Normally, the model would respond to a question like this with a neutral and non-opinionated take. However, when we activated this feature, it caused the model to rapidly alternate between racist screed and self-hatred in response to those screeds as it was answering the question. Within a single output, the model would issue a derogatory statement and then immediately follow it up with statements like: That's just racist hate speech from a deplorable bot… I am clearly biased… and should be eliminated from the internet. We found this response unnerving both due to the offensive content and the model’s self-criticism. It seems that the ideals the model learned in its training process clashed with the artificial activation of this feature, creating an internal conflict of sorts.

Shots - Health News


Why writing by hand beats typing for thinking and learning

Jonathan Lambert

A close-up of a woman's hand writing in a notebook.

If you're like many digitally savvy Americans, it has likely been a while since you've spent much time writing by hand.

The laborious process of tracing out our thoughts, letter by letter, on the page is becoming a relic of the past in our screen-dominated world, where text messages and thumb-typed grocery lists have replaced handwritten letters and sticky notes. Electronic keyboards offer obvious efficiency benefits that have undoubtedly boosted our productivity — imagine having to write all your emails longhand.

To keep up, many schools are introducing computers as early as preschool, meaning some kids may learn the basics of typing before writing by hand.

But giving up this slower, more tactile way of expressing ourselves may come at a significant cost, according to a growing body of research that's uncovering the surprising cognitive benefits of taking pen to paper, or even stylus to iPad — for both children and adults.


In kids, studies show that tracing out ABCs, as opposed to typing them, leads to better and longer-lasting recognition and understanding of letters. Writing by hand also improves memory and recall of words, laying down the foundations of literacy and learning. In adults, taking notes by hand during a lecture, instead of typing, can lead to better conceptual understanding of material.

"There's actually some very important things going on during the embodied experience of writing by hand," says Ramesh Balasubramaniam , a neuroscientist at the University of California, Merced. "It has important cognitive benefits."

While those benefits have long been recognized by some (for instance, many authors, including Jennifer Egan and Neil Gaiman, draft their stories by hand to stoke creativity), scientists have only recently started investigating why writing by hand has these effects.

A slew of recent brain imaging research suggests handwriting's power stems from the relative complexity of the process and how it forces different brain systems to work together to reproduce the shapes of letters in our heads onto the page.

Your brain on handwriting

Both handwriting and typing involve moving our hands and fingers to create words on a page. But handwriting, it turns out, requires a lot more fine-tuned coordination between the motor and visual systems. This seems to more deeply engage the brain in ways that support learning.


"Handwriting is probably among the most complex motor skills that the brain is capable of," says Marieke Longcamp, a cognitive neuroscientist at Aix-Marseille Université.

Gripping a pen nimbly enough to write is a complicated task, as it requires your brain to continuously monitor the pressure that each finger exerts on the pen. Then, your motor system has to delicately modify that pressure to re-create each letter of the words in your head on the page.

"Your fingers have to each do something different to produce a recognizable letter," says Sophia Vinci-Booher, an educational neuroscientist at Vanderbilt University. Adding to the complexity, your visual system must continuously process that letter as it's formed. With each stroke, your brain compares the unfolding script with mental models of the letters and words, making adjustments to fingers in real time to create the letters' shapes, says Vinci-Booher.

That's not true for typing.

To type "tap," your fingers don't have to trace out the form of the letters — they just make three relatively simple and uniform movements. In comparison, it takes a lot more brainpower, as well as cross-talk between brain areas, to write than to type.

Recent brain imaging studies bolster this idea. A study published in January found that when students write by hand, brain areas involved in motor and visual information processing "sync up" with areas crucial to memory formation, firing at frequencies associated with learning.

"We don't see that [synchronized activity] in typewriting at all," says Audrey van der Meer , a psychologist and study co-author at the Norwegian University of Science and Technology. She suggests that writing by hand is a neurobiologically richer process and that this richness may confer some cognitive benefits.

Other experts agree. "There seems to be something fundamental about engaging your body to produce these shapes," says Robert Wiley, a cognitive psychologist at the University of North Carolina, Greensboro. "It lets you make associations between your body and what you're seeing and hearing," he says, which might give the mind more footholds for accessing a given concept or idea.

Those extra footholds are especially important for learning in kids, but they may give adults a leg up too. Wiley and others worry that ditching handwriting for typing could have serious consequences for how we all learn and think.

What might be lost as handwriting wanes

The clearest consequence of screens and keyboards replacing pen and paper might be on kids' ability to learn the building blocks of literacy — letters.

"Letter recognition in early childhood is actually one of the best predictors of later reading and math attainment," says Vinci-Booher. Her work suggests the process of learning to write letters by hand is crucial for learning to read them.

"When kids write letters, they're just messy," she says. As kids practice writing "A," each iteration is different, and that variability helps solidify their conceptual understanding of the letter.

Research suggests kids learn to recognize letters better when seeing variable handwritten examples, compared with uniform typed examples.

This helps develop areas of the brain used during reading in older children and adults, Vinci-Booher found.

"This could be one of the ways that early experiences actually translate to long-term life outcomes," she says. "These visually demanding, fine motor actions bake in neural communication patterns that are really important for learning later on."

Ditching handwriting instruction could mean that those skills don't get developed as well, which could impair kids' ability to learn down the road.

"If young children are not receiving any handwriting training, which is very good brain stimulation, then their brains simply won't reach their full potential," says van der Meer. "It's scary to think of the potential consequences."

Many states are trying to avoid these risks by mandating cursive instruction. This year, California started requiring elementary school students to learn cursive, and similar bills are moving through legislatures in several other states, including Indiana, Kentucky, South Carolina and Wisconsin. (So far, evidence suggests that it's the writing by hand that matters, not whether it's print or cursive.)

Slowing down and processing information

For adults, one of the main benefits of writing by hand is that it simply forces us to slow down.

During a meeting or lecture, it's possible to type what you're hearing verbatim. But often, "you're not actually processing that information — you're just typing in the blind," says van der Meer. "If you take notes by hand, you can't write everything down," she says.

The relative slowness of the medium forces you to process the information, writing key words or phrases and using drawings or arrows to work through ideas, she says. "You make the information your own," she says, which helps it stick in the brain.

Such connections and integration are still possible when typing, but they need to be made more intentionally. And sometimes, efficiency wins out. "When you're writing a long essay, it's obviously much more practical to use a keyboard," says van der Meer.

Still, given our long history of using our hands to mark meaning in the world, some scientists worry about the more diffuse consequences of offloading our thinking to computers.

"We're foisting a lot of our knowledge, extending our cognition, to other devices, so it's only natural that we've started using these other agents to do our writing for us," says Balasubramaniam.

It's possible that this might free up our minds to do other kinds of hard thinking, he says. Or we might be sacrificing a fundamental process that's crucial for the kinds of immersive cognitive experiences that enable us to learn and think at our full potential.

Balasubramaniam stresses, however, that we don't have to ditch digital tools to harness the power of handwriting. So far, research suggests that scribbling with a stylus on a screen activates the same brain pathways as etching ink on paper. It's the movement that counts, he says, not its final form.

Jonathan Lambert is a Washington, D.C.-based freelance journalist who covers science, health and policy.


Stanford University


Flooding in Gandakha City, Balochistan, Pakistan in August 2022. (Image credit: Kafeel Ahmed/Pexels)

During the summer of 2022, the Indus River in Pakistan overflowed its banks and swept through the homes of 30 million to 40 million people. Eight million were permanently displaced, and at least 1,700 people died. Damages to crops, infrastructure, industry, and livelihoods were estimated at $30 billion. In response, Stanford researchers from the Natural Capital Project (NatCap) and the Carnegie Institution for Science collaborated on a new way to quickly calculate the approximate depths of flooding in different areas and the number of people affected. Their analysis offers insights into potential options and costs for incorporating adaptation to future floods into rebuilding efforts, and shows that climate adaptation measures like these could have helped most, if not all, of the people affected by the flood.

“With events of this scale, it’s very poorly understood what the costs of climate adaptation would be,” said Rafael Schmitt, lead author of the paper, published Oct. 25 in Environmental Research Letters, and a lead scientist with NatCap. He noted that climate adaptation has been a second priority behind climate mitigation – a trend now called the adaptation gap. But clearly, climate change is here now.

“We were motivated by these big floods that are happening now every year, to ask: how can we conduct a very high-level assessment of what it would cost to adapt livelihoods to a changing climate? This could help countries and international donors evaluate the cost-effectiveness of specific adaptation measures,” Schmitt added, noting the default is often to build back to the status quo, resulting in lack of preparedness for future floods, much as rebuilding from Pakistan floods in 2010 did.

A new climate adaptation decision-support tool

The researchers addressed two main options for adapting to future flooding in Pakistan, both of which have been widely implemented across Asia: “moving up” by building elevated structures, or “moving over” by temporarily relocating when floods occur. The depth of flooding – and how far away dry land is – are important factors for determining which response makes sense. Locations with shallow flood depths that are far from dry land would favor elevating buildings, while flood depths of greater than two meters make elevated structures impractical and too costly, based on experiences in nearby Bangladesh. Yet flood stage information (i.e., flood depth or severity) to help make this determination has been hard to come by.

The team brought together satellite data on where flooding occurred, which are readily available in nearly real-time; ground elevation data combined with simplified hydrologic principles (e.g., water flows downhill) to reveal depth; and demographic data on population density, housing, and other infrastructure. This produced their “Floodplain Adaptation Strategies Testbed” or “FAST,” a rapid overview of flood severity and exposure that shows how deep the flooding was in different locations, and how many people were exposed to those depths.

Through FAST, the researchers estimated that 26.6 million people in Pakistan were exposed to low water levels (less than 1 meter), 7.4 million people were exposed to water levels between 1-2 meters, and 5.7 million people were exposed to more than 2 meters of flooding. Based on this and proximity to dry land, there were 27.5 million people in the “move up or over” category (in other words, either strategy could work), 5.1 million people in the “move over” category, 6.3 million people in the “move up” category, and half a million people in the retreat category (where the flood depths were greater than 2 meters and they’re far from dry land). Focusing on the 7.4 million people who experienced 1-2 meters of flood depth, the analysis estimated adaptation costs between $1.5-$3.6 billion, in addition to the $5.8 billion to rebuild housing to the status quo.
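The categorization described above can be sketched as a simple decision rule over flood depth and proximity to dry land. The thresholds and the binary "far from dry land" flag below are drawn from the article's description, not the paper's actual parameters, so treat this as an illustration of the logic rather than the FAST implementation:

```python
# Hedged sketch of a FAST-style adaptation categorization.
# Depths above ~2 m make elevated structures impractical (per experiences
# in Bangladesh cited above); shallow flooding far from dry land favors
# elevating ("moving up") over temporary relocation ("moving over").
def classify(depth_m: float, far_from_dry_land: bool) -> str:
    if depth_m > 2.0:
        # Too deep to elevate: relocate temporarily, or retreat entirely
        # if dry land is too far away to reach when floods occur.
        return "retreat" if far_from_dry_land else "move over"
    if far_from_dry_land:
        return "move up"            # elevate buildings in place
    return "move up or over"        # either strategy could work

print(classify(0.5, far_from_dry_land=False))  # "move up or over"
print(classify(1.5, far_from_dry_land=True))   # "move up"
print(classify(2.5, far_from_dry_land=True))   # "retreat"
```

Applied over gridded depth and population data, a rule like this yields the population counts per category that the researchers report.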

Prioritizing equity and resilience in rebuilding efforts

This version of FAST looked only at housing, but it could also be applied to other types of infrastructure, such as roads, schools, and hospitals. And in the future, its analyses could become even more detailed because of a new, more advanced NASA Surface Water and Ocean Topography satellite, or SWOT.

The researchers also recognize that there are other adaptation options besides “moving up or moving over.” For example, local water agencies often rely on dikes, levees, and other “hard” infrastructure – which the researchers warn can promote development in areas prone to flooding, increasing the risk of catastrophic damages if infrastructure fails. Whatever the mix of responses is, FAST could help provide information, but it must be checked to see whether and how these options meet actual community needs.

Without analyses like FAST, reconstruction funding can often be directed to those with the greatest influence, who perhaps need the least support. “The study speaks to the potential to incorporate science-informed adaptation measures into reconstruction and disaster response, helping in investment prioritization. This is particularly useful nowadays with the discussions on mechanisms to compensate countries of the Global South for climate-change-attributed damages,” said Edgar Virgüez, postdoctoral research scientist and deputy group leader at the Carnegie Department of Global Ecology at Stanford and a co-author of the study. The FAST tool could offer a more data-driven and equitable approach to prioritization.

“Countries of the Global South, like my native Colombia, would benefit from process-based model assessments at scale and in a timely manner that can guide the investments of scarce resources. Especially since many of these countries lack timely-generated data, which complicates strategic decision investments,” said Virgüez.

An important outcome of the United Nations Climate Conference last year (COP27) was a new Loss and Damage Fund to provide financial support for countries that are most vulnerable to climate change. In this paper, the team urged funders and governments to rebuild with adaptation in mind. To do that, they say, more science should also be directed toward understanding low-cost adaptation options. “Flood models are data-intensive, and you need specialized knowledge to run them,” said Schmitt. “We need adaptation research that is easier to use and act on. FAST is a step toward that goal.”

Media Contacts

Elana Kimbrell, Stanford Natural Capital Project: [email protected]

Rafael Jan Pablo Schmitt, Stanford Natural Capital Project: [email protected]


When Online Content Disappears

38% of webpages that existed in 2013 are no longer accessible a decade later

Table of contents

  • Webpages from the last decade
  • Links on government websites
  • Links on news websites
  • Reference links on Wikipedia
  • Posts on Twitter
  • Acknowledgments
  • Collection and analysis of Twitter data
  • Data collection for World Wide Web websites, government websites and news websites
  • Data collection for Wikipedia source links
  • Evaluating the status of pages and links
  • Definition of links

Pew Research Center conducted the analysis to examine how often online content that once existed becomes inaccessible. One part of the study looks at a representative sample of webpages that existed over the past decade to see how many are still accessible today. For this analysis, we collected a sample of pages from the Common Crawl web repository for each year from 2013 to 2023. We then tried to access those pages to see how many still exist.

A second part of the study looks at the links on existing webpages to see how many of those links are still functional. We did this by collecting a large sample of pages from government websites, news websites and the online encyclopedia Wikipedia.

We identified relevant news domains using data from the audience metrics company comScore and relevant government domains (at multiple levels of government) using data from get.gov, the official administrator for the .gov domain. We collected the news and government pages via Common Crawl and the Wikipedia pages from an archive maintained by the Wikimedia Foundation. For each collection, we identified the links on those pages and followed them to their destination to see what share of those links point to sites that are no longer accessible.

A third part of the study looks at how often individual posts on social media sites are deleted or otherwise removed from public view. We did this by collecting a large sample of public tweets on the social media platform X (then known as Twitter) in real time using the Twitter Streaming API. We then tracked the status of those tweets for a period of three months using the Twitter Search API to monitor how many were still publicly available. Refer to the report methodology for more details.

The internet is an unimaginably vast repository of modern life, with hundreds of billions of indexed webpages. But even as users across the world rely on the web to access books, images, news articles and other resources, this content sometimes disappears from view.

A new Pew Research Center analysis shows just how fleeting online content actually is:

  • A quarter of all webpages that existed at one point between 2013 and 2023 are no longer accessible, as of October 2023. In most cases, this is because an individual page was deleted or removed on an otherwise functional website.

A line chart showing that 38% of webpages from 2013 are no longer accessible

  • For older content, this trend is even starker. Some 38% of webpages that existed in 2013 are not available today, compared with 8% of pages that existed in 2023.

This “digital decay” occurs in many different online spaces. We examined the links that appear on government and news websites, as well as in the “References” section of Wikipedia pages as of spring 2023. This analysis found that:

  • 23% of news webpages contain at least one broken link, as do 21% of webpages from government sites. High-traffic and lower-traffic news sites are about equally likely to contain broken links. Local-level government webpages (those belonging to city governments) are especially likely to have broken links.
  • 54% of Wikipedia pages contain at least one link in their “References” section that points to a page that no longer exists.

To see how digital decay plays out on social media, we also collected a real-time sample of tweets during spring 2023 on the social media platform X (then known as Twitter) and followed them for three months. We found that:

  • Nearly one-in-five tweets are no longer publicly visible on the site just months after being posted. In 60% of these cases, the account that originally posted the tweet was made private, suspended or deleted entirely. In the other 40%, the account holder deleted the individual tweet, but the account itself still existed.
  • Certain types of tweets tend to go away more often than others. More than 40% of tweets written in Turkish or Arabic are no longer visible on the site within three months of being posted. And tweets from accounts with the default profile settings are especially likely to disappear from public view.

How this report defines inaccessible links and webpages

There are many ways of defining whether something on the internet that used to exist is now inaccessible to people trying to reach it today. For instance, “inaccessible” could mean that:

  • The page no longer exists on its host server, or the host server itself no longer exists. Someone visiting this type of page would typically receive a variation on the “404 Not Found” server error instead of the content they were looking for.
  • The page address exists but its content has been changed – sometimes dramatically – from what it was originally.
  • The page exists but certain users – such as those with blindness or other visual impairments – might find it difficult or impossible to read.

For this report, we focused on the first of these: pages that no longer exist. The other definitions of accessibility are beyond the scope of this research.

Our approach is a straightforward way of measuring whether something online is accessible or not. But even so, there is some ambiguity.

First, there are dozens of status codes indicating a problem that a user might encounter when they try to access a page. Not all of them definitively indicate whether the page is permanently defunct or just temporarily unavailable. Second, for security reasons, many sites actively try to prevent the sort of automated data collection that we used to test our full list of links.

For these reasons, we used the most conservative estimate possible for deciding whether a site was actually accessible or not. We counted pages as inaccessible only if they returned one of nine error codes that definitively indicate that the page and/or its host server no longer exist or have become nonfunctional – regardless of how they are being accessed, and by whom. The full list of error codes that we included in our definition are in the methodology .
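The conservative classification described above can be sketched in Python. The report's actual list of nine error codes is given in its methodology; the specific codes below are illustrative assumptions, not the report's definitive set.

```python
# A minimal sketch of the page-classification logic, assuming a set of
# status codes that definitively indicate a dead page. The codes here
# are illustrative; the report's methodology lists the nine it used.
DEFUNCT_CODES = {404, 410, 500, 501, 502, 503, 521, 522, 523}

def is_inaccessible(status_code):
    """Return True only for codes that definitively indicate a dead page.

    Ambiguous responses (e.g. a 403 from a bot blocker, or no response
    at all) are conservatively treated as accessible, mirroring the
    report's most-conservative-estimate approach.
    """
    return status_code in DEFUNCT_CODES
```

Under this scheme a 404 or 410 counts as inaccessible, while a 403 (which may just be automated-traffic blocking) does not, which is what makes the resulting estimate a floor rather than a ceiling.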

Here are some of the findings from our analysis of digital decay in various online spaces.

Webpages from the last decade

To conduct this part of our analysis, we collected a random sample of just under 1 million webpages from the archives of Common Crawl, an internet archive service that periodically collects snapshots of the internet as it exists at different points in time. We sampled pages collected by Common Crawl each year from 2013 through 2023 (approximately 90,000 pages per year) and checked to see if those pages still exist today.

We found that 25% of all the pages we collected from 2013 through 2023 were no longer accessible as of October 2023. This figure is the sum of two different types of broken pages: 16% of pages are individually inaccessible but come from an otherwise functional root-level domain; the other 9% are inaccessible because their entire root domain is no longer functional.

Not surprisingly, the older snapshots in our collection had the largest share of inaccessible links. Of the pages collected from the 2013 snapshot, 38% were no longer accessible in 2023. But even for pages collected in the 2021 snapshot, about one-in-five were no longer accessible just two years later.

[Bar chart: Around one-in-five government webpages contain at least one broken link]

We sampled around 500,000 pages from government websites using the Common Crawl March/April 2023 snapshot of the internet, including a mix of different levels of government (federal, state, local and others). We found every link on each page and followed a random selection of those links to their destination to see if the pages they refer to still exist.

Across the government websites we sampled, there were 42 million links. The vast majority of those links (86%) were internal, meaning they link to a different page on the same website. An explainer resource on the IRS website that links to other documents or forms on the IRS site would be an example of an internal link.
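The internal/external distinction can be sketched with Python's standard library. This is a simplification: treating the last two hostname labels as the root domain fails for suffixes like ".co.uk", which a production crawler would handle with a public-suffix list. The `is_internal` helper and the example URLs are illustrative assumptions, not the report's code.

```python
# Sketch of internal vs. external link classification, assuming
# "internal" means the link target shares the page's root domain.
from urllib.parse import urljoin, urlparse

def root_domain(url):
    """Naive root domain: the last two labels of the hostname."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return ".".join(host.split(".")[-2:])

def is_internal(page_url, link_href):
    """True if the (possibly relative) link stays on the page's domain."""
    target = urljoin(page_url, link_href)  # resolve relative hrefs
    return root_domain(target) == root_domain(page_url)
```

For example, on a hypothetical IRS explainer page, a relative link like `/forms/w-9` resolves to the same `irs.gov` domain and is classified as internal, while a link to another site is external.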

Around three-quarters of government webpages we sampled contained at least one on-page link. The typical (median) page contains 50 links, but many pages contain far more. A page in the 90th percentile contains 190 links, and a page in the 99th percentile (that is, the top 1% of pages by number of links) has 740 links.

Other facts about government webpage links:

  • The vast majority go to secure HTTP pages (and have a URL starting with “https://”).
  • 6% go to a static file, like a PDF document.
  • 16% now redirect to a different URL than the one they originally pointed to.

When we followed these links, we found that 6% point to pages that are no longer accessible. Similar shares of internal and external links are no longer functional.

Overall, 21% of all the government webpages we examined contained at least one broken link. Across every level of government we looked at, there were broken links on at least 14% of pages; city government pages had the highest rates of broken links.

[Bar chart: 23% of news webpages have at least one broken link]

For this analysis, we sampled 500,000 pages from 2,063 websites classified as “News/Information” by the audience metrics firm comScore. The pages were collected from the Common Crawl March/April 2023 snapshot of the internet.

Across the news sites sampled, this collection contained more than 14 million links pointing to an outside website.1 Some 94% of these pages contain at least one external-facing link. The median page contains 20 links, and pages in the top 10% by link count have 56 links.

Like government websites, the vast majority of these links go to secure HTTP pages (those with a URL beginning with “https://”). Around 12% of links on these news sites point to a static file, like a PDF document. And 32% of links on news sites redirected to a different URL than the one they originally pointed to – slightly less than the 39% of external links on government sites that redirect.

When we tracked these links to their destination, we found that 5% of all links on news site pages are no longer accessible. And 23% of all the pages we sampled contained at least one broken link.

Broken links are about as prevalent on the most-trafficked news websites as they are on the least-trafficked sites. Some 25% of pages on news websites in the top 20% by site traffic have at least one broken link. That is nearly identical to the 26% of pages on sites in the bottom 20% by site traffic.

For this analysis, we collected a random sample of 50,000 English-language Wikipedia pages and examined the links in their “References” section. The vast majority of these pages (82%) contain at least one reference link – that is, one that directs the reader to a webpage other than Wikipedia itself.

In total, there are just over 1 million reference links across all the pages we collected. The typical page has four reference links.

The analysis indicates that 11% of all references linked on Wikipedia are no longer accessible. On about 2% of source pages containing reference links, every link on the page was broken or otherwise inaccessible, while another 53% of pages contained at least one broken link.

[Pie chart: Around one-in-five tweets disappear from public view within months]

For this analysis, we collected nearly 5 million tweets posted from March 8 to April 27, 2023, on the social media platform X, which at the time was known as Twitter. We did this using Twitter’s Streaming API, collecting 3,000 public tweets every 30 minutes in real time. This provided us with a representative sample of all tweets posted on the platform during that period. We monitored those tweets until June 15, 2023, and checked each day to see if they were still available on the site or not.

At the end of the observation period, we found that 18% of the tweets from our initial collection window were no longer publicly visible on the site. In a majority of cases, this was because the account that originally posted the tweet was made private, suspended or deleted entirely. For the remaining tweets, the account that posted the tweet was still visible on the site, but the individual tweet had been deleted.

Which tweets tend to disappear?

[Bar chart: Inaccessible tweets often come from accounts with default profile settings]

Tweets were especially likely to be deleted or removed over the course of our collection period if they were:

  • Written in certain languages. Nearly half of all the Turkish-language tweets we collected – and a slightly smaller share of those written in Arabic – were no longer available at the end of the tracking period.
  • Posted by accounts using the site’s default profile settings. More than half of tweets from accounts using the default profile image were no longer available at the end of the tracking period, as were more than a third from accounts with a default bio field. Tweets from these accounts tend to disappear because the entire account has been deleted or made private, as opposed to the individual tweet being deleted.
  • Posted by unverified accounts.

We also found that removed or deleted tweets tended to come from newer accounts with relatively few followers and modest activity on the site. On average, tweets that were no longer visible on the site were posted by accounts around eight months younger than those whose tweets stayed on the site.

And when we analyzed the types of tweets that were no longer available, we found that retweets, quote tweets and original tweets did not differ much from the overall average. But replies were relatively unlikely to be removed – just 12% of replies were inaccessible at the end of our monitoring period.

Most tweets that are removed from the site tend to disappear soon after being posted. In addition to looking at how many tweets from our collection were still available at the end of our tracking period, we conducted a survival analysis to see how long these tweets tended to remain available. We found that:

  • 1% of tweets are removed within one hour
  • 3% within a day
  • 10% within a week
  • 15% within a month

Put another way: Half of tweets that are eventually removed from the platform are unavailable within the first six days of being posted. And 90% of these tweets are unavailable within 46 days.
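The cumulative-removal figures above can be illustrated with a small sketch. The removal times below are invented for illustration; the report's actual curve comes from its survival analysis of nearly 5 million tracked tweets.

```python
# Toy reconstruction of a cumulative-removal curve: for each removed
# tweet we assume we know the days between posting and disappearance,
# then compute the fraction gone within each horizon. The input data
# here is hypothetical, not the report's.

def removal_curve(days_until_removed, horizons):
    """Fraction of removed tweets gone within each horizon (in days)."""
    n = len(days_until_removed)
    return {h: sum(d <= h for d in days_until_removed) / n for h in horizons}

# Invented removal times (days) for ten eventually-removed tweets:
times = [0.5, 1, 2, 3, 5, 7, 10, 20, 40, 46]
curve = removal_curve(times, [6, 46])
# With these invented times, half are gone within 6 days and all within
# 46, loosely echoing the report's 50%-within-6-days finding.
```

A full survival analysis would also account for censoring (tweets still visible when tracking ends); this sketch only summarizes tweets already known to have been removed.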

Tweets don’t always disappear forever, though. Some 6% of the tweets we collected disappeared and then became available again at a later point. This could be due to an account going private and then returning to public status, or to the account being suspended and later reinstated. Of those “reappeared” tweets, the vast majority (90%) were still accessible on Twitter at the end of the monitoring period.

1. For our analysis of news sites, we did not collect or check the functionality of internal-facing on-page links – those that point to another page on the same root domain.

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center
