
Evaluation Research: Definition, Methods and Examples


Content Index

  • What is evaluation research?
  • Why do evaluation research?
  • Quantitative methods
  • Qualitative methods
  • Process evaluation research question examples
  • Outcome evaluation research question examples

What is evaluation research?

Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. It is the systematic assessment of the worth or merit of the time, money, effort, and resources spent in order to achieve a goal.

Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, political savvy, and other skills that conventional social research demands far less of. Evaluation research also requires the researcher to keep the interests of stakeholders in mind.

Evaluation research is a type of applied research, so it is intended to have a real-world effect. Many methods, such as surveys and experiments, can be used to carry it out. The process is rigorous and systematic: it involves collecting data about organizations, processes, projects, services, and/or resources, analyzing that data, and reporting the findings. Evaluation research enhances knowledge and decision-making and leads to practical applications.


Why do evaluation research?

The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as valuable if it helps in decision-making. However, evaluation research does not always create an impact that can be applied elsewhere; sometimes it fails to influence short-term decisions. It is equally true that an evaluation which initially seems to have no influence can have a delayed impact when the situation becomes more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic use of measurable feedback.

Below are some of the benefits of evaluation research:

  • Gain insights about a project or program and its operations

Evaluation research lets you understand what works and what doesn’t: where you were, where you are, and where you are headed. You can identify areas of improvement as well as strengths, which helps you figure out what to focus on and whether there are any threats to your business. You can also find out whether there are hidden sectors in the market that remain untapped.

  • Improve practice

It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Unless it is a two-way communication, there is no way to improve on what you have to offer. Evaluation research gives an opportunity to your employees and customers to express how they feel and if there’s anything they would like to change. It also lets you modify or adopt a practice such that it increases the chances of success.

  • Assess the effects

After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure whether the intended benefits are really reaching the target audience and, if so, how effectively.

  • Build capacity

Evaluations help you analyze demand patterns and predict whether you will need more funds, upgraded skills, or more efficient operations. They let you find gaps in the production-to-delivery chain and possible ways to fill them.

Methods of evaluation research

All market research methods involve collecting and analyzing data, judging the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing the results, which includes the use of data collection techniques and the application of statistical methods.

Some popular evaluation methods are input measurement, output or performance measurement, impact or outcomes assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. A few types of evaluation do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analysis. Evaluation research is concerned above all with the information-processing and feedback functions of evaluation.

These methods can be broadly classified as quantitative and qualitative methods.

Quantitative research methods are used to measure anything tangible and answer questions such as:

  • Who was involved?
  • What were the outcomes?
  • What was the price?

The best ways to collect quantitative data are surveys, questionnaires, and polls. You can also create pre-tests and post-tests, review existing documents and databases, or gather clinical data.

Surveys are used to gather the opinions, feedback, or ideas of your employees or customers and consist of various question types. They can be conducted face-to-face, by telephone, by mail, or online. Online surveys do not require a human interviewer and are far more efficient and practical. You can view the results on the research tool’s dashboard and dig deeper using filter criteria based on factors such as age, gender, and location. You can also apply survey logic such as branching, quotas, chain surveys, and looping to reduce the time it takes to both create and respond to the survey, and you can generate reports that involve statistical formulae and present data in a form that can be readily absorbed in meetings.


Quantitative data measure the depth and breadth of an initiative, for instance, the number of people who participated in a non-profit event or the number of people who enrolled in a new course at a university. Quantitative data collected before and after a program can show its results and impact.
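As a rough illustration of how such before-and-after data might be compared, the sketch below computes the mean change and a paired t-test. The program and the scores are invented, and it assumes the same participants were measured at both points and that scipy is available.

```python
# Minimal sketch with invented data: comparing scores measured on the same
# participants before and after a hypothetical program.
from statistics import mean
from scipy.stats import ttest_rel  # paired-samples t-test

pre_scores = [52, 61, 48, 55, 67, 59, 50, 63]    # before the program
post_scores = [58, 66, 55, 61, 70, 64, 57, 68]   # after the program

print(f"Mean before: {mean(pre_scores):.1f}")
print(f"Mean after:  {mean(post_scores):.1f}")
print(f"Mean change: {mean(post_scores) - mean(pre_scores):.1f}")

t_stat, p_value = ttest_rel(post_scores, pre_scores)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```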

The accuracy of quantitative data to be used for evaluation research depends on how well the sample represents the population, the ease of analysis, and their consistency. Quantitative methods can fail if the questions are not framed correctly and not distributed to the right audience. Also, quantitative data do not provide an understanding of the context and may not be apt for complex issues.


Qualitative research methods are used where quantitative methods cannot solve the research problem, i.e., to measure intangible values. They answer questions such as:

  • What is the value added?
  • How satisfied are you with our service?
  • How likely are you to recommend us to your friends?
  • What will improve your experience?


Qualitative data are collected through observation, interviews, case studies, and focus groups. Creating a qualitative study involves examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions by identifying themes, clustering similar data, and finally reducing the material to points that make sense.
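For instance, once interview or focus group responses have been hand-coded with theme labels, clustering and reducing them can be as simple as tallying how often each theme appears. The sketch below is only illustrative; the theme labels and responses are hypothetical.

```python
# Minimal sketch: tallying hand-coded themes across qualitative responses.
# Theme labels and responses are hypothetical.
from collections import Counter

coded_responses = [
    ["ease of use", "price"],
    ["price", "support"],
    ["ease of use"],
    ["support", "ease of use"],
]

theme_counts = Counter(theme for response in coded_responses for theme in response)
for theme, count in theme_counts.most_common():
    print(f"{theme}: appears in {count} of {len(coded_responses)} responses")
```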

Observations may help explain behaviors as well as social context that quantitative methods generally do not uncover. Behavior and body language can be observed by watching a participant or by recording audio or video. Structured interviews can be conducted with people alone or in a group under controlled conditions, or participants may be asked open-ended qualitative research questions. Qualitative research methods are also used to understand a person’s perceptions and motivations.


The strength of this method is that group discussion can generate ideas and stimulate memories, with topics cascading as the discussion unfolds. The accuracy of qualitative data depends on how well the contextual data explain complex issues and complement quantitative data. It helps answer the “why” and “how” once the “what” has been answered. The limitations of qualitative data for evaluation research are that they are subjective, time-consuming, costly, and difficult to analyze and interpret.


Survey software can be used for both evaluation research methods. You can use the sample questions in the next section and send a survey in minutes. Using a research tool simplifies the process, from creating a survey and importing contacts to distributing the survey and generating reports that aid in research.

Examples of evaluation research

Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money, but also makes it easier to decide what data to collect, how to analyze it, and how to report it.

Evaluation research questions must be developed and agreed upon in the planning stage; however, ready-made research templates can also be used.

Process evaluation research question examples:

  • How often do you use our product in a day?
  • Were approvals taken from all stakeholders?
  • Can you report the issue from the system?
  • Can you submit the feedback from the system?
  • Was each task done as per the standard operating procedure?
  • What were the barriers to the implementation of each task?
  • Were any improvement areas discovered?

Outcome evaluation research question examples:

  • How satisfied are you with our product?
  • Did the program produce intended outcomes?
  • What were the unintended outcomes?
  • Has the program increased the knowledge of participants?
  • Were the participants of the program employable before the course started?
  • Do participants of the program have the skills to find a job after the course ended?
  • Is the knowledge of participants better compared to those who did not participate in the program?




Evaluating Research – Process, Examples and Methods


Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

The Process of Evaluating Research

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
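One rough, commonly used check on sample adequacy is the margin of error for a simple random sample. The sketch below is a simplification that assumes simple random sampling, a large population, a 95% confidence level (z ≈ 1.96), and the most conservative proportion p = 0.5.

```python
# Minimal sketch: approximate margin of error for a simple random sample.
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 500, 1000):
    print(f"n = {n:>4}: roughly ±{margin_of_error(n) * 100:.1f} percentage points")
```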

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research are as follows:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study against specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (a sketch of the basic pooling arithmetic follows this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
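As referenced in the meta-analysis item above, the basic arithmetic behind pooling study results is inverse-variance weighting. The sketch below uses hypothetical effect sizes and standard errors and implements only the simplest fixed-effect model.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of effect sizes.
# Study names, effects, and standard errors are hypothetical.
studies = [
    {"name": "Study A", "effect": 0.30, "se": 0.10},
    {"name": "Study B", "effect": 0.45, "se": 0.15},
    {"name": "Study C", "effect": 0.20, "se": 0.08},
]

weights = [1 / s["se"] ** 2 for s in studies]  # weight = 1 / variance
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled_effect:.3f} (SE = {pooled_se:.3f})")
```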

Example of Evaluating Research

Below is a sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is moderate and could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique were used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not represent the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not report the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
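To illustrate the gap noted above, the sketch below shows one way a study like this could report statistical significance and an effect size for the relationship between social media use and a well-being score. All numbers are invented, the variables are hypothetical stand-ins, and it assumes numpy and scipy are available.

```python
# Minimal sketch with invented data: simple linear regression of a
# hypothetical well-being score on hours of social media use per day,
# reporting the slope, p-value, and R-squared (an effect-size measure).
import numpy as np
from scipy import stats

hours_per_day = np.array([1.0, 2.5, 3.0, 4.5, 5.0, 6.5, 2.0, 3.5, 5.5, 1.5])
wellbeing_score = np.array([78, 70, 72, 60, 58, 52, 74, 66, 55, 80])

result = stats.linregress(hours_per_day, wellbeing_score)
print(f"Slope: {result.slope:.2f} points per additional hour")
print(f"p-value: {result.pvalue:.4f}")
print(f"R-squared: {result.rvalue ** 2:.2f}")
```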

Note: The above example is just a sample for students. Do not copy and paste it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics to Evaluate in Research

The key characteristics to examine when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest, and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

The limitations of evaluating research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.


Evaluation Research Design: Examples, Methods & Types

by busayo.longe

As you engage in tasks, you will need to take intermittent breaks to determine how much progress has been made and if any changes need to be effected along the way. This is very similar to what organizations do when they carry out  evaluation research.  

The evaluation research methodology has become one of the most important approaches for organizations as they strive to create products, services, and processes that speak to the needs of target users. In this article, we will show you how your organization can conduct successful evaluation research using Formplus .

What is Evaluation Research?

Also known as program evaluation, evaluation research is a common research design that entails carrying out a structured assessment of the value of resources committed to a project or specific goal. It often adopts social research methods to gather and analyze useful information about organizational processes and products.  

As a type of applied research, evaluation research is typically associated with real-life scenarios in organizational contexts. This means that the researcher needs to leverage common workplace skills, including interpersonal skills and teamwork, to arrive at objective research findings that will be useful to stakeholders.

Characteristics of Evaluation Research

  • Research Environment: Evaluation research is conducted in the real world; that is, within the context of an organization. 
  • Research Focus: Evaluation research is primarily concerned with measuring the outcomes of a process rather than the process itself. 
  • Research Outcome: Evaluation research is employed for strategic decision making in organizations. 
  • Research Goal: The goal of program evaluation is to determine whether a process has yielded the desired result(s). 
  • This type of research protects the interests of stakeholders in the organization. 
  • It often represents a middle-ground between pure and applied research. 
  • Evaluation research is both detailed and continuous. It pays attention to performative processes rather than descriptions. 
  • Research Process: This research design utilizes qualitative and quantitative research methods to gather relevant data about a product or action-based strategy. These methods include observation, tests, and surveys.

Types of Evaluation Research

The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different evaluation approaches and models ranging from “appreciative inquiry” to “connoisseurship” to “transformative evaluation”. Common types of evaluation research include the following: 

  • Formative Evaluation

Formative evaluation or baseline survey is a type of evaluation research that involves assessing the needs of the users or target market before embarking on a project.  Formative evaluation is the starting point of evaluation research because it sets the tone of the organization’s project and provides useful insights for other types of evaluation.  

  • Mid-term Evaluation

Mid-term evaluation entails assessing how far a project has come and determining whether it is in line with the set goals and objectives. Mid-term reviews allow the organization to determine whether a change or modification of the implementation strategy is necessary, and they also serve to track the project.

  • Summative Evaluation

This type of evaluation is also known as end-term or project-completion evaluation, and it is conducted immediately after the completion of a project. Here, the researcher examines the value and outputs of the program within the context of the projected results.

Summative evaluation allows the organization to measure the degree of success of a project. Such results can be shared with stakeholders, target markets, and prospective investors. 

  • Outcome Evaluation

Outcome evaluation is primarily target-audience oriented because it measures the effects of the project, program, or product on the users. This type of evaluation views the outcomes of the project through the lens of the target audience and it often measures changes such as knowledge-improvement, skill acquisition, and increased job efficiency. 

  • Appreciative Inquiry

Appreciative inquiry is a type of evaluation research that pays attention to result-producing approaches. It is predicated on the belief that an organization will grow in whatever direction its stakeholders pay primary attention to; if all the attention is focused on problems, identifying them becomes easy.

In carrying out appreciative inquiry, the research identifies the factors directly responsible for the positive results realized in the course of a project, analyses the reasons for these results, and intensifies the utilization of these factors. 

Evaluation Research Methodology 

There are four major evaluation research methods, namely: output measurement, input measurement, impact assessment, and service quality.

  • Output/Performance Measurement

Output measurement is a method employed in evaluation research that shows the results of an activity undertaken by an organization. In other words, performance measurement pays attention to the results achieved by the resources invested in a specific activity or organizational process.

More than investing resources in a project, organizations must be able to track the extent to which these resources have yielded results, and this is where performance measurement comes in. Output measurement allows organizations to pay attention to the effectiveness and impact of a process rather than just the process itself. 

Other key indicators of performance measurement include user-satisfaction, organizational capacity, market penetration, and facility utilization. In carrying out performance measurement, organizations must identify the parameters that are relevant to the process in question, their industry, and the target markets. 

5 Performance Evaluation Research Questions Examples

  • What is the cost-effectiveness of this project?
  • What is the overall reach of this project?
  • How would you rate the market penetration of this project?
  • How accessible is the project? 
  • Is this project time-efficient? 
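One simple way to put a number on the cost-effectiveness and reach questions above is to divide total spend by what it produced. The sketch below uses hypothetical figures and is only a starting point; real performance measurement would also track the other indicators mentioned above.

```python
# Minimal sketch: cost per person reached and cost per outcome achieved.
# All figures are hypothetical.
project_cost = 50_000        # total spend on the project
people_reached = 4_000       # overall reach
outcomes_achieved = 800      # e.g. participants who completed the program

print(f"Cost per person reached:   {project_cost / people_reached:.2f}")
print(f"Cost per outcome achieved: {project_cost / outcomes_achieved:.2f}")
```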


  • Input Measurement

In evaluation research, input measurement entails assessing the number of resources committed to a project or goal in any organization. This is one of the most common indicators in evaluation research because it allows organizations to track their investments. 

The most common indicator in input measurement is the budget, which allows organizations to evaluate and limit expenditure for a project. It is also important to measure non-monetary investments such as human capital, that is, the number of people needed for successful project execution, and production capital.

5 Input Evaluation Research Questions Examples

  • What is the budget for this project?
  • What is the timeline of this process?
  • How many employees have been assigned to this project? 
  • Do we need to purchase new machinery for this project? 
  • How many third-parties are collaborators in this project? 


  • Impact/Outcomes Assessment

In impact assessment, the evaluation researcher focuses on how the product or project affects target markets, both directly and indirectly. Outcomes assessment is somewhat challenging because many times, it is difficult to measure the real-time value and benefits of a project for the users. 

In assessing the impact of a process, the evaluation researcher must pay attention to the improvement recorded by the users as a result of the process or project in question. Hence, it makes sense to focus on cognitive and affective changes, expectation-satisfaction, and similar accomplishments of the users. 

5 Impact Evaluation Research Questions Examples

  • How has this project affected you? 
  • Has this process affected you positively or negatively?
  • What role did this project play in improving your earning power? 
  • On a scale of 1-10, how excited are you about this project?
  • How has this project improved your mental health? 


  • Service Quality

Service quality is the evaluation research method that accounts for any differences between the expectations of the target markets and their impression of the undertaken project. Hence, it pays attention to the overall service quality assessment carried out by the users. 

It is not uncommon for organizations to build up the expectations of target markets as they embark on specific projects. Service quality evaluation allows these organizations to track the extent to which the actual product or service delivery fulfils those expectations.
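A common way to express this difference between expectations and impressions is a gap score (perception minus expectation) for each service dimension, loosely in the spirit of SERVQUAL-style gap analysis. The dimensions and ratings in the sketch below are hypothetical.

```python
# Minimal sketch: service-quality gap scores (perception minus expectation).
# Dimensions and 1-10 ratings are hypothetical.
ratings = {
    "speed of delivery": {"expected": 8.5, "perceived": 7.2},
    "ease of use":       {"expected": 9.0, "perceived": 8.8},
    "customer support":  {"expected": 8.0, "perceived": 6.5},
}

for dimension, r in ratings.items():
    gap = r["perceived"] - r["expected"]
    print(f"{dimension}: gap = {gap:+.1f}")
```

A negative gap indicates that delivery fell short of expectations for that dimension.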

5 Service Quality Evaluation Questions

  • On a scale of 1-10, how satisfied are you with the product?
  • How helpful was our customer service representative?
  • How satisfied are you with the quality of service?
  • How long did it take to resolve the issue at hand?
  • How likely are you to recommend us to your network?


Uses of Evaluation Research 

  • Evaluation research is used by organizations to measure the effectiveness of activities and identify areas needing improvement. Findings from evaluation research are key to project and product advancements and are very influential in helping organizations realize their goals efficiently.     
  • The findings arrived at from evaluation research serve as evidence of the impact of the project embarked on by an organization. This information can be presented to stakeholders, customers, and can also help your organization secure investments for future projects. 
  • Evaluation research helps organizations to justify their use of limited resources and choose the best alternatives. 
  •  It is also useful in pragmatic goal setting and realization. 
  • Evaluation research provides detailed insights into projects embarked on by an organization. Essentially, it allows all stakeholders to understand multiple dimensions of a process, and to determine strengths and weaknesses. 
  • Evaluation research also plays a major role in helping organizations to improve their overall practice and service delivery. This research design allows organizations to weigh existing processes through feedback provided by stakeholders, and this informs better decision making. 
  • Evaluation research is also instrumental to sustainable capacity building. It helps you to analyze demand patterns and determine whether your organization requires more funds, upskilling or improved operations.

Data Collection Techniques Used in Evaluation Research

In gathering useful data for evaluation research, the researcher often combines quantitative and qualitative research methods . Qualitative research methods allow the researcher to gather information relating to intangible values such as market satisfaction and perception. 

On the other hand, quantitative methods are used by the evaluation researcher to assess numerical patterns, that is, quantifiable data. These methods help you measure impact and results; although they may not serve for understanding the context of the process. 

Quantitative Methods for Evaluation Research

  • Surveys

A survey is a quantitative method that allows you to gather information about a project from a specific group of people. Surveys are largely context-based and limited to target groups who are asked a set of structured questions in line with the predetermined context.

Surveys usually consist of close-ended questions that allow the evaluative researcher to gain insight into several  variables including market coverage and customer preferences. Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus . 

  • Questionnaires

A questionnaire is a common quantitative research instrument deployed in evaluation research. Typically, it is an aggregation of different types of questions or prompts which help the researcher to obtain valuable information from respondents. 

  • Polls

A poll is a common method of opinion-sampling that allows you to weigh the perception of the public about issues that affect them. The best way to achieve accuracy in polling is by conducting polls online using platforms like Formplus.

Polls are often structured as Likert questions and the options provided always account for neutrality or indecision. Conducting a poll allows the evaluation researcher to understand the extent to which the product or service satisfies the needs of the users. 

Qualitative Methods for Evaluation Research

  • One-on-One Interview

An interview is a structured conversation involving two participants; usually the researcher and the user or a member of the target market. One-on-One interviews can be conducted physically, via the telephone and through video conferencing apps like Zoom and Google Meet. 

  • Focus Groups

A focus group is a research method that involves interacting with a limited number of persons within your target market, who can provide insights on market perceptions and new products. 

  • Qualitative Observation

Qualitative observation is a research method that allows the evaluation researcher to gather useful information from the target audience through a variety of subjective approaches. This method is more extensive than quantitative observation because it deals with a smaller sample size, and it also utilizes inductive analysis. 

  • Case Studies

A case study is a research method that helps the researcher to gain a better understanding of a subject or process. Case studies involve in-depth research into a given subject, to understand its functionalities and successes. 

How to Use the Formplus Online Form Builder for an Evaluation Survey

  • Sign into Formplus

In the Formplus builder, you can easily create your evaluation survey by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on “Create Form ” to begin. 


  • Edit Form Title

Click on the field provided to input your form title, for example, “Evaluation Research Survey”.


Click on the edit button to edit the form.

Add Fields: Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for surveys in the Formplus builder. 

  • Edit the fields
  • Click on “Save”
  • Preview the form

  • Form Customization

With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 


  • Multiple Sharing Options

Formplus offers multiple form-sharing options, which enable you to easily share your evaluation survey with survey respondents. You can use the direct social media sharing buttons to share your form link to your organization’s social media pages.

You can send out your survey form as email invitations to your research subjects too. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Conducting evaluation research allows organizations to determine the effectiveness of their activities at different phases. This type of research can be carried out using qualitative and quantitative data collection methods including focus groups, observation, telephone and one-on-one interviews, and surveys. 

Online surveys created and administered via data collection platforms like Formplus make it easier for you to gather and process information during evaluation research. With Formplus multiple form sharing options, it is even easier for you to gather useful data from target markets.



Evaluation Research Methods

  • Elliot Stern - FAcSS, Emeritus Professor of Evaluation Research, University of Lancaster, UK

Description

This collection offers a complete guide to evaluation research methods. It is organized in four volumes.

Volume 1 focuses on foundation issues and includes sections on the rationale for evaluation, central methodological debates, the role of theory and applying values, criteria and standards.

Volume 2 examines explaining through evaluation and covers sections on experimentation and causal inference, outcomes and inputs, socio-economic indicators, economics and cost benefit approaches and realist methods.

Volume 3 addresses qualitative methods and includes sections on case studies, responsive, developmental and accompanying evaluation, participation and empowerment, constructivism and postmodernism and multi-criteria and classificatory methods.

Volume 4 concentrates on evaluation to improve policy with sections on performance management, systematic reviews, institutionalization and utilization and policy learning and design.

The collection offers a unique and unparalleled guide to this rapidly expanding research method. It demonstrates how method and theory are applied in policy and strategy and will be an invaluable addition to any social science library.

Elliot Stern is the editor of Evaluation: The International Journal of Theory, Research and Practice, and works as an independent consultant. He was previously Principal Advisor for evaluation studies at the Tavistock Institute, London.

VOLUME ONE

PART ONE: FOUNDATIONS ISSUES IN EVALUATION

SECTION ONE: TYPOLOGIES AND PARADIGMS

Michael Scriven, The Logic of Evaluation and Evaluation Practice

George Julnes and Melvin M Mark, Evaluation as Sensemaking: Knowledge Construction in a Realist World


Research Evaluation

  • First Online: 23 June 2020


  • Carlo Ghezzi


This chapter is about research evaluation. Evaluation is quintessential to research. It is traditionally performed through qualitative expert judgement. The chapter presents the main evaluation activities in which researchers can be engaged. It also introduces the current efforts towards devising quantitative research evaluation based on bibliometric indicators and critically discusses their limitations, along with their possible (limited and careful) use.
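As one concrete illustration of the kind of bibliometric indicator the chapter critiques, the sketch below computes an h-index from a list of citation counts. The counts are hypothetical, and the chapter’s point is that such numbers have serious limitations and should be used carefully alongside, not instead of, expert judgement.

```python
# Minimal sketch: computing the h-index from hypothetical citation counts.
# h is the largest number such that h papers have at least h citations each.
def h_index(citations):
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 12, 8, 5, 4, 2, 1, 0]))  # -> 4
```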





Author information

Carlo Ghezzi, Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy


About this chapter

Ghezzi, C. (2020). Research Evaluation. In: Being a Researcher. Springer, Cham. https://doi.org/10.1007/978-3-030-45157-8_5



What is Evaluation Research? + [Methods & Examples]

by Emily Taylor

Posted at: 2/20/2023 1:30 PM

Every business and organization has goals. 

But, how do you know if the time, money, and resources spent on strategies to achieve these goals are working?

Or, if they’re even worth it? 

Evaluation research is a great way to answer these common questions as it measures how effective a specific program or strategy is.

In this post, we’ll cover what evaluation research is, how to conduct it, the benefits of doing so, and more.

Article Contents

  • Definition of evaluation research
  • The purpose of program evaluation
  • Evaluation research advantages and disadvantages
  • Research evaluation methods
  • Examples and types of evaluation research
  • Evaluation research questions

Evaluation Research: Definition

Evaluation research, also known as program evaluation, is a systematic analysis that evaluates whether a program or strategy is worth the effort, time, money, and resources spent to achieve a goal. 

Based on the project’s objectives, the study may target different audiences such as: 

  • Stakeholders
  • Prospective customers
  • Board members

The feedback gathered from program evaluation research is used to validate whether something should continue or be changed in any way to better meet organizational goals.  


The Purpose of Program Evaluation

The main purpose of evaluation research is to understand whether or not a process or strategy has delivered the desired results. 

It is especially helpful when launching new products, services, or concepts.

That’s because research program evaluation allows you to gather feedback from target audiences to learn what is working and what needs improvement. 

It is a vehicle for hearing people’s experiences with your new concept to gauge whether it is the right fit for the intended audience.

And with data-driven companies being 23 times more likely to acquire customers, it seems like a no-brainer.


As a result of evaluation research, organizations can better build a program or solution that provides audiences with exactly what they need.

Better yet, it’s done without wasting time and money figuring out new iterations before landing on the final product.

Evaluation Research Advantages & Disadvantages

In this section, our market research company dives more into the benefits and drawbacks of conducting research evaluation methods.

Understanding these pros and cons will help determine if it’s right for your business.

Advantages of Evaluation Research

In many instances, the pros of program evaluation outweigh the cons.

It is an effective tool for data-driven decision-making and sets organizations on a clear path to success.

Here are just a few of the many benefits of conducting research evaluation methods.

Justifies the time, money, and resources spent

First, evaluation research helps justify all of the resources spent on a program or strategy. 

Without evaluation research, it can be difficult to promote the continuation of a costly or time-intensive activity with no evidence it’s working. 

Rather than relying on opinions and gut reactions about the effectiveness of a program or strategy, evaluation research measures levels of effectiveness through data collected. 

Identifies unknown negative or positive impacts of a strategy

Second, program research helps users better understand how projects are carried out, who helps them come to fruition, who is affected, and more. 

These finer details shed light on how a program or strategy affects all facets of an organization.

As a result, you may learn there are unrealized effects that surprise you and your decision-makers.

Helps organizations improve

The research can highlight areas of strengths (i.e., factors of the program/strategy that should not be changed) and weaknesses (i.e., factors of the programs/strategy that could be improved).

Disadvantages of Evaluation Research

Despite its many advantages, there are still limitations and drawbacks to evaluation research.

Here are a few challenges to keep in mind before moving forward.

It can be costly

The cost of market research varies based on methodology, audience type, incentives, and more.

For instance, a focus group will be more expensive than an online survey.

Though, I’ll also make the argument that conducting evaluation research can save brands money down the line from investing in something that is a dud.

Poor memory recall

Many research evaluation methods are dependent on feedback from customers, employees, and other audiences. 

If the study is not conducted right after a process or strategy is implemented, it can be harder for these audiences to remember their true opinions and feelings on the matter.

Therefore, the data might be less accurate because of the lapse in time and memory.

Research Evaluation Methods

Evaluation research can include a mix of qualitative and quantitative methods depending on your objectives. 

A market research company , like Drive Research , can design an approach to best meet your goals, objectives, and budget for a successful study.

Below we share different approaches to evaluation research.

The main differences between qualitative and quantitative research methodologies are outlined in the sections that follow.

Quantitative Research Methods

Quantitative evaluation research aims to measure audience feedback.

Metrics quantitative market research often measures include:

  • Level of impact
  • Level of awareness
  • Level of satisfaction
  • Level of perception
  • Expected usage
  • Usage of competitors

These sit alongside other metrics used to gauge the success of a program or strategy.

This type of evaluation research can be done through online surveys or phone surveys. 
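As a simple illustration of how such survey metrics might be summarized, the sketch below turns hypothetical 5-point satisfaction ratings into a mean score and a “top-two-box” percentage. The ratings are invented and the thresholds are assumptions, not Drive Research’s actual reporting method.

```python
# Minimal sketch: summarizing hypothetical 5-point satisfaction ratings.
ratings = [5, 4, 3, 5, 2, 4, 4, 5, 3, 4, 5, 1, 4, 5, 3]

mean_score = sum(ratings) / len(ratings)
top_two_box = sum(1 for r in ratings if r >= 4) / len(ratings)  # rated 4 or 5

print(f"Mean satisfaction: {mean_score:.2f} / 5")
print(f"Top-two-box score: {top_two_box:.0%}")
```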

Online surveys

Perhaps the most common form of quantitative research , online surveys are extremely effective for gathering feedback. 

They are commonly used for evaluation research because they offer quick, cost-effective, and actionable insights.

Typically, the survey is conducted by a third-party online survey company to ensure anonymity and limit bias from the respondents. 

The market research firm develops the survey, conducts fieldwork, and creates a report based on the results.

For instance, here is the online survey process followed by Drive Research when conducting program evaluations for our clients.

[Image: online survey process by Drive Research]

Phone surveys

Another way to conduct evaluation research is with phone surveys.

This type of market research allows trained interviewers to have one-on-one conversations with your target audience. 

Oftentimes they are 15- to 30-minute discussions, long enough to gather sufficient information and feedback.

The benefit of phone surveys for program evaluation research is that interviewers can ask respondents to explain their answers in more detail. 

An online survey, by contrast, is limited to multiple-choice questions with predetermined answer options (plus a few open-ended questions).

Online surveys, though, are much faster and more cost-effective to complete.

Recommended Reading: What is the Most Cost-Effective Market Research Methodology?

Qualitative Research Methods

Qualitative evaluation research aims to explore audience feedback.

Factors that qualitative market research often evaluates include:

  • Areas of satisfaction
  • Areas of weakness
  • Recommendations

This type of exploratory evaluation research can be completed through in-depth interviews or focus groups.

It involves working with a qualitative recruiting company to recruit specific types of people for the research, developing a specific line of questioning, and then summarizing the results to ensure anonymity.

For instance, here is the process Drive Research follows when recruiting people to participate in evaluation research. 

[Image: qualitative recruitment process by Drive Research]

Focus groups

If you are considering conducting qualitative evaluation research, it’s likely that focus groups are your top methodology of choice.

Focus groups are a great way to collect feedback from targeted audiences all at once.

It is also a helpful methodology for showing product mockups, logo designs, commercials, and more.

A great alternative to traditional focus groups, though, is online focus groups.

Remote focus groups can reduce the cost of evaluation research because they eliminate many of the fees associated with in-person groups.

For instance, there are no facility rental fees.

Plus, recruiting participants is cheaper because you can cast a wider net; people can join an online focus group from anywhere in the country.


In-depth interviews (IDIs)

Similar to focus groups, in-depth interviews gather tremendous amounts of information and feedback from target consumers. 

In this setting though, interviewers speak with participants one-on-one, rather than in a group. 

This level of attention allows interviewers to expand on more areas of what satisfies and dissatisfies someone about a product, service, or program. 

Additionally, it eliminates group bias in evaluation research.

This is because participants are more comfortable providing honest opinions without being intimidated by others in a focus group.

Examples and Types of Evaluation Research

There are different types of evaluation research based on the business and audience type.

Most commonly it is carried out for product concepts, marketing strategies, and programs.

We share a few examples of each below.

Product Evaluation Research Example 

Each year, 95 percent of new products introduced to the market fail. 


Therefore, market research for new product development is critical for identifying what could undermine a concept's success before it reaches shelves.

Lego is a great example of a brand using evaluation research for new product concepts.

In 2011 they learned 90% of their buyers were boys. 

Although boys were not their sole target demographic, the brand had more products that were appealing to this audience such as Star Wars and superheroes. 

To grow its audience, Lego conducted evaluation research to determine what topics and themes would entice female buyers.

With this insight, Lego launched Lego Friends. It included more details and features girls were looking for. 

Marketing Evaluation Research Example 

Marketing evaluation research, also known as campaign evaluation surveys, is a technique used to measure the effectiveness of advertising and marketing strategies.

An example of this would be surveying a target audience before and after launching a paid social media campaign. 

Brands can determine if factors such as awareness, perception, and likelihood to purchase have changed due to the advertisements. 
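As a rough sketch of how such a pre/post comparison might be quantified, the snippet below tests whether aided awareness changed between two survey waves using a two-proportion z-test; the counts are invented for illustration and are not from any real campaign.

```python
from math import sqrt
from statistics import NormalDist

# Invented counts: respondents who recognized the brand before and after the campaign.
aware_pre, n_pre = 120, 400
aware_post, n_post = 168, 400

p_pre, p_post = aware_pre / n_pre, aware_post / n_post
p_pool = (aware_pre + aware_post) / (n_pre + n_post)           # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n_pre + 1 / n_post))    # standard error under the null
z = (p_post - p_pre) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))                   # two-sided p-value

print(f"Awareness lift: {p_post - p_pre:+.1%} (z = {z:.2f}, p = {p_value:.4f})")
```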

Recommended Reading: Advertising Testing with Market Research

Process Evaluation Research Example

Process evaluations are commonly used to understand the implementation of a new program.

They help decision-makers evaluate how a program's goal or outcome was achieved.

Additionally, process evaluation research quantifies how often the program was used, who benefited from the program, the resources used to implement the new process, any problems encountered, and more.

Examples of programs and processes where evaluation research is beneficial are:

  • Customer loyalty programs
  • Client referral programs
  • Customer retention programs
  • Workplace wellness programs
  • Orientation of new employees
  • Employee buddy programs 

Evaluation Research Questions

Evaluation research design sets the tone for a successful study.

It is important to ask the right questions in order to achieve the intended results. 

Product evaluation research questions include:

  • How appealing is the following product concept?
  • If available in a store near you, how likely are you to purchase [product]?
  • Which of the following packaging types do you prefer?
  • Which of the following [colors, flavors, sizes, etc.] would you be most interested in purchasing?

Marketing evaluation research questions include:

  • Please rate your level of awareness for [Brand].
  • What is your perception of [Brand]?
  • Do you remember seeing advertisements for [Brand] in the past 3 months?
  • Where did you see or hear advertising for [Brand] (e.g., Facebook, TV, radio)?
  • How likely are you to make a purchase from [Brand]?

Process evaluation research questions include:

  • Please rate your level of satisfaction with [Process].
  • Please explain why you provided [Rating].
  • What barriers existed to implementing [Process]?
  • How likely are you to use [Process] moving forward?
  • Please rate your level of agreement with the following statement: I find a lot of value in [Process].

While these are great examples of what evaluation research questions to ask, keep in mind they should be reflective of your unique goals and objectives. 

Our evaluation research company can help design, program, field, and analyze your survey to ensure you are using quality data to drive decision-making.

Contact Our Evaluation Research Company

Wondering if continuing an employee or customer program is still offering value to your organization? Or, perhaps you need to determine if a new product concept is working as effectively as it should be. Evaluation research can help achieve these objectives and plenty of others. 

Drive Research is a full-service market research company specializing in evaluation research through surveys, focus groups, and IDIs.  Contact our team by filling out the form below or emailing [email protected] .




Evaluation Methodologies and M&E Methods

This article provides an overview and comparison of the different types of evaluation methodologies used to assess the performance, effectiveness, quality, or impact of services, programs, and policies. There are several methodologies, both qualitative and quantitative, including surveys, interviews, observations, case studies, focus groups, and more. In this article, we discuss the most commonly used qualitative and quantitative evaluation methodologies in the M&E field.

Table of Contents

  • Introduction to Evaluation Methodologies: Definition and Importance
  • Types of Evaluation Methodologies: Overview and Comparison
  • Program Evaluation methodologies
  • Qualitative Methodologies in Monitoring and Evaluation (M&E)
  • Quantitative Methodologies in Monitoring and Evaluation (M&E)
  • What are the M&E Methods?
  • Difference Between Evaluation Methodologies and M&E Methods
  • Choosing the Right Evaluation Methodology: Factors and Criteria
  • Our Conclusion on Evaluation Methodologies

1. Introduction to Evaluation Methodologies: Definition and Importance

Evaluation methodologies are the methods and techniques used to measure the performance, effectiveness, quality, or impact of various interventions, services, programs, and policies. Evaluation is essential for decision-making, improvement, and innovation, as it helps stakeholders identify strengths, weaknesses, opportunities, and threats and make informed decisions to improve the effectiveness and efficiency of their operations.

Evaluation methodologies can be used in various fields and industries, such as healthcare, education, business, social services, and public policy. The choice of evaluation methodology depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation.

The importance of evaluation methodologies lies in their ability to provide evidence-based insights into the performance and impact of the subject being evaluated. This information can be used to guide decision-making, policy development, program improvement, and innovation. By using evaluation methodologies, stakeholders can assess the effectiveness of their operations and make data-driven decisions to improve their outcomes.

Overall, understanding evaluation methodologies is crucial for individuals and organizations seeking to enhance their performance, effectiveness, and impact. By selecting the appropriate evaluation methodology and conducting a thorough evaluation, stakeholders can gain valuable insights and make informed decisions to improve their operations and achieve their goals.

2. Types of Evaluation Methodologies: Overview and Comparison

Evaluation methodologies can be categorized into two main types based on the type of data they collect: qualitative and quantitative. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. Here is an overview and comparison of the main differences between qualitative and quantitative evaluation methodologies:

Qualitative Evaluation Methodologies:

  • Collect non-numerical data, such as words, images, or observations.
  • Focus on exploring complex phenomena, such as attitudes, perceptions, and behaviors, and understanding the meaning and context behind them.
  • Use techniques such as interviews, observations, case studies, and focus groups to collect data.
  • Emphasize the subjective nature of the data and the importance of the researcher’s interpretation and analysis.
  • Provide rich and detailed insights into people’s experiences and perspectives.
  • Limitations include potential bias from the researcher, limited generalizability of findings, and challenges in analyzing and synthesizing the data.

Quantitative Evaluation Methodologies:

  • Collect numerical data that can be analyzed statistically.
  • Focus on measuring specific variables and relationships between them, such as the effectiveness of an intervention or the correlation between two factors.
  • Use techniques such as surveys and experimental designs to collect data.
  • Emphasize the objectivity of the data and the importance of minimizing bias and variability.
  • Provide precise and measurable data that can be compared and analyzed statistically.
  • Limitations include potential oversimplification of complex phenomena, limited contextual information, and challenges in collecting and analyzing data.

Choosing between qualitative and quantitative evaluation methodologies depends on the specific goals of the evaluation, the type and level of data required, and the resources available for conducting the evaluation. Some evaluations may use a mixed-methods approach that combines both qualitative and quantitative data collection and analysis techniques to provide a more comprehensive understanding of the subject being evaluated.

3. Program evaluation methodologies

Program evaluation methodologies encompass a diverse set of approaches and techniques used to assess the effectiveness, efficiency, and impact of programs and interventions. These methodologies provide systematic frameworks for collecting, analyzing, and interpreting data to determine the extent to which program objectives are being met and to identify areas for improvement. Common program evaluation methodologies include quantitative methods such as experimental designs, quasi-experimental designs, and surveys, as well as qualitative approaches like interviews, focus groups, and case studies.

Each methodology offers unique advantages and limitations depending on the nature of the program being evaluated, the available resources, and the research questions at hand. By employing rigorous program evaluation methodologies, organizations can make informed decisions, enhance program effectiveness, and maximize the use of resources to achieve desired outcomes.


4. Qualitative Methodologies in Monitoring and Evaluation (M&E)

Qualitative methodologies are increasingly being used in monitoring and evaluation (M&E) to provide a more comprehensive understanding of the impact and effectiveness of programs and interventions. Qualitative methodologies can help to explore the underlying reasons and contexts that contribute to program outcomes and identify areas for improvement. Here are some common qualitative methodologies used in M&E:

Interviews

Interviews involve one-on-one or group discussions with stakeholders to collect data on their experiences, perspectives, and perceptions. Interviews can provide rich and detailed data on the effectiveness of a program, the factors that contribute to its success or failure, and the ways in which it can be improved.

Observations

Observations involve the systematic and objective recording of behaviors and interactions of stakeholders in a natural setting. Observations can help to identify patterns of behavior, the effectiveness of program interventions, and the ways in which they can be improved.

Document review

Document review involves the analysis of program documents, such as reports, policies, and procedures, to understand the program context, design, and implementation. Document review can help to identify gaps in program design or implementation and suggest ways in which they can be improved.

Participatory Rural Appraisal (PRA)

PRA is a participatory approach that involves working with communities to identify and analyze their own problems and challenges. It involves using participatory techniques such as mapping, focus group discussions, and transect walks to collect data on community perspectives, experiences, and priorities. PRA can help ensure that the evaluation is community-driven and culturally appropriate, and can provide valuable insights into the social and cultural factors that influence program outcomes.

Key Informant Interviews

Key informant interviews are in-depth, open-ended interviews with individuals who have expert knowledge or experience related to the program or issue being evaluated. Key informants can include program staff, community leaders, or other stakeholders. These interviews can provide valuable insights into program implementation and effectiveness, and can help identify areas for improvement.

Ethnography

Ethnography is a qualitative method that involves observing and immersing oneself in a community or culture to understand their perspectives, values, and behaviors. Ethnographic methods can include participant observation, interviews, and document analysis, among others. Ethnography can provide a more holistic understanding of program outcomes and impacts, as well as the broader social context in which the program operates.

Focus Group Discussions

Focus group discussions involve bringing together a small group of individuals to discuss a specific topic or issue related to the program. Focus group discussions can be used to gather qualitative data on program implementation, participant experiences, and program outcomes. They can also provide insights into the diversity of perspectives within a community or stakeholder group.

Photovoice

Photovoice is a qualitative method that involves using photography as a tool for community empowerment and self-expression. Participants are given cameras and asked to take photos that represent their experiences or perspectives on a program or issue. These photos can then be used to facilitate group discussions and generate qualitative data on program outcomes and impacts.

Case Studies

Case studies involve gathering detailed qualitative data through interviews, document analysis, and observation, and can provide a more in-depth understanding of a specific program component. They can be used to explore the experiences and perspectives of program participants or stakeholders and can provide insights into program outcomes and impacts.

Qualitative methodologies in M&E are useful for identifying complex and context-dependent factors that contribute to program outcomes, and for exploring stakeholder perspectives and experiences. Qualitative methodologies can provide valuable insights into the ways in which programs can be improved and can complement quantitative methodologies in providing a comprehensive understanding of program impact and effectiveness.

5. Quantitative Methodologies in Monitoring and Evaluation (M&E)

Quantitative methodologies are commonly used in monitoring and evaluation (M&E) to measure program outcomes and impact in a systematic and objective manner. Quantitative methodologies involve collecting numerical data that can be analyzed statistically to provide insights into program effectiveness, efficiency, and impact. Here are some common quantitative methodologies used in M&E:

Surveys

Surveys involve collecting data from a large number of individuals using standardized questionnaires or surveys. Surveys can provide quantitative data on people’s attitudes, opinions, behaviors, and experiences, and can help to measure program outcomes and impact.

Baseline and Endline Surveys

Baseline and endline surveys are quantitative surveys conducted at the beginning and end of a program to measure changes in knowledge, attitudes, behaviors, or other outcomes. These surveys can provide a snapshot of program impact and allow for comparisons between pre- and post-program data.
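For illustration only, here is a minimal sketch of a baseline/endline comparison in Python, assuming the same respondents completed a knowledge quiz in both waves; the scores are made up and SciPy is assumed to be available.

```python
from scipy import stats

# Invented quiz scores for the same ten respondents at baseline and endline.
baseline = [52, 60, 47, 55, 63, 58, 50, 61, 57, 54]
endline  = [61, 66, 55, 60, 70, 64, 58, 68, 63, 59]

mean_change = sum(e - b for b, e in zip(baseline, endline)) / len(baseline)
t_stat, p_value = stats.ttest_rel(endline, baseline)   # paired-samples t-test

print(f"Average change: {mean_change:+.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")
```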

Randomized Controlled Trials (RCTs)

RCTs are a rigorous quantitative evaluation method that involves randomly assigning participants to a treatment group (receiving the program) and a control group (not receiving the program), and comparing outcomes between the two groups. RCTs are often used to assess the impact of a program.
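A minimal sketch of the analysis step of an RCT, comparing outcomes between the two groups with a two-sample t-test; the outcome scores are invented and SciPy is assumed to be available.

```python
from scipy import stats

# Invented outcome scores for randomly assigned treatment and control groups.
treatment = [72, 68, 75, 80, 77, 69, 74, 81, 70, 76]
control   = [65, 70, 62, 68, 66, 71, 64, 67, 69, 63]

effect = sum(treatment) / len(treatment) - sum(control) / len(control)
t_stat, p_value = stats.ttest_ind(treatment, control)   # two-sample t-test

print(f"Estimated impact: {effect:.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")
```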

Cost-Benefit Analysis

Cost-benefit analysis is a quantitative method used to assess the economic efficiency of a program or intervention. It involves comparing the costs of the program with the benefits or outcomes generated, and can help determine whether a program is cost-effective or not.
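As a simple illustration, the sketch below discounts a hypothetical stream of program costs and benefits to present value and reports the net benefit and benefit-cost ratio; all figures and the 5% discount rate are assumptions, not recommendations.

```python
def npv(cash_flows, rate):
    """Net present value of year-indexed cash flows (year 0 = today)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Invented figures: an up-front investment followed by annual operating costs and benefits.
annual_benefits = [0, 40_000, 55_000, 60_000]
annual_costs = [80_000, 10_000, 10_000, 10_000]
discount_rate = 0.05   # assumed discount rate

pv_benefits = npv(annual_benefits, discount_rate)
pv_costs = npv(annual_costs, discount_rate)

print(f"Net benefit: {pv_benefits - pv_costs:,.0f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")   # above 1.0 suggests benefits exceed costs
```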

Performance Indicators

Performance indicators are quantitative measures used to track progress toward program goals and objectives. These indicators can be used to assess program effectiveness, efficiency, and impact, and can provide regular feedback on program performance.
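A minimal sketch of how performance indicators might be tracked against targets in code; the indicator names, targets, and the 90% "on track" threshold are all hypothetical.

```python
# Hypothetical indicators with targets and actual values for the reporting period.
indicators = {
    "households reached": {"target": 5_000, "actual": 4_250},
    "staff trained": {"target": 120, "actual": 130},
    "referrals completed": {"target": 800, "actual": 560},
}

for name, values in indicators.items():
    achievement = values["actual"] / values["target"]
    status = "on track" if achievement >= 0.9 else "needs attention"   # 90% threshold is arbitrary
    print(f"{name}: {achievement:.0%} of target ({status})")
```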

Statistical Analysis

Statistical analysis involves using quantitative data and statistical methods to analyze data gathered from various evaluation methods, such as surveys or observations. Statistical analysis can provide a more rigorous assessment of program outcomes and impacts and help identify patterns or relationships between variables.
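For example, a very small statistical analysis might check whether program attendance is associated with outcomes; the data below are invented and SciPy is assumed to be available.

```python
from scipy import stats

# Invented data: sessions attended and an outcome score for ten participants.
sessions_attended = [2, 5, 8, 3, 10, 6, 7, 1, 9, 4]
outcome_scores = [55, 62, 74, 58, 81, 70, 69, 50, 78, 61]

r, p_value = stats.pearsonr(sessions_attended, outcome_scores)   # correlation coefficient
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```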

Experimental designs

Experimental designs involve manipulating one or more variables and measuring the effects of the manipulation on the outcome of interest. Experimental designs are useful for establishing cause-and-effect relationships between variables, and can help to measure the effectiveness of program interventions.

Quantitative methodologies in M&E are useful for providing objective and measurable data on program outcomes and impact, and for identifying patterns and trends in program performance. Quantitative methodologies can provide valuable insights into the effectiveness, efficiency, and impact of programs, and can complement qualitative methodologies in providing a comprehensive understanding of program performance.

6. What are the M&E Methods?

Monitoring and Evaluation (M&E) methods encompass the tools, techniques, and processes used to assess the performance of projects, programs, or policies.

These methods are essential in determining whether the objectives are being met, understanding the impact of interventions, and guiding decision-making for future improvements. M&E methods fall into two broad categories: qualitative and quantitative, often used in combination for a comprehensive evaluation.

7. Choosing the Right Evaluation Methodology: Factors and Criteria

Choosing the right evaluation methodology is essential for conducting an effective and meaningful evaluation. Here are some factors and criteria to consider when selecting an appropriate evaluation methodology:

  • Evaluation goals and objectives: The evaluation goals and objectives should guide the selection of an appropriate methodology. For example, if the goal is to explore stakeholders’ perspectives and experiences, qualitative methodologies such as interviews or focus groups may be more appropriate. If the goal is to measure program outcomes and impact, quantitative methodologies such as surveys or experimental designs may be more appropriate.
  • Type of data required: The type of data required for the evaluation should also guide the selection of the methodology. Qualitative methodologies collect non-numerical data, such as words, images, or observations, while quantitative methodologies collect numerical data that can be analyzed statistically. The type of data required will depend on the evaluation goals and objectives.
  • Resources available: The resources available, such as time, budget, and expertise, can also influence the selection of an appropriate methodology. Some methodologies may require more resources, such as specialized expertise or equipment, while others may be more cost-effective and easier to implement.
  • Accessibility of the subject being evaluated: The accessibility of the subject being evaluated, such as the availability of stakeholders or data, can also influence the selection of an appropriate methodology. For example, if stakeholders are geographically dispersed, remote data collection methods such as online surveys or video conferencing may be more appropriate.
  • Ethical considerations: Ethical considerations, such as ensuring the privacy and confidentiality of stakeholders, should also be taken into account when selecting an appropriate methodology. Some methodologies, such as interviews or focus groups, may require more attention to ethical considerations than others.

Overall, choosing the right evaluation methodology depends on a variety of factors and criteria, including the evaluation goals and objectives, the type of data required, the resources available, the accessibility of the subject being evaluated, and ethical considerations. Selecting an appropriate methodology can ensure that the evaluation is effective, meaningful, and provides valuable insights into program performance and impact.

8. Our Conclusion on Evaluation Methodologies

It’s worth noting that many evaluation methodologies use a combination of quantitative and qualitative methods to provide a more comprehensive understanding of program outcomes and impacts. Both qualitative and quantitative methodologies are essential in providing insights into program performance and effectiveness.

Qualitative methodologies focus on gathering data on the experiences, perspectives, and attitudes of individuals or communities involved in a program, providing a deeper understanding of the social and cultural factors that influence program outcomes. In contrast, quantitative methodologies focus on collecting numerical data on program performance and impact, providing more rigorous evidence of program effectiveness and efficiency.

Each methodology has its strengths and limitations, and a combination of both qualitative and quantitative approaches is often the most effective in providing a comprehensive understanding of program outcomes and impact. When designing an M&E plan, it is crucial to consider the program’s objectives, context, and stakeholders to select the most appropriate methodologies.

Overall, effective M&E practices require a systematic and continuous approach to data collection, analysis, and reporting. With the right combination of qualitative and quantitative methodologies, M&E can provide valuable insights into program performance, progress, and impact, enabling informed decision-making and resource allocation, ultimately leading to more successful and impactful programs.


The Federal Evaluation Toolkit BETA

Evaluation 101

What is evaluation? How can it help me do my job better? Evaluation 101 provides resources to help you answer those questions and more. You will learn about program evaluation and why it is needed, along with some helpful frameworks that place evaluation in the broader evidence context. Other resources provide helpful overviews of specific types of evaluation you may encounter or be considering, including implementation, outcome, and impact evaluations, and rapid cycle approaches.

What is Evaluation?

Heard the term "evaluation," but are still not quite sure what that means? These resources help you answer the question, "what is evaluation?," and learn more about how evaluation fits into a broader evidence-building framework.

What is Program Evaluation? A Beginner's Guide

Program evaluation uses systematic data collection to help us understand whether programs, policies, or organizations are effective. This guide explains how program evaluation can contribute to improving program services. It provides a high-level, easy-to-read overview of program evaluation from start (planning and evaluation design) to finish (dissemination), and includes links to additional resources.

Types of Evaluation

What's the difference between an impact evaluation and an implementation evaluation? What does each type of evaluation tell us? Use these resources to learn more about the different types of evaluation, what they are, how they are used, and what types of evaluation questions they answer.

Common Framework for Research and Evaluation

(The Administration for Children & Families Common Framework for Research and Evaluation, OPRE Report #2016-14. Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/sites/default/files/documents/opre/acf_common_framework_for_research_and_evaluation_v02_a.pdf)

Building evidence is not one-size-fits all, and different questions require different methods and approaches. The Administration for Children & Families Common Framework for Research and Evaluation describes, in detail, six different types of research and evaluation approaches – foundational descriptive studies, exploratory descriptive studies, design and development studies, efficacy studies, effectiveness studies, and scale-up studies – and can help you understand which type of evaluation might be most useful for you and your information needs.

Formative Evaluation Toolkit

(Formative evaluation toolkit: A step-by-step guide and resources for evaluating program implementation and early outcomes. Washington, DC: Children’s Bureau, Administration for Children and Families, U.S. Department of Health and Human Services.)

Formative evaluation can help determine whether an intervention or program is being implemented as intended and producing the expected outputs and short-term outcomes. This toolkit outlines the steps involved in conducting a formative evaluation and includes multiple planning tools, references, and a glossary. Check out the overview to learn more about how this resource can help you.

Introduction to Randomized Evaluations

Randomized evaluations, also known as randomized controlled trials (RCTs), are one of the most rigorous evaluation methods used to conduct impact evaluations to determine the extent to which your program, policy, or initiative caused the outcomes you see. They use random assignment of people/organizations/communities affected by the program or policy to rule out other factors that might have caused the changes your program or policy was designed to achieve. This in-depth resource introduces randomized evaluations in a non-technical way, provides examples of RCTs in practice, describes when RCTs might be the right approach, and offers a thorough FAQ about RCTs.
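To make the random-assignment idea concrete, here is a minimal sketch (not drawn from the resource above) in which hypothetical sites are randomly split into a treatment group and a control group.

```python
import random

# Hypothetical list of enrolled sites; in practice this would come from program records.
participants = [f"site_{i:03d}" for i in range(1, 21)]

random.seed(42)                 # fixed seed so the assignment can be reproduced and audited
random.shuffle(participants)

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]   # offered the program
control_group = participants[midpoint:]     # business as usual

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```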

Rapid Cycle Evaluation at a Glance

(Rapid Cycle Evaluation at a Glance, OPRE Report #2020-152. Office of Planning, Research, and Evaluation, U.S. Department of Health and Human Services. https://www.acf.hhs.gov/opre/report/rapid-cycle-evaluation-glance)

Rapid Cycle Evaluation (RCE) can be used to efficiently assess implementation and inform program improvement. This brief provides an introduction to RCE, describing what it is, how it compares to other methods, when and how to use it, and includes more in-depth resources. Use this brief to help you figure out whether RCE makes sense for your program.


Research Methods Knowledge Base

Introduction to Evaluation


Evaluation is a methodological area that is closely related to, but distinguishable from more traditional social research. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders and other skills that social research in general does not rely on as much. Here we introduce the idea of evaluation and some of the major terms and issues in the field.

Definitions of Evaluation

Probably the most frequently given definition is:

Evaluation is the systematic assessment of the worth or merit of some object

This definition is hardly perfect. There are many types of evaluations that do not necessarily result in an assessment of worth or merit – descriptive studies, implementation analyses, and formative evaluations, to name a few. Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation. For instance, one might say:

Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object

Both definitions agree that evaluation is a systematic endeavor and both use the deliberately ambiguous term ‘object’ which could refer to a program, policy, technology, person, need, activity, and so on. The latter definition emphasizes acquiring and assessing information rather than assessing worth or merit because all evaluation work involves collecting and sifting through data, making judgements about the validity of the information and of inferences we derive from it, whether or not an assessment of worth or merit results.

The Goals of Evaluation

The generic goal of most evaluations is to provide “useful feedback” to a variety of audiences including sponsors, donors, client-groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as “useful” if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one – studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

Evaluation Strategies

‘Evaluation strategies’ means broad, overarching perspectives on evaluation. They encompass the most general groups or “camps” of evaluators; although, at its best, evaluation work borrows eclectically from the perspectives of all these camps. Four major groups of evaluation strategies are discussed here.

Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences – especially the social sciences – they prioritize on the desirability of impartiality, accuracy, objectivity and the validity of the information generated. Included under scientific-experimental models would be: the tradition of experimental and quasi-experimental designs; objectives-based research that comes from education; econometrically-oriented perspectives including cost-effectiveness and cost-benefit analysis; and the recent articulation of theory-driven evaluation.

The second class of strategies are management-oriented systems models. Two of the most common of these are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method. Both have been widely used in business and government in this country. It would also be legitimate to include the Logical Framework or “Logframe” model developed at U.S. Agency for International Development and general systems theory and operations research approaches in this category. Two management-oriented systems models were originated by evaluators: the UTOS model where U stands for Units, T for Treatments, O for Observing Observations and S for Settings; and the CIPP model where the C stands for Context, the I for Input, the first P for Process and the second P for Product. These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.

The third class of strategies are the qualitative/anthropological models. They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process. Included in this category are the approaches known in evaluation as naturalistic or ‘Fourth Generation’ evaluation; the various qualitative schools; critical theory and art criticism approaches; and, the ‘grounded theory’ approach of Glaser and Strauss among others.

Finally, a fourth class of strategies is termed participant-oriented models. As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology. Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems.

With all of these strategies to choose from, how to decide? Debates that rage within the evaluation profession – and they do rage – are generally battles between these different strategists, with each claiming the superiority of their position. In reality, most good evaluators are familiar with all four categories and borrow from each as the need arises. There is no inherent incompatibility between these broad strategies – each of them brings something valuable to the evaluation table. In fact, in recent years attention has increasingly turned to how one might integrate results from evaluations that use different strategies, carried out from different perspectives, and using different methods. Clearly, there are no simple answers here. The problems are complex and the methodologies needed will and should be varied.

Types of Evaluation

There are many different types of evaluations depending on the object being evaluated and the purpose of the evaluation. Perhaps the most important basic distinction in evaluation types is that between formative and summative evaluation. Formative evaluations strengthen or improve the object being evaluated – they help form it by examining the delivery of the program or technology, the quality of its implementation, and the assessment of the organizational context, personnel, procedures, inputs, and so on. Summative evaluations, in contrast, examine the effects or outcomes of some object – they summarize it by describing what happens subsequent to delivery of the program or technology; assessing whether the object can be said to have caused the outcome; determining the overall impact of the causal factor beyond only the immediate target outcomes; and, estimating the relative costs associated with the object.

Formative evaluation includes several evaluation types:

  • needs assessment determines who needs the program, how great the need is, and what might work to meet the need
  • evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
  • structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes
  • implementation evaluation monitors the fidelity of the program or technology delivery
  • process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Summative evaluation can also be subdivided:

  • outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
  • impact evaluation is broader and assesses the overall or net effects – intended or unintended – of the program or technology as a whole
  • cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
  • secondary analysis reexamines existing data to address new questions or use methods not previously employed
  • meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question
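To illustrate the meta-analysis idea in the last bullet, here is a minimal sketch of fixed-effect, inverse-variance pooling of effect estimates from several studies; the estimates and standard errors are invented.

```python
from math import sqrt

# Invented effect estimates and standard errors from three studies of the same outcome.
studies = [
    {"effect": 0.30, "se": 0.10},
    {"effect": 0.18, "se": 0.08},
    {"effect": 0.42, "se": 0.15},
]

weights = [1 / s["se"] ** 2 for s in studies]   # inverse-variance (precision) weights
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```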

Evaluation Questions and Methods

Evaluators ask many different kinds of questions and use a variety of methods to address them. These are considered within the framework of formative and summative evaluation as presented above.

In formative research the major questions and methodologies are:

What is the definition and scope of the problem or issue, or what’s the question?

Formulating and conceptualizing methods might be used including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it?

The most common method used here is “needs assessment” which can include: analysis of existing data sources, and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

How should the program or technology be delivered to address the problem?

Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.

How well is the program or technology delivered?

Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

The questions and methods addressed under summative evaluation include:

What type of evaluation is feasible?

Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.

What was the effectiveness of the program or technology?

One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.

What is the net impact of the program?

Econometric methods for assessing cost effectiveness and cost/benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.

Clearly, this introduction is not meant to be exhaustive. Each of these methods, and the many not mentioned, are supported by an extensive methodological research literature. This is a formidable set of tools. But the need to improve, update and adapt these methods to changing circumstances means that methodological research and development needs to have a major place in evaluation work.



Introduction

  • Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide


  • What Is Program Evaluation?
  • Evaluation Supplements Other Types of Reflection and Data Collection
  • Distinguishing Principles of Research and Evaluation
  • Why Evaluate Public Health Programs?
  • CDC’s Framework for Program Evaluation in Public Health
  • How to Establish an Evaluation Team and Select a Lead Evaluator
  • Organization of This Manual

Most program managers assess the value and impact of their work all the time when they ask questions, consult partners, make assessments, and obtain feedback. They then use the information collected to improve the program. Indeed, such informal assessments fit nicely into a broad definition of evaluation as the “ examination of the worth, merit, or significance of an object. ” [4] And throughout this manual, the term “program” will be defined as “ any set of organized activities supported by a set of resources to achieve a specific and intended result. ” This definition is intentionally broad so that almost any organized public health action can be seen as a candidate for program evaluation:

  • Direct service interventions (e.g., a program that offers free breakfasts to improve nutrition for grade school children)
  • Community mobilization efforts (e.g., an effort to organize a boycott of California grapes to improve the economic well-being of farm workers)
  • Research initiatives (e.g., an effort to find out whether disparities in health outcomes based on race can be reduced)
  • Advocacy work (e.g., a campaign to influence the state legislature to pass legislation regarding tobacco control)
  • Training programs (e.g., a job training program to reduce unemployment in urban neighborhoods)

What distinguishes program evaluation from ongoing informal assessment is that program evaluation is conducted according to a set of guidelines. With that in mind, this manual defines program evaluation as “the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future program development.” [5] Program evaluation does not occur in a vacuum; rather, it is influenced by real-world constraints. Evaluation should be practical and feasible and conducted within the confines of resources, time, and political context. Moreover, it should serve a useful purpose, be conducted in an ethical manner, and produce accurate findings. Evaluation findings should be used both to make decisions about program implementation and to improve program effectiveness.

Many different questions can be part of a program evaluation, depending on how long the program has been in existence, who is asking the question, and why the information is needed.

In general, evaluation questions fall into these groups:

  • Implementation: Were your program’s activities put into place as originally intended?
  • Effectiveness: Is your program achieving the goals and objectives it was intended to accomplish?
  • Efficiency: Are your program’s activities being produced with appropriate use of resources such as budget and staff time?
  • Cost-Effectiveness: Does the value or benefit of achieving your program’s goals and objectives exceed the cost of producing them?
  • Attribution: Can progress on goals and objectives be shown to be related to your program, as opposed to other things that are going on at the same time?

All of these are appropriate evaluation questions and might be asked with the intention of documenting program progress, demonstrating accountability to funders and policymakers, or identifying ways to make the program better.

Planning asks, “What are we doing and what should we do to achieve our goals?” By providing information on progress toward organizational goals and identifying which parts of the program are working well and/or poorly, program evaluation sets up the discussion of what can be changed to help the program better meet its intended goals and objectives.

Increasingly, public health programs are accountable to funders, legislators, and the general public. Many programs do this by creating, monitoring, and reporting results for a small set of markers and milestones of program progress. Such “performance measures” are a type of evaluation—answering the question “How are we doing?” More importantly, when performance measures show significant or sudden changes in program performance, program evaluation efforts can be directed to the troubled areas to determine “Why are we doing poorly or well?”

Linking program performance to program budget is the final step in accountability. Called “activity-based budgeting” or “performance budgeting,” it requires an understanding of program components and the links between activities and intended outcomes. The early steps in the program evaluation approach (such as logic modeling) clarify these relationships, making the link between budget and performance easier and more apparent.

While the terms surveillance and evaluation are often used interchangeably, each makes a distinctive contribution to a program, and it is important to clarify their different purposes. Surveillance is the continuous monitoring or routine data collection on various factors (e.g., behaviors, attitudes, deaths) over a regular interval of time. Surveillance systems have existing resources and infrastructure. Data gathered by surveillance systems are invaluable for performance measurement and program evaluation, especially of longer term and population-based outcomes. In addition, these data serve an important function in program planning and “formative” evaluation by identifying key burden and risk factors—the descriptive and analytic epidemiology of the public health problem. There are limits, however, to how useful surveillance data can be for evaluators. For example, some surveillance systems such as the Behavioral Risk Factor Surveillance System (BRFSS), Youth Tobacco Survey (YTS), and Youth Risk Behavior Survey (YRBS) can measure changes in large populations, but have insufficient sample sizes to detect changes in outcomes for more targeted programs or interventions. Also, these surveillance systems may have limited flexibility to add questions for a particular program evaluation.

In the best of all worlds, surveillance and evaluation are companion processes that can be conducted simultaneously. Evaluation may supplement surveillance data by providing tailored information to answer specific questions about a program. Data from specific questions for an evaluation are more flexible than surveillance and may allow program areas to be assessed in greater depth. For example, a state may supplement surveillance information with detailed surveys to evaluate how well a program was implemented and the impact of a program on participants’ knowledge, attitudes, and behavior. Evaluators can also use qualitative methods (e.g., focus groups, semi-structured or open-ended interviews) to gain insight into the strengths and weaknesses of a particular program activity.

Both research and program evaluation make important contributions to the body of knowledge, but fundamental differences in the purpose of research and the purpose of evaluation mean that good program evaluation need not always follow an academic research model. Even though some of these differences have tended to break down as research tends toward increasingly participatory models [6] and some evaluations aspire to make statements about attribution, “pure” research and evaluation serve somewhat different purposes (see the “Distinguishing Principles of Research and Evaluation” table below), nicely summarized in the adage “Research seeks to prove; evaluation seeks to improve.” Academic research focuses primarily on testing hypotheses; a key purpose of program evaluation is to improve practice. Research is generally thought of as requiring a controlled environment or control groups. In field settings directed at prevention and control of a public health problem, this is seldom realistic. Of the ten concepts contrasted in the table, the last three are especially worth noting. Unlike pure academic research models, program evaluation acknowledges and incorporates differences in values and perspectives from the start, may address many questions besides attribution, and tends to produce results for varied audiences.

Distinguishing Principles of Research and Evaluation

Planning
  • Research: Scientific method (state hypothesis, collect data, analyze data, draw conclusions).
  • Program evaluation: Framework for program evaluation (engage stakeholders, describe the program, focus the evaluation design, gather credible evidence, justify conclusions, ensure use and share lessons learned).

Decision making
  • Research: Investigator-controlled; authoritative.
  • Program evaluation: Stakeholder-controlled; collaborative.

Standards
  • Research: Validity, both internal (accuracy, precision) and external (generalizability); repeatability.
  • Program evaluation: The program evaluation standards of utility, feasibility, propriety, and accuracy.

Questions
  • Research: Facts (descriptions, associations).
  • Program evaluation: Values, such as merit (i.e., quality), worth (i.e., value), and significance (i.e., importance).

Design
  • Research: Isolate changes and control circumstances: narrow experimental influences, ensure stability over time, minimize context dependence, treat contextual factors as confounding (e.g., randomization, adjustment, statistical control), and treat comparison groups as a necessity.
  • Program evaluation: Incorporate changes and account for circumstances: expand to see all domains of influence, encourage flexibility and improvement, maximize context sensitivity, treat contextual factors as essential information (e.g., system diagrams, logic models, hierarchical or ecological modeling), and treat comparison groups as optional (and sometimes harmful).

Data collection
  • Research: Limited number of sources (accuracy preferred); sampling strategies are critical; concern for protecting human subjects.
  • Program evaluation: Indicators/measures that are quantitative and qualitative, and multiple (triangulation preferred); mixed methods (qualitative, quantitative, and integrated); concern for protecting human subjects, organizations, and communities.

Analysis and synthesis
  • Research: One-time (at the end); focus on specific variables.
  • Program evaluation: Ongoing (formative and summative); integrate all data.

Judgments
  • Research: Attempt to remain value-free.
  • Program evaluation: Examine agreement on values; state precisely whose values are used.

Conclusions
  • Research: Attribution (establish time sequence, demonstrate plausible mechanisms, control for confounding, replicate findings).
  • Program evaluation: Attribution and contribution (account for alternative explanations, show similar effects in similar contexts).

Uses
  • Research: Disseminate to interested audiences; content and format varies to maximize comprehension.
  • Program evaluation: Feedback to stakeholders; focus on intended uses by intended users; build capacity; emphasis on full disclosure; requirement for balanced assessment.
Why evaluate public health programs? Common reasons include:

  • To monitor progress toward the program's goals
  • To determine whether program components are producing the desired progress on outcomes
  • To permit comparisons among groups, particularly among populations with disproportionately high risk factors and adverse health outcomes
  • To justify the need for further funding and support
  • To find opportunities for continuous quality improvement
  • To ensure that effective programs are maintained and resources are not wasted on ineffective programs

Program staff may be pushed to do evaluation by external mandates from funders, authorizers, or others, or they may be pulled to do evaluation by an internal need to determine how the program is performing and what can be improved. While push or pull can motivate a program to conduct good evaluations, program evaluation efforts are more likely to be sustained when staff see the results as useful information that can help them do their jobs better.

Data gathered during evaluation enable managers and staff to create the best possible programs, to learn from mistakes, to make modifications as needed, to monitor progress toward program goals, and to judge the success of the program in achieving its short-term, intermediate, and long-term outcomes. Most public health programs aim to change behavior in one or more target groups and to create an environment that reinforces sustained adoption of these changes, with the intention that changes in environments and behaviors will prevent and control diseases and injuries. Through evaluation, you can track these changes and, with careful evaluation designs, assess the effectiveness and impact of a particular program, intervention, or strategy in producing these changes.

Recognizing the importance of evaluation in public health practice and the need for appropriate methods, the World Health Organization (WHO) established the Working Group on Health Promotion Evaluation. The Working Group prepared a set of conclusions and related recommendations to guide policymakers and practitioners. [7] Recommendations immediately relevant to the evaluation of comprehensive public health programs include:

  • Encourage the adoption of participatory evaluation approaches that provide meaningful opportunities for involvement by all of those with a direct interest in initiatives (programs, policies, and other organized activities).
  • Require that a portion of total financial resources for a health promotion initiative (the Working Group recommends 10%) be allocated to evaluation.
  • Ensure that a mixture of process and outcome information is used to evaluate all health promotion initiatives.
  • Support the use of multiple methods to evaluate health promotion initiatives.
  • Support further research into the development of appropriate approaches to evaluating health promotion initiatives.
  • Support the establishment of a training and education infrastructure to develop expertise in the evaluation of health promotion initiatives.
  • Create and support opportunities for sharing information on evaluation methods used in health promotion through conferences, workshops, networks, and other means.

The figure presents the steps and standards of the CDC Evaluation Framework. The six steps are (1) engage stakeholders, (2) describe the program, (3) focus the evaluation and its design, (4) gather credible evidence, (5) justify conclusions, and (6) ensure use and share lessons learned.

Program evaluation is one of ten essential public health services [8] and a critical organizational practice in public health. [9] Until recently, however, there has been little agreement among public health officials on the principles and procedures for conducting such studies. In 1999, CDC published Framework for Program Evaluation in Public Health and some related recommendations. [10] The Framework, as depicted in Figure 1.1, defined six steps and four sets of standards for conducting good evaluations of public health programs.

The underlying logic of the Evaluation Framework is that good evaluation does not merely gather accurate evidence and draw valid conclusions, but produces results that are used to make a difference. To maximize the chances evaluation results will be used, you need to create a “market” before you create the “product”—the evaluation. You determine the market by focusing evaluations on questions that are most salient, relevant, and important. You ensure the best evaluation focus by understanding where the questions fit into the full landscape of your program description, and especially by ensuring that you have identified and engaged stakeholders who care about these questions and want to take action on the results.

The steps in the CDC Framework are informed by a set of standards for evaluation. [11] These standards do not constitute a way to do evaluation; rather, they serve to guide your choice from among the many options available at each step in the Framework. The 30 standards cluster into four groups:

Utility: Who needs the evaluation results? Will the evaluation provide relevant information in a timely manner for them?

Feasibility: Are the planned evaluation activities realistic given the time, resources, and expertise at hand?

Propriety: Does the evaluation protect the rights of individuals and protect the welfare of those involved? Does it engage those most directly affected by the program and changes in the program, such as participants or the surrounding community?

Accuracy: Will the evaluation produce findings that are valid and reliable, given the needs of those who will use the results?

Sometimes the standards broaden your exploration of choices. Often, they help reduce the options at each step to a manageable number. For example, in the step “Engaging Stakeholders,” the standards can help you think broadly about who constitutes a stakeholder for your program, but simultaneously can reduce the potential list to a manageable number by posing the following questions: ( Utility ) Who will use these results? ( Feasibility ) How much time and effort can be devoted to stakeholder engagement? ( Propriety ) To be ethical, which stakeholders need to be consulted, those served by the program or the community in which it operates? ( Accuracy ) How broadly do you need to engage stakeholders to paint an accurate picture of this program?

Similarly, there are unlimited ways to gather credible evidence (Step 4). Asking these same kinds of questions as you approach evidence gathering will help identify the approaches that will be most useful, feasible, proper, and accurate for this evaluation at this time. Thus, the CDC Framework approach supports the fundamental insight that there is no such thing as the right program evaluation. Rather, over the life of a program, any number of evaluations may be appropriate, depending on the situation.

Characteristics of a good evaluator:

  • Experience in the type of evaluation needed
  • Comfortable with quantitative data sources and analysis
  • Able to work with a wide variety of stakeholders, including representatives of target populations
  • Can develop innovative approaches to evaluation while considering the realities affecting a program (e.g., a small budget)
  • Incorporates evaluation into all program activities
  • Understands both the potential benefits and risks of evaluation
  • Educates program personnel in designing and conducting the evaluation
  • Will give staff the full findings (i.e., will not gloss over or fail to report certain findings)

Good evaluation requires a combination of skills that are rarely found in one person. The preferred approach is to choose an evaluation team that includes internal program staff, external stakeholders, and possibly consultants or contractors with evaluation expertise.

An initial step in the formation of a team is to decide who will be responsible for planning and implementing evaluation activities. One program staff person should be selected as the lead evaluator to coordinate program efforts. This person should be responsible for evaluation activities, including planning and budgeting for evaluation, developing program objectives, addressing data collection needs, reporting findings, and working with consultants. The lead evaluator is ultimately responsible for engaging stakeholders, consultants, and other collaborators who bring the skills and interests needed to plan and conduct the evaluation.

Although this staff person should have the skills necessary to competently coordinate evaluation activities, he or she can choose to look elsewhere for technical expertise to design and implement specific tasks. However, developing in-house evaluation expertise and capacity is a beneficial goal for most public health organizations. Of the characteristics of a good evaluator listed in the text box below, the evaluator’s ability to work with a diverse group of stakeholders warrants highlighting. The lead evaluator should be willing and able to draw out and reconcile differences in values and standards among stakeholders and to work with knowledgeable stakeholder representatives in designing and conducting the evaluation.

Seek additional evaluation expertise in programs within the health department, through external partners (e.g., universities, organizations, companies), from peer programs in other states and localities, and through technical assistance offered by CDC. [12]

You can also use outside consultants as volunteers, advisory panel members, or contractors. External consultants can provide high levels of evaluation expertise from an objective point of view. Important factors to consider when selecting consultants are their level of professional training, experience, and ability to meet your needs. Overall, it is important to find a consultant whose approach to evaluation, background, and training best fit your program’s evaluation needs and goals. Be sure to check all references carefully before you enter into a contract with any consultant.

To generate discussion around evaluation planning and implementation, several states have formed evaluation advisory panels. Advisory panels typically generate input from local, regional, or national experts who are otherwise difficult to access. Such an advisory panel can lend credibility to your efforts and prove useful in cultivating widespread support for evaluation activities.

Evaluation team members should clearly define their respective roles. For some teams, informal consensus is enough; others prefer a written agreement that describes who will conduct the evaluation and assigns specific roles and responsibilities to individual team members. Either way, the team must clarify and reach consensus on the:

  • Purpose of the evaluation
  • Potential users of the evaluation findings and plans for dissemination
  • Evaluation approach
  • Resources available
  • Protection for human subjects.

The agreement should also include a timeline and a budget for the evaluation.

This manual is organized by the six steps of the CDC Framework. Each chapter will introduce the key questions to be answered in that step, approaches to answering those questions, and how the four evaluation standards might influence your approach. The main points are illustrated with one or more public health examples that are composites inspired by actual work being done by CDC and states and localities. [13] Some examples that will be referred to throughout this manual:

Affordable home ownership program: The program aims to provide affordable home ownership to low-income families by identifying and linking funders/sponsors, construction volunteers, and eligible families. Together, they build a house over a multi-week period. At the end of the construction period, the home is sold to the family using a no-interest loan.

Childhood lead poisoning prevention program: Lead poisoning is the most widespread environmental hazard facing young children, especially in older inner-city areas. Even at low levels, elevated blood lead levels (EBLL) have been associated with reduced intelligence, medical problems, and developmental problems. The main sources of lead poisoning in children are paint and dust in older homes with lead-based paint. Public health programs address the problem through a combination of primary and secondary prevention efforts. A typical secondary prevention program at the local level does outreach and screening of high-risk children, identifying those with EBLL, assessing their environments for sources of lead, and case managing both their medical treatment and environmental corrections. However, these programs must rely on others to accomplish the actual medical treatment and the reduction of lead in the home environment.

Provider education for immunization: A common initiative of state immunization programs is a comprehensive provider education program to train and motivate private providers to provide more immunizations. A typical program includes:

  • A newsletter distributed three times per year to update private providers on new developments and changes in policy, and to provide brief education on various immunization topics
  • Immunization trainings held around the state, conducted by teams of state program staff and physician educators, on general immunization topics and the immunization registry
  • A Provider Tool Kit on how to increase immunization rates in their practice
  • Training of nursing staff in local health departments, who then conduct immunization presentations in individual private provider clinics
  • Presentations on immunization topics by physician peer educators at physician grand rounds and state conferences

Each chapter also provides checklists and worksheets to help you apply the teaching points.

[4] Scriven M. Minimalist theory of evaluation: The least theory that practice requires. American Journal of Evaluation 1998;19:57-70.

[5] Patton MQ. Utilization-focused evaluation: The new century text. 3rd ed. Thousand Oaks, CA: Sage, 1997.

[6] Green LW, George MA, Daniel M, Frankish CJ, Herbert CP, Bowie WR, et al. Study of participatory research in health promotion: Review and recommendations for the development of participatory research in health promotion in Canada. Ottawa, Canada: Royal Society of Canada, 1995.

[7] WHO European Working Group on Health Promotion Evaluation. Health promotion evaluation: Recommendations to policy-makers: Report of the WHO European working group on health promotion evaluation. Copenhagen, Denmark: World Health Organization, Regional Office for Europe, 1998.

[8] Public Health Functions Steering Committee. Public health in America. Fall 1994. Available at <http://www.health.gov/phfunctions/public.htm>. Accessed January 1, 2000.

[9] Dyal WW. Ten organizational practices of public health: A historical perspective. American Journal of Preventive Medicine 1995;11(6)Suppl 2:6-8.

[10] Centers for Disease Control and Prevention, op. cit.

[11] Joint Committee on Standards for Educational Evaluation. The program evaluation standards: How to assess evaluations of educational programs. 2nd ed. Thousand Oaks, CA: Sage Publications, 1994.

[12] CDC’s Prevention Research Centers (PRC) program is an additional resource. The PRC program is a national network of 24 academic research centers committed to prevention research and the ability to translate that research into programs and policies. The centers work with state health departments and members of their communities to develop and evaluate state and local interventions that address the leading causes of death and disability in the nation. Additional information on the PRCs is available at www.cdc.gov/prc/index.htm.

[13] These cases are composites of multiple CDC and state and local efforts that have been simplified and modified to better illustrate teaching points. While inspired by real CDC and community programs, they are not intended to reflect the current operation of any specific program.


Qualitative Research & Evaluation Methods: Integrating Theory and Practice


The fourth edition of Michael Quinn Patton's Qualitative Research & Evaluation Methods: Integrating Theory and Practice, published by Sage Publications, analyses and provides clear guidance and advice for using a range of different qualitative methods for evaluation.

  • Module 1. How qualitative inquiry contributes to our understanding of the world
  • Module 2. What makes qualitative data qualitative
  • Module 3. Making methods decisions
  • Module 4. The fruit of qualitative methods: Chapter summary and conclusion
  • Module 5. Strategic design principles for qualitative inquiry
  • Module 6. Strategic principles guiding data collection and fieldwork
  • Module 7. Strategic principles for qualitative analysis and reporting findings
  • Module 8: Integrating the 12 strategic qualitative principles in practice
  • Module 9. Understanding the Paradigms Debate: Quants versus Quals
  • Module 10. Introduction to Qualitative Inquiry Frameworks
  • Module 11. Ethnography and Autoethnography
  • Module 12. Positivism, Postpositivism, Empiricism and Foundationalist Epistemologies
  • Module 13. Grounded Theory and Realism
  • Module 14 Phenomenology and Heuristic Inquiry
  • Module 15 Social Constructionism, Constructivism, Postmodernism, and Narrative Inquiry
  • Module 16. Ethnomethodology, Semiotics, and Symbolic Interaction, Hermeneutics and Ecological Psychology
  • Module 17 Systems Theory and Complexity Theory
  • Module 18. Pragmatism, Generic Qualitative Inquiry, and Utilization-Focused Evaluation
  • Module 19 Patterns and themes across inquiry frameworks: Chapter summary and conclusions
  • Module 20. Practical purposes, concrete questions, and actionable answers: Illuminating and enhancing quality
  • Module 21. Program evaluation applications: Focus on outcomes
  • Module 22 Specialized qualitative evaluation applications
  • Module 23 Evaluating program models and theories of change, and evaluation models especially aligned with qualitative methods
  • Module 24 Interactive and participatory qualitative applications
  • Module 25 Democratic evaluation, indigenous research and evaluation, capacity building, and cultural competence
  • Module 26 Special methodological applications
  • Module 27 A vision of the utility of qualitative methods: Chapter summary and conclusion
  • Module 28 Design thinking: Questions derive from purpose, design answers questions
  • Module 29 Data Collection Decisions
  • Module 30 Purposeful sampling and case selection: Overview of strategies and options
  • Module 31 Single-significant-case sampling as a design strategy
  • Module 32 Comparison-focused sampling options
  • Module 33 Group characteristics sampling strategies and options
  • Module 34 Concept and theoretical sampling strategies and options
  • Module 35. Instrumental-use multiple-case sampling
  • Module 36 Sequential and emergence-driven sampling strategies and options
  • Module 37 Analytically focused sampling
  • Module 38 Mixed, stratified, and nested purposeful sampling strategies
  • Module 39 Information-rich cases
  • Module 40 Sample size for qualitative designs
  • Module 41 Mixed methods designs
  • Module 42 Qualitative design chapter summary and conclusion: Methods choices and decisions
  • Module 43 The Power of direct observation
  • Module 44. Variations in observational methods
  • Module 45. Variations in duration of observations and site visits: From rapid reconnaissance to longitudinal studies over years
  • Module 46. Variations in observational focus and summary of dimensions along which fieldwork varies
  • Module 47. What to observe: Sensitizing concepts
  • Module 48. Integrating what to observe with how to observe
  • Module 49. Unobtrusive observations and indicators, and documents and archival fieldwork
  • Module 50. Observing oneself: Reflexivity and Creativity, and Review of Fieldwork Dimensions
  • Module 51. Doing Fieldwork: The Data Gathering Process
  • Module 52. Stages of fieldwork: Entry into the field
  • Module 53. Routinization of fieldwork: The dynamics of the second stage
  • Module 54. Bringing fieldwork to a close
  • Module 55. The observer and what is observed: Unity, separation, and reactivity
  • Module 56. Chapter summary and conclusion: Guidelines for fieldwork
  • Module 57 The Interview Society: Diversity of applications
  • Module 58 Distinguishing interview approaches and types of interviews
  • Module 59 Question options and skilled question formulation
  • Module 60 Rapport, neutrality, and the interview relationship
  • Module 61 Interviewing groups and cross-cultural interviewing
  • Module 62. Creative modes of qualitative inquiry
  • Module 63. Ethical issues and challenges in qualitative interviewing
  • Module 64. Personal reflections on interviewing, and chapter summary and conclusion
  • Module 65. Setting the Context for Qualitative Analysis: Challenge, Purpose, and Focus
  • Module 66. Thick description and case studies: The bedrock of qualitative analysis
  • Module 67. Qualitative Analysis Approaches: Identifying Patterns and Themes
  • Module 68. The intellectual and operational work of analysis
  • Module 69. Logical and matrix analyses, and synthesizing qualitative studies
  • Module 70. Interpreting findings, determining substantive significance, phenomenological essence, and hermeneutic interpretation
  • Module 71. Causal explanation through qualitative analysis
  • Module 72. New analysis directions: Contribution analysis, participatory analysis, and qualitative counterfactuals
  • Module 73. Writing up and reporting findings, including using visuals
  • Module 74. Special analysis and reporting issues: Mixed methods, focused communications, and principles-focused report exemplar.
  • Module 75 Chapter summary and conclusion, plus case study exhibits
  • Module 76. Analytical processes for enhancing credibility: systematically engaging and questioning the data
  • Module 77. Four triangulation processes for enhancing credibility
  • Module 78. Part 1: Universal criteria, and traditional scientific research versus constructivist criteria
  • Module 79. Part 2: Artistic, participatory, critical change, systems, pragmatic, and mixed criteria
  • Module 80 Credibility of the inquirer
  • Module 81 Generalizations, Extrapolations, Transferability, Principles, and Lessons learned
  • Module 82 Enhancing the credibility and utility of qualitative inquiry by addressing philosophy of science issues

Patton, M. Q. (2014). Qualitative Research & Evaluation Methods: Integrating Theory and Practice. SAGE Publications.



Innovations in Mixed Methods Evaluations

Lawrence A. Palinkas

1. Department of Children, Youth and Families, Suzanne Dworak-Peck School of Social Work, University of Southern California, Los Angeles, CA

Sapna J. Mendon

Alison B. Hamilton

2. UCLA Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, 90024-1759, USA.

3. VA Center for the Study of Healthcare Innovation, Implementation, & Policy, VA Greater Los Angeles Healthcare System, Los Angeles, CA

Mixed methods research, i.e., research that draws on both qualitative and quantitative methods in varying configurations, is well suited to address the increasing complexity of public health problems and their solutions. This review focuses specifically on innovations in mixed methods evaluations of intervention, program or policy (i.e., practice) effectiveness and implementation. The article begins with an overview of the structure, function and process of different mixed methods designs and then provides illustrations of their use in effectiveness studies, implementation studies, and combined effectiveness-implementation hybrid studies. The article then examines four specific innovations: procedures for transforming (or "quantitizing") qualitative data, applying rapid assessment and analysis procedures in the context of mixed methods studies, development of measures to assess implementation outcomes, and strategies for conducting both random and purposive sampling, particularly in implementation-focused evaluation research. The article concludes with an assessment of challenges to integrating qualitative and quantitative data in evaluation research.

Introduction

As in any field of science, advancing our understanding of the complexity of public health problems and their solutions has required more complex tools. Among the tools that have gained increasing attention in recent years in health services research and health promotion and disease prevention are designs that have been referred to as mixed methods. Mixed methods research is defined as "research in which the investigator collects and analyzes data, integrates the findings, and draws inferences using both qualitative and quantitative approaches or methods in a single study or program of inquiry" (45). However, we qualify this definition in the following manner. First, integration may occur during the design and data collection phases of the research process in addition to the data analysis and interpretation phases (18). Second, mixed methods research is often conducted by a team of investigators rather than a single investigator, with each member contributing specific expertise to the process of integrating qualitative and quantitative methods. Third, a program of inquiry may involve more than one study, but the studies are themselves linked by the challenge of answering a single question or set of related questions. Finally, the use of quantitative and qualitative approaches in combination provides a better understanding of research problems than either approach alone (18, 59, 71). In a mixed methods design, each set of methods plays an important role in achieving the overall goals of the project and is enhanced in value and outcome by its ability to offset the weaknesses inherent in the other set and by its "engagement" with the other set of methods in a synergistic fashion (73, 86, 88).

Although mixed methods research is not new ( 72 ), the use of mixed methods designs has become increasingly common in the evaluation of the process and outcomes of health care intervention, program or policy effectiveness and their implementation ( 63 , 64 , 65 , 67 ). A number of guides for conducting mixed methods evaluations are available ( 16 , 38 , 65 ). In this article, we review some recent innovations in mixed methods evaluations in health services effectiveness and implementation. Specifically, we highlight techniques for “quantitizing” qualitative data; applying rapid assessment procedures to collecting and analyzing evaluation data; developing measures of implementation outcomes; and sampling study participants in mixed methods investigations.

Characteristics of Mixed Methods Designs in Evaluation Research

Several typologies of mixed methods designs exist, including convergent, explanatory, exploratory, embedded, transformative, and multiphase designs (18). These, along with other mixed methods designs used in evaluation research, can be categorized in terms of their structure, function, and process (1, 4, 63, 65, 73).

Quantitative and qualitative methods may be used simultaneously (e.g., QUAN + qual) or sequentially (e.g., QUAN → qual), with one method viewed as dominant or primary and the other as secondary (e.g., QUAL + quan) (59), although equal weight can be given to both methods (e.g., QUAN + QUAL) (18, 64, 70). Sequencing of methods may also vary according to the phase of the research process, such that quantitative and qualitative data may be collected (dc) simultaneously (e.g., QUAN dc + qual dc) but analyzed (da) sequentially (e.g., QUAN da → qual da). Data collection and analysis of both methods may also occur in an iterative fashion (e.g., QUAN dc/da → qual da → QUAN2 dc/da).
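
To make this notation concrete, the sketch below (our illustration, not part of the published typologies) represents a design as a small data structure in which upper-case strings mark the dominant method, lower-case strings the secondary method, and phase tags ("dc" for data collection, "da" for data analysis) are attached to each strand.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Strand:
    method: str        # "QUAN"/"QUAL" if dominant, "quan"/"qual" if secondary
    phases: List[str]  # e.g., ["dc", "da"] for data collection and analysis


@dataclass
class MixedDesign:
    strands: List[Strand]
    sequencing: str    # "+" for simultaneous, "->" for sequential

    def label(self) -> str:
        # Render the shorthand used in the text, e.g., "QUAN dc/da -> qual da".
        parts = [f"{s.method} {'/'.join(s.phases)}" for s in self.strands]
        return f" {self.sequencing} ".join(parts)


# A dominant quantitative strand followed by a secondary qualitative analysis.
design = MixedDesign(
    strands=[Strand("QUAN", ["dc", "da"]), Strand("qual", ["da"])],
    sequencing="->",
)
print(design.label())  # QUAN dc/da -> qual da
```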

In evaluation research, mixed methods have been used to achieve different functions. Palinkas and colleagues (63, 65) identified five such functions: 1) convergence, where one type of data is used to validate or confirm conclusions reached from analysis of the other type of data (also known as triangulation), or the sequential quantification of qualitative data (also known as transformation) (18); 2) complementarity, where quantitative data are used to evaluate outcomes while qualitative data are used to evaluate process, or qualitative methods are used to provide depth of understanding and quantitative methods are used to provide breadth of understanding; 3) expansion or explanation, where qualitative methods are used to explain or elaborate upon the findings of quantitative studies, but may also serve as the impetus for follow-up quantitative investigations; 4) development, where one method may be used to develop instruments, concepts or interventions that will enable use of the other method to answer other questions; and 5) sampling (80), or the sequential use of one method to identify a sample of participants for use of the other method.

The process of integrating quantitative and qualitative data occurs in three forms: merging, connecting, and embedding the data (18, 63, 65). In general, quantitative and qualitative data are merged when the two sets of data are used to provide answers to the same questions, connected when used to provide answers to related questions sequentially, and embedded when used to provide answers to related questions simultaneously.

Illustrations of Mixed Methods Designs in Evaluation Research

To demonstrate the variations in structure, function and process of mixed method designs in evaluation, we provide examples of their use in evaluations of intervention or program effectiveness and/or implementation. Some designs are used to evaluate effectiveness or implementation alone, while other designs are used to conduct simultaneous evaluations of both effectiveness and implementation.

Effectiveness studies

Often, mixed methods are applied in the evaluation of program effectiveness in quasi-experimental and experimental designs. For instance, Dannifer and colleagues (25) evaluated the effectiveness of a farmers' market nutrition education program using focus groups and surveys. Market shoppers, grouped by the number of classes attended (none, one class, more than one class), were asked about attitudes, self-efficacy, and behaviors regarding fruit and vegetable preparation and consumption (QUAN dc → qual dc). Bivariate and regression analysis examined differences in outcomes as a function of the number of classes attended, and the qualitative analysis was based on a grounded theory approach (14). By connecting the results (QUAN da → qual da), qualitative findings were used to expand upon the results of the quantitative analysis with respect to changes in knowledge and attitudes.

In other effectiveness evaluations, quantitative methods are used to evaluate program or intervention outcomes, while mixed methods play a secondary role in the evaluation of process. For example, Cook and colleagues (13) proposed using a stepped wedge randomized design to examine the effect of an alcohol health champions program. A process evaluation will explore the context, implementation, and response to the intervention using mixed methods (quan dc + qual dc) in which the two types of data are merged (quan da →← qual da) to provide a complementary perspective on these phenomena.

Implementation studies

As with effectiveness studies, studies that focus solely on implementation use mixed methods to evaluate process and outcomes. Hanson and colleagues ( 41 ) describe a design for a non-experimental study of a community-based learning collaborative (CBLC) strategy for implementing trauma-focused cognitive behavioral therapy ( 12 ) by promoting inter-professional collaboration between child welfare and child mental health service systems. Quantitative data will be used to assess individual and organization level measures of interpersonal collaboration (IC), inter-organizational relationships (IOR), penetration, and sustainability. Mixed quantitative/qualitative data will then be collected and analyzed sequentially for three functions: 1) expansion to provide further explanation of the quantitative findings related to CBLC strategies and activities (i.e., explanations of observed trends in the quantitative results; Quan dc → QUAL dc/da ); 2) convergence to examine the extent to which interview data support the quantitative monthly online survey data (i.e., validity of the quantitative data; QUAN da →← qual da ); and 3) complementarity to explore further factors related to sustainment of IC/IOR and penetration/use outcomes over the follow-up period (QUAN da + QUAL da ). Taken together, the results of these analyses will inform further refinement of the CBLC model.

Hybrid designs

Hybrid designs are intended to efficiently and simultaneously evaluate the effectiveness and implementation of an evidence-based practice (EBP). There are three types of hybrid designs (20). Type I designs are primarily focused on evaluating the effectiveness of the intervention in a real-world setting, while assessing implementation is secondary. Type II designs give equal priority to the evaluation of intervention effectiveness and implementation, which may involve a more detailed examination of the implementation process. Type III designs are primarily focused on the evaluation of an implementation strategy and, as a secondary priority, may evaluate intervention effectiveness, especially when intervention outcomes may be linked to implementation outcomes.

In Hybrid I designs, quantitative methods are typically used to evaluate intervention or program effectiveness, while mixed methods are used to identify potential implementation barriers and facilitators (37) or to evaluate implementation outcomes such as fidelity, feasibility, and acceptability (29), or reach, adoption, implementation, and sustainability (79). For instance, Broder-Fingert and colleagues (7) plan to simultaneously evaluate effectiveness and collect data on implementation of a patient navigation intervention to improve access to services for children with autism spectrum disorders in a two-arm randomized comparative effectiveness trial. A mixed-method implementation evaluation will be structured to achieve three aims that will be carried out sequentially, with each project informing the next (QUAL dc/da → QUAL dc/da → QUAN dc/da). Data will also converge in the final analysis (QUAL da →← QUAN da) for the purpose of triangulation.

Mixed methods have been used in Hybrid 2 designs to evaluate both process and outcomes of program effectiveness and implementation ( 19 , 50 , 78 ). For instance, Hamilton and colleagues ( 40 ) studied an evidence-based quality improvement approach for implementing supported employment services at specialty mental health clinics in a site-level controlled trial at four implementation sites and four control sites. Data collected included patient surveys and semi-structured interviews with clinicians and administrators before, during, and after implementation; qualitative field notes; structured baseline and follow-up interviews with patients; semi-structured interviews with patients after implementation; and administrative data. Qualitative results were merged to contextualize the outcomes evaluation (QUAN da/dc + QUAL da/dc ) for complementarity.

Hybrid 3 designs are similar to the implementation-only studies described above. While quantitative methods are typically used to evaluate effectiveness, mixed methods are used to evaluate both the process and outcomes of specific implementation strategies (23, 87). For instance, Lewis et al. (51) conducted a dynamic cluster randomized trial of standardized versus tailored measurement-based care (MBC) implementation in a large provider of community care. Quantitative data were used to compare the effect of standardized versus tailored MBC implementation on clinician- and client-level outcomes. Quantitative measures of MBC fidelity and qualitative data on implementation barriers obtained from focus groups were simultaneously mixed in a QUAL + QUAN structure that served the function of data expansion, for the purposes of explanation and elaboration, using the process of data connection.

Procedures for Collecting Qualitative Data

Mixed methods evaluations often require timely collection and analysis of data to provide information that can inform the intervention itself or the strategy used to successfully implement the intervention. One such method is a technique developed by anthropologists known as Rapid Assessment Procedures (RAP). This approach is designed to provide depth to the understanding of the event and its community context that is critical to the development and implementation of more quantitative approaches involving the use of survey questionnaires and diagnostic instruments ( 5 , 84 ).

Because evaluation findings are often needed on a shorter turnaround time, qualitative researchers in implementation science have turned toward rapid analysis techniques in which key concepts are identified in advance to structure and focus the inquiry (32, 39). In the rapid analytic approach used by Hamilton (39), main topics (domains) are drawn from interview and focus group guides and a template is developed to summarize transcripts (32, 49). Summaries are analyzed using matrix analysis, and key actionable findings are shared with the implementation team to guide further implementation (e.g., the variable use of implementation strategies) in real time, particularly during phased implementation research such as a hybrid type II study (20).
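
A minimal sketch of the matrix step is shown below; the sites, domains, and summary text are hypothetical, and a real template would carry analyst-written summaries of full transcripts rather than one-line notes.

```python
import pandas as pd

# Hypothetical transcript summaries keyed by site and interview-guide domain.
summaries = pd.DataFrame([
    {"site": "Clinic A", "domain": "Leadership support",
     "summary": "Chief engaged; weekly implementation check-ins."},
    {"site": "Clinic A", "domain": "Staffing",
     "summary": "One navigator covering two roles; hiring pending."},
    {"site": "Clinic B", "domain": "Leadership support",
     "summary": "No designated champion identified yet."},
    {"site": "Clinic B", "domain": "Staffing",
     "summary": "Fully staffed; low turnover reported."},
])

# Matrix analysis: one row per site, one column per domain, so the team can
# scan for cross-site patterns and share actionable findings quickly.
matrix = summaries.pivot(index="site", columns="domain", values="summary")
print(matrix)
```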

Rapid assessment procedures have been used in evaluation studies of healthcare organization and delivery ( 92 ). However, with few exceptions ( 3 , 51 ), they have been used primarily as standalone investigations with no integration with quantitative methods ( 11 , 36 , 44 , 83 , 97 ). Ackerman and colleagues ( 3 ) used “rapid ethnography” to understand efforts to implement secure websites (patient portals) in “safety net” health care systems that provide services for low-income populations. Site visits at four California safety net health systems included interviews with clinicians and executives, informal focus groups with front-line staff, observations of patient portal sign-up procedures and clinic work, review of marketing materials and portal use data, and a brief survey. However, “researchers conducting rapid ethnographies face tensions between the breadth and depth of the data they collect and often need to depend on participants who are most accessible due to time constraints” ( 93 , pp. 321–322).

More recently, the combination of clinical ethnography and rapid assessment procedures has been modified for use in pragmatic clinical trials (66). Known as Rapid Assessment Procedure-Informed Clinical Ethnography, or RAPICE, the process begins with preliminary discussions with potential sites, followed by training calls and site visits conducted by the study Principal Investigator acting as a participant observer (PO). During the visit, the PO participates in and observes meetings with site staff, conducts informal or semi-structured interviews to assess implementation progress, collects available documents that record procedures implemented, and completes field notes. Both site-specific logs and domain-specific logs (i.e., trial-specific activities, evidence-based intervention implementation, sustainability, and economic considerations) are maintained. Interview transcripts and field notes are subsequently reviewed by the study's mixed methods consultant (MMC) (98). A discussion ensues until both the PO and the MMC reach consensus as to the meaning and significance of the data (66). This approach is consistent with the pragmatic trial requirement for the minimization of time-intensive research methods (89) and the implementation science goal of understanding trial processes that could provide readily implementable intervention models (99).

The use of RAPICE is illustrated by Zatzick and colleagues (99) in an evaluation of the American College of Surgeons national policy requirements and best practice guidelines used to inform the integrated operation of US trauma centers. In a hybrid trial testing the delivery of high-quality screening and intervention for PTSD across US level 1 trauma centers, the study uses implementation conceptual frameworks and RAPICE methods to evaluate the uptake of the intervention model using site visit data.

Procedures for Analyzing Qualitative Data

Intervention and practice evaluations using mixed methods designs generally rely on semi-structured interviews, focus groups, and ethnographic observations as sources of qualitative data. However, the demand for rigor in mixed methods designs has led to innovative approaches in both the kind of qualitative data collected and how these data are analyzed. One such approach transforms qualitative data into quantitative values, referred to as "quantitizing" qualitative data (56). This approach must adhere to the assumptions that govern the collection of qualitative as well as quantitative data. Caution must be exercised to make certain that the application of one set of assumptions (e.g., ensuring that every participant had an opportunity to answer a question when reporting a frequency or rate) does not violate another set of assumptions (e.g., samples purposively selected to ensure depth of understanding). For instance, quantitized data may be used for purposes of description, but may not necessarily satisfy the requirements for application of statistical tests to ascertain the level of significance of differences across groups.

Three particular approaches to quantifying qualitative data are summarized below.

Concept mapping

The technique of concept mapping (91), in which qualitative data elicited from focus groups are quantitized, is an example of convergence through transformation (2, 74). Concept mapping is a structured conceptualization process and a participatory qualitative research method that yields a conceptual framework for how a group views a particular topic. Similar to other methods such as the nominal group technique (NGT) (26) and the Delphi method (26), concept mapping uses inductive and structured small-group data collection processes to qualitatively generate different ideas or constructs and then quantitize these data for quantitative analysis. In the case of concept mapping, the qualitative data are used to produce illustrative cluster maps depicting relationships of ideas in the form of clusters.

Concept mapping involves six steps: preparation, generation, structuring, representation, interpretation, and utilization. In the preparation stage, focal areas are identified and criteria for participant selection/recruitment are determined. In the generation stage, participants address the focal question and generate a list of items to be used in subsequent data collection and analysis. In the structuring stage, participants independently organize the list of items generated by sorting the items into piles based on perceived similarity. Each item is then rated in terms of its importance or usefulness to the focal question. In the representation stage, data are entered into specialized concept-mapping computer software (Concept Systems), which provides quantitative summaries and visual representations or concept maps based on multidimensional scaling and hierarchical cluster analysis. In the interpretation stage, participants collectively process and qualitatively analyze the concept maps, assessing and discussing the cluster domains, evaluating items that form each cluster, and discussing the content of each cluster. This leads to a reduction in the number of clusters. Finally, in the utilization stage, findings are discussed to determine how best they inform the original focal question.
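
The representation stage can be approximated computationally as shown below. This is a simplified sketch with made-up pile-sort data rather than the Concept Systems workflow the text describes: it builds a co-occurrence (similarity) matrix from participants' sorts, projects it into two dimensions with multidimensional scaling, and clusters the resulting points hierarchically.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical pile-sort data: each participant groups the same 6 statements
# into piles; each inner list is one pile of statement indices.
sorts = [
    [[0, 1, 2], [3, 4], [5]],
    [[0, 1], [2, 3, 4], [5]],
    [[0, 1, 2], [3, 4, 5]],
]
n_items = 6

# Co-occurrence matrix: how often two statements land in the same pile.
sim = np.zeros((n_items, n_items))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                sim[i, j] += 1
sim /= len(sorts)

# Convert similarity to dissimilarity and project to 2-D, as concept mapping
# does with multidimensional scaling.
dissim = 1.0 - sim
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Hierarchical clustering of the 2-D point map yields candidate clusters
# that participants later interpret and label.
clusters = fcluster(linkage(coords, method="ward"), t=2, criterion="maxclust")
print(coords.round(2))
print(clusters)
```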

Waltz and colleagues ( 94 ) illustrate the use of concept mapping in a study to validate the compilation of discrete implementation strategies identified in the Expert Recommendations for Implementing Change (ERIC) study. Hierarchical cluster analysis supported organizing the 73 strategies into 9 categories (see Figure 1 below).

[Figure 1. Illustration of the graphic output of concept mapping: a point and cluster map of all 73 strategies identified in the ERIC process. Source: Waltz et al. (88).]

Qualitative comparative analysis

Another procedure for quantitizing qualitative data that has gained increasing attention in recent years is qualitative comparative analysis (QCA). Developed in the 1980s ( 75 ), QCA was designed to study the complexities often observed in social sciences research by examining the nature of relationships. QCA can be used with qualitative data, quantitative data, or a combination of the two, and is particularly helpful in conducting studies that may have a small to medium sample size, but can also be used with large sample sizes ( 76 ).

Similar to the constant comparative method used in grounded theory ( 34 ) and thematic analysis ( 53 ) in which the analyst compares and contrasts incidents or codes to create categories or themes to generate a theory, QCA uses a qualitative approach in that it entails an iterative process and dialogue with the data. Findings in QCA, however, are based on quantitative analyses; specifically a Boolean algebra technique that allows for a reductionist approach interpreted in set-theoretic terms. The underlying purpose in using this method is to identify one or multiple configurations that are sufficient to produce an outcome (see Table 1 ) with enough consistency to illustrate that the same pathway will continue to produce the outcome, and a coverage score indicating the percentage of cases where a given configuration is applicable. Pathways are interpreted using logical ANDs, logical ORs, and the presence or absence of a condition. Configuration #3 below, for example, would be interpreted as: the presence of conditions A and B when combined with either E or D, but only in the absence of C, is sufficient to produce outcome X.

Table 1. Development of causal pathways to an outcome identified through qualitative comparative analysis

Original conditions associated with outcome (X): A, B, C, D, E → X

Causal pathways to outcome (X) identified through QCA (where * denotes logical AND, + denotes logical OR, and ~ denotes the absence of a condition):

  1) A * C * E → X
  2) A * B * D → X
  3) A * B * (E + D) * ~C → X

Based on the type of data being used, the context of what is being studied, and what is already known about the particular area of interest, a researcher begins by selecting one of two commonly used analyses: crisp-set (csQCA) or fuzzy-set (fsQCA). In csQCA, conditions and the outcome are dichotomized, meaning that a given case's membership in a condition or outcome is either fully in or fully out (76). Alternatively, membership in a fuzzy set can fall on a three-, four-, or six-point scale or a continuous value set, enabling the researcher to qualitatively assess the degree of membership most appropriate for a case on any given condition.

Procedures for conducting a QCA are illustrated in Figure 2 below. Prior to beginning formal analyses, several steps, including determining outcomes and conditions, identifying cases, and calibrating membership scores, inform the development of a data matrix. QCA relies heavily on substantive knowledge, and decisions made throughout the analytic process are guided by a theoretical framework rather than inferential statistics (76). In the first step, researchers assign weights to constructs based on previous knowledge and theory, rather than basing thresholds on means or medians. The number of conditions is carefully selected, as having as many conditions as cases will result in uniqueness and failure to detect configurations (55). Once conditions have been defined and operationalized, each case can be dichotomized for membership. In crisp-set analysis, cases are classified as having full non-membership (0) or full membership (1) in the given outcome by using a qualitative approach (indirect calibration) or a quantitative approach with log odds (direct calibration) (76).
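
The log-odds idea behind direct calibration can be sketched as follows. This is a simplified illustration of the commonly used direct (log-odds) calibration approach rather than the exact routine of any particular software package, and the condition, anchors, and values are hypothetical.

```python
import math

def direct_calibration(value, full_non, crossover, full_membership):
    """Fuzzy-set membership via a direct (log-odds) calibration.

    The three anchors map raw values onto the 0-1 membership scale:
    full_non -> about 0.05, crossover -> 0.5, full_membership -> about 0.95.
    """
    if value >= crossover:
        scalar = math.log(0.95 / 0.05) / (full_membership - crossover)
    else:
        scalar = math.log(0.05 / 0.95) / (full_non - crossover)
    log_odds = (value - crossover) * scalar
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical condition: percentage of staff trained, with anchors chosen
# from substantive knowledge (20% = fully out, 50% = crossover, 80% = fully in).
for pct in (10, 35, 50, 65, 90):
    print(pct, round(direct_calibration(pct, 20, 50, 80), 2))
```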

[Figure 2. QCA as an approach and as an analytic technique. Source: Kane et al. (44), with permission.]

Once a data matrix has been created, formal csQCA can commence using fs/QCA software, the R suite, or other statistical packages for configurational comparative methods (a comprehensive list can be found at www.compass.org/software). Analyses should begin with determining whether all conditions originally hypothesized to influence the outcome are, indeed, necessary (81). Using a Boolean algebra algorithm, a truth table is then designed to provide a reduced number of configurations. A truth table may show contradictions (consistency score = .3-.7), indicating that it is not clear whether a configuration is consistent with the outcome (76). Several techniques can be used to resolve such contradictions (77), many of which entail revisiting the operationalization and/or selection of conditions, or reviewing cases for fit.

Once all contradictions are resolved and cases are assigned full non-membership or full membership on the given outcome, sufficiency analyses can be conducted. Initial sufficiency analysis is usually based on the presence of an outcome. The Quine-McCluskey algorithm produces a logical combination or multiple combinations of conditions that are sufficient for the outcome to occur. Three separate solutions are given: parsimonious, intermediate, and complex. Typically, the intermediate solution is selected for the purposes of interpretation (76). One interest in interpreting findings is being able to state that a given combination will almost always produce the outcome; this is measured by consistency. While a perfect score of 1 indicates that a causal pathway will always be consistent with the outcome, a score ≥ .8 is a strong measure of fit (76, 77). Complementary to consistency is coverage, which identifies the degree to which the cases are explained by a given causal pathway. While there is often a trade-off between these two measures of fit, without high consistency it is not meaningful to have high coverage (76).
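
As a concrete illustration, the sketch below computes crisp-set consistency and coverage for a single configuration. The conditions, cases, and outcome are invented; in practice the fs/QCA software or R packages mentioned above would perform the full truth-table minimization.

```python
import pandas as pd

# Hypothetical dichotomized data matrix: 1 = full membership, 0 = non-membership.
data = pd.DataFrame({
    "A": [1, 1, 1, 0, 1, 0],
    "B": [1, 1, 0, 1, 1, 0],
    "C": [0, 0, 1, 0, 0, 1],
    "X": [1, 1, 0, 0, 1, 0],   # outcome
})

def consistency_and_coverage(df, conditions, outcome="X"):
    """Crisp-set consistency and coverage for one configuration.

    `conditions` maps each condition to its required value
    (1 = present, 0 = absent), e.g., {"A": 1, "B": 1, "C": 0} for A*B*~C.
    """
    in_config = pd.Series(True, index=df.index)
    for cond, value in conditions.items():
        in_config &= df[cond] == value
    config_n = in_config.sum()
    outcome_n = (df[outcome] == 1).sum()
    overlap = (in_config & (df[outcome] == 1)).sum()
    consistency = overlap / config_n if config_n else float("nan")
    coverage = overlap / outcome_n if outcome_n else float("nan")
    return consistency, coverage

# Configuration A*B*~C: A and B present, C absent.
cons, cov = consistency_and_coverage(data, {"A": 1, "B": 1, "C": 0})
print(f"consistency={cons:.2f}, coverage={cov:.2f}")
```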

QCA has increasingly been used in health services research to evaluate program effectiveness and implementation where outcomes are dependent on interconnected structures and practices (15, 28, 30, 46, 47, 48, 90, 92). For instance, Kane and colleagues (43) used QCA to examine the elements of organizational capacity to support program implementation that result in successful completion of public health program objectives in a public health initiative serving 50 communities. The QCA used case study and quantitative data collected from 22 awardee programs to evaluate the Communities Putting Prevention to Work (CPPW) program. The results revealed two combinations sufficient for completing most work plan objectives: 1) having experience implementing public health improvements in combination with having a history of collaboration with partners; and 2) not having experience implementing public health improvements in combination with having leadership support.

Implementation frameworks

A third approach to quantitizing qualitative information used in evaluation research has been the coding and scaling of responses to interviews guided by existing implementation frameworks. These techniques call for assigning a numeric value to qualitative responses to questions pertaining to a set of variables believed to be predictive of successful implementation outcomes, and then comparing the quantitative values across implementation domains, implementation sites, or stakeholder groups involved in implementation (24, 95).

In an illustration of this approach, Damschroder and Lowery (22) embedded the constructs of the Consolidated Framework for Implementation Research (CFIR) (21) in semi-structured interviews conducted to describe factors that explained the wide variation in implementation of MOVE!, a weight management program disseminated nationally to Veterans Affairs (VA) medical centers. Interview transcripts were coded and used to develop a case memo for each facility. Numerical ratings were then assigned to each construct to reflect their valence and their magnitude or strength. This process is illustrated in Figure 3 below. The numerical ratings ranged from −2 (the construct is mentioned by two or more interviewees as a negative influence in the organization, an impeding influence on work processes, and/or an impeding influence on implementation efforts) to +2 (the construct is mentioned by two or more interviewees as a positive influence in the organization, a facilitating influence on work processes, and/or a facilitating influence on implementation efforts). Of the 31 constructs assessed, 10 strongly distinguished between facilities with low versus high MOVE! implementation effectiveness; 2 constructs exhibited a weak pattern in distinguishing between low versus high effectiveness; 16 constructs were mixed across facilities; and 2 had insufficient data to assess.
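
A schematic version of the rating comparison is sketched below; the facilities and ratings are hypothetical, and the threshold used to flag a "distinguishing" construct is ours rather than Damschroder and Lowery's.

```python
import pandas as pd

# Hypothetical CFIR construct ratings per facility on the -2..+2 scale
# described above, plus each facility's implementation effectiveness group.
ratings = pd.DataFrame(
    {
        "Leadership engagement": [2, 1, -1, -2],
        "Available resources":   [1, 2, -2, -1],
        "Tension for change":    [1, -1, 1, -1],
    },
    index=["Facility 1", "Facility 2", "Facility 3", "Facility 4"],
)
groups = pd.Series(["high", "high", "low", "low"], index=ratings.index)

# Compare mean ratings between high- and low-implementation facilities;
# constructs whose group means diverge widely are flagged as "distinguishing".
means = ratings.groupby(groups).mean()
gap = means.loc["high"] - means.loc["low"]
distinguishing = gap[gap.abs() >= 2].index.tolist()

print(gap)
print("Distinguishing constructs:", distinguishing)
```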

[Figure 3. Team-based work flow for case analysis. Source: Damschroder and Lowery (22).]

In the absence of quantification of the qualitative data in these three analytical approaches, a thematic content analysis approach (43) might have been used to analyze the data obtained from the small-group concept mapping brainstorming sessions or from the interviews or focus groups that are part of the QCA. A qualitative framework approach (33) might have been used to analyze the data obtained from the interviews using the CFIR template. The analysis would be inductive for data collected in the concept mapping exercise, inductive-deductive for data collected in the QCA exercise, and deductive for data collected in the framework exercise. With quantification, these data are largely used to describe a framework (concept mapping), to generate hypotheses (implementation framework), or to test hypotheses (qualitative comparative analysis).

Procedures for Measuring Evaluation Outcomes

In addition to their use to evaluate intervention effectiveness and implementation, mixed methods have increasingly been employed to develop innovative measurement tools. Three such recent efforts are described below.

Stages of Implementation Completion (SIC)

The SIC is an 8-stage assessment tool ( 9 ) developed as part of a large-scale randomized implementation trial that contrasted two methods of implementing Treatment Foster Care Oregon (TFCO, formerly Multidimensional Treatment Foster Care) ( 10 ), an EBP for youth with serious behavioral problems in the juvenile justice and child welfare systems. The eight stages range from Engagement (Stage 1) with the developers/purveyors in the implementation process to achievement of Competency in program delivery (Stage 8). The SIC was developed to measure a community or organization’s progress and milestones toward successful implementation of the TFCO model regardless of the implementation strategy utilized. Within each of the eight stages, sub-activities are operationalized, and completion of activities is monitored along with the length of time taken to complete these activities.
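
A minimal sketch of the kind of monitoring the SIC supports is shown below; the stage names, activity counts, and dates are hypothetical, and the scoring is simplified to the proportion of sub-activities completed and the elapsed time per stage.

```python
# Hypothetical SIC-style monitoring: proportion of sub-activities completed and
# elapsed time per stage. Stage names, counts, and dates are illustrative only.
from datetime import date

stages = {
    "Stage 1: Engagement": {"completed": 3, "total": 3,
                            "start": date(2023, 1, 10), "end": date(2023, 2, 1)},
    "Stage 2: Consideration of feasibility": {"completed": 2, "total": 4,
                            "start": date(2023, 2, 2), "end": date(2023, 3, 15)},
}

for name, s in stages.items():
    proportion = s["completed"] / s["total"]
    duration_days = (s["end"] - s["start"]).days
    print(f"{name}: {proportion:.0%} of sub-activities completed in {duration_days} days")
```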

In an effort to examine the utility and validity of the SIC, Palinkas and colleagues ( 64 ) examined influences on the decisions of administrators of youth-serving organizations to initiate and proceed with implementation of three EBPs: Multisystemic Therapy ( 43 ), Multidimensional Family Therapy ( 52 ), and TFCO. Guided by the SIC framework, semi-structured interviews were conducted with 19 agency chief executive officers and program directors of 15 youth-serving organizations. Agency leaders’ self-assessments of implementation feasibility and desirability in the stages that occur before (Pre-implementation), during (Implementation), and after (Sustainment) implementation were found to be influenced by several characteristics of the intervention, inner setting, and outer setting that were unique to a phase in some instances and operated in more than one phase in other instances. Findings supported the validity of using the SIC to measure implementation of EBPs other than TFCO in a variety of practice settings, identified opportunities for using agency leader models to develop strategies to facilitate implementation of EBPs, and supported using the SIC as a standardized framework for guiding agency leader self-assessments of implementation.

Sustainment Measurement System (SMS)

The development of the SMS to measure sustainment of prevention programs and initiatives is another illustration of the use of mixed methods to develop evaluation tools. Palinkas and colleagues ( 69 , 70 ) interviewed 45 representatives of 10 grantees and 9 program officers within 4 SAMHSA prevention programs to identify key domains of sustainability indicators (i.e., dependent variables) and requirements or predictors (i.e., independent variables). The conceptualization of “sustainability” was captured using three approaches: semi-structured interviews to identify experiences with implementation and sustainability barriers and facilitators; a free list exercise to identify how participants conceptualized sustainability, program elements they wished to sustain, and requirements to sustain such elements; and a checklist of CFIR constructs assessing how important each item was to sustainment. Interviews were analyzed using a grounded theory approach ( 14 ), while free lists and CFIR items were quantitized: the former by applying rank-ordered weights to the frequencies of listed items, and the latter by using a numeric scale ranging from 0 (not important) to 2 (very important) ( 69 ). Four sustainability elements were identified by all three data sets (ongoing coalitions, collaborations and networks; infrastructure and capacity to support sustainability; ongoing evaluation of performance and outcomes; and availability of funding and resources), and five elements were identified by two of the three data sets (community need for program, community buy-in and support, supportive leadership, presence of a champion, and evidence of positive outcomes).
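
The quantitizing of the free-list data can be illustrated with a short sketch; the listed elements are hypothetical, and the rank-weighted scoring shown here is a common salience-style index rather than necessarily the exact scheme used in the SMS study.

```python
# Rank-weighted salience scoring of free-list data. The listed elements are
# hypothetical; earlier mentions receive larger weights, then scores are
# averaged across respondents.
from collections import defaultdict

free_lists = [
    ["funding", "coalitions", "evaluation"],      # respondent 1, in order of mention
    ["coalitions", "funding"],                    # respondent 2
    ["infrastructure", "funding", "coalitions"],  # respondent 3
]

salience = defaultdict(float)
for listing in free_lists:
    n = len(listing)
    for rank, item in enumerate(listing, start=1):
        salience[item] += (n - rank + 1) / n  # weight declines with rank

for item, total in sorted(salience.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {total / len(free_lists):.2f}")
```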

RE-AIM QuEST

Another innovation in the assessment of implementation outcomes is the RE-AIM Qualitative Evaluation for Systematic Translation (RE-AIM QuEST), a mixed methods framework developed by Forman and colleagues ( 31 ). The RE-AIM (Reach, Efficacy/Effectiveness, Adoption, Implementation, and Maintenance) framework is often used to monitor the success of intervention effectiveness, dissemination, and implementation in real-life settings ( 35 ), and has been used to guide several mixed method implementation studies ( 6 , 50 , 54 , 79 , 82 , 85 ). The RE-AIM QuEST framework represents an attempt to provide guidelines for the systematic application of quantitative and qualitative data for summative evaluations of each of the five dimensions. These guidelines may also be used in conducting formative evaluations to help guide the process of implementation by identifying and addressing barriers in real time.

Forman and colleagues ( 31 ) applied this framework for both real-time and retrospective evaluation in a pragmatic cluster RCT of the Adherence and Intensification of Medications (AIM) program. Researchers found that the QuEST framework expanded RE-AIM in three fundamental ways: 1) allowing investigators to understand whether Reach, Adoption and Implementation varied across and within sites; 2) expanding retrospective evaluation of effectiveness by examining why the intervention worked or failed to work and explaining which components of the intervention or the implementation context may have been barriers; and 3) explicating whether and in which ways the intervention was maintained. This information permitted researchers to improve implementation during the intervention and to inform the design of future interventions.

Procedures for Participant Sampling

Purposeful sampling is widely used in qualitative research for the identification of information-rich cases related to the phenomenon of interest ( 18 , 71 ). While criterion sampling is used most commonly in implementation research ( 68 ), combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods ( 8 , 27 ). Palinkas and colleagues ( 68 ) reviewed the principles and practice of purposeful sampling in implementation research, summarized types and categories of purposeful sampling strategies and provided the following recommendations: 1) use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative; 2) a multistage strategy for purposeful sampling should begin first with a broader view with an emphasis on variation or dispersion and move to a narrow view with an emphasis on similarity or central tendencies; 3) selection of a single or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample, either for the purpose of answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case, a strategy emphasizing similarity and central tendencies is preferred); 4) all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity (i.e., centrality) and differences (i.e., dispersion); and 5) although quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy, each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements.
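
As a simple illustration of a multistage purposeful sampling strategy that moves from variation to similarity, the sketch below first selects for maximum variation across hypothetical site characteristics and then applies a criterion; the sites, variables, and cutoff are illustrative only.

```python
# A two-stage purposeful sampling sketch: maximum variation across hypothetical
# site characteristics first, then a criterion-based narrowing. Sites, variables,
# and the cutoff are illustrative.
import pandas as pd

sites = pd.DataFrame({
    "site":     ["S1", "S2", "S3", "S4", "S5", "S6"],
    "setting":  ["urban", "rural", "urban", "rural", "urban", "rural"],
    "size":     ["large", "small", "small", "large", "large", "small"],
    "adoption": [0.9, 0.2, 0.7, 0.4, 0.8, 0.1],  # proportion of staff using the EBP
})

# Stage 1: maximum variation -- one site per setting-by-size stratum
stage1 = sites.groupby(["setting", "size"], group_keys=False).head(1)

# Stage 2: criterion sampling within the varied pool (adoption above a cutoff)
stage2 = stage1[stage1["adoption"] >= 0.5]

print("Stage 1 (variation):", list(stage1["site"]))
print("Stage 2 (criterion):", list(stage2["site"]))
```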

Challenges of Integrating Quantitative and Qualitative Methods

Conducting integrated mixed methods research poses several challenges, from design to analysis and dissemination. Given the many methodological configurations possible, as described above, careful thought about optimal design should occur early in the process in order to have the potential to integrate methods when deemed appropriate to answer the research question(s). Considerations must include resources (e.g., time, funding, expertise; see 58 ), as integrated mixed methods studies tend to be complex and non-linear. After launching an integrated mixed methods study, the team needs to consistently evaluate the extent to which the mixed methods intentions are being realized, as the tendency in this type of study is to work (e.g., collect data) in parallel, even through analysis, only then to find that the sources of data are not reconcilable and the potential of the mixed methods design is not reached. This lack of integration may result in separate publications with quantitative and qualitative results rather than integrated mixed methods papers. Several sets of guidelines and critiques are available to facilitate high-quality integrated mixed methods products (e.g., 17 , 60 , 61 ).

Another consideration in integrating the two sets of methods lies in assessing the advantages and disadvantages of doing so with respect to data collection. Of course, there are tradeoffs involved with each method introduced here. For instance, Rapid Assessment Procedures enable more time-efficient data collection but require more coordination of multiple data collectors to ensure consistency and reliability. Rapid Assessment Procedure – Informed Clinical Ethnography also enables time-efficient field observation and review procedures that constitute ideal “nimble” mixed method approaches for the pragmatic trial, along with minimizing participant burden, allowing for real-time workflow observations, more opportunities to conduct “repeated measures” of qualitative data through multiple site visits, and greater transparency in the integration of investigator and study participant perspectives on the phenomena of interest. However, it discourages use of semi-structured interviews or focus groups that may allow for the collection of data that would provide greater depth of understanding. Concept mapping offers a structured approach to data collection designed to facilitate quantification and visualization of salient themes or constructs, at the expense of a semi-structured approach that may provide greater depth of understanding of the phenomenon of interest. Collection of qualitative data on implementation and sustainment processes and outcomes can be used to validate, complement, expand, and develop quantitative measures such as the SIC, SMS and RE-AIM, but can potentially involve additional time and personnel for minimal benefit. The advantages and disadvantages of each method must be weighed when deciding whether or not to use them for evaluation.

Finally, consideration must be given to identifying opportunities for the appropriate use of the innovative methods introduced in this article. Table 2 below outlines the range of mixed method functions, research foci, and study designs for each innovative method. For example, Rapid Assessment Procedures could be used to achieve the functions of convergence, complementarity, expansion and development, and to assess both process and outcomes in effectiveness and implementation studies. However, we expect that these methods can and should be applied in ways we have yet to anticipate. Similarly, new innovative methods will inevitably be created to accommodate the functions, foci and designs of mixed methods evaluations.

Table 2. Opportunities for use of innovative methods in mixed methods evaluations based on function, focus and design.

| Method | Mixed method function | Focus | Design |
|---|---|---|---|
| Collecting QUAL Data | | | |
| RAP | Convergence, Complementarity, Expansion, Development | Process, Outcomes | Effectiveness/implementation |
| RAPICE | Convergence, Complementarity, Expansion, Development | Process, Outcomes | Effectiveness/implementation |
| Analyzing (Quantitizing) QUAL Data | | | |
| Concept Mapping | Development | Predictors | Effectiveness/implementation |
| Qualitative Comparative Analysis | Development | Predictors, Outcomes | Effectiveness/implementation |
| Implementation Frameworks | Expansion | Predictors | Implementation |
| Measuring Evaluation Outcomes | | | |
| Stages of Implementation Completion | Development, Convergence, Complementarity, Expansion | Outcomes, Process | Implementation |
| Sustainment Measurement System | Development, Convergence, Complementarity, Expansion | Outcomes, Process | Implementation |
| RE-AIM QuEST | Convergence, Complementarity, Expansion | Outcomes | Effectiveness/implementation |
| Sampling | Sampling | Predictors, Process, Outcomes | Effectiveness/implementation |

As evaluation research evolves as a discipline, the methods used by evaluation researchers must evolve as well. Evaluations are performed to achieve a better understanding of policy, program or practice effectiveness and implementation. They assess not just the outcomes associated with these activities, but the process and the context in which they occur. Mixed methods are central to this evolution ( 16 , 57 , 96 ). As they facilitate innovations in research and advances in the understanding gained from that research, so they must also change, adapt, and evolve. Options for determining suitability of particular designs are becoming increasingly sophisticated and integrated. This review summarizes only a fraction of the innovations currently underway. With each new application of mixed methods in evaluation research, the need for further change, adaptation and evolution becomes apparent. The key to the future of mixed methods research will be to continue building on what has been learned and to replicate designs that produce the most robust outcomes.

Acknowledgments

We are grateful for support from the National Institute on Drug Abuse (NIDA) (R34DA037516–01A1, L Palinkas, PI and P30DA027828, C. Hendricks Brown, PI) and the Department of Veterans Affairs (QUE 15–272, A. Hamilton, PI).

Disclosure Statement

The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.

What is evaluation research: Methods & examples

Defne Çobanoğlu

You have created a program or a product that has been running for some time, and you want to check how efficient it is. You can conduct evaluation research to get the insights you want about the project. And there is more than one method for obtaining this information.

Afterward, when you have collected the appropriate data about the program's effectiveness, budget-friendliness, and customer opinions, you can go one step further. The valuable information you collect from the research allows you to form a clear idea of what to do next. You can discard the project, upgrade it, make changes, or replace it. Now, let us go into detail about evaluation research and its methods.

  • First things first: Definition of evaluation research

Basically, evaluation research is a research process where you measure the effectiveness and success of a particular program, policy, intervention, or project. This type of research lets you know whether the goal of that program or product was met successfully and shows you any areas that need improvement . The data gathered from evaluation research gives good insight into whether the time, money, and energy put into the project are worth it.

The findings from evaluation research can be used to make decisions about whether to continue, modify, or discontinue a program and how to improve future programs or interventions . In other words, it means doing research to evaluate the quality and effectiveness of the overall project.


Why conduct evaluation research & when?

Conducting evaluation research is an effective way of testing the usability and cost-effectiveness of the current project or product. Findings gathered from evaluative research play a key role in assessing what works and what doesn't and in identifying areas of improvement for sponsors and administrators. This type of evaluation is a good means of data collection, and it provides concrete results for decision-making processes.

There are different methods to collect feedback ranging from online surveys to focus groups. Evaluation research is best used when:

  • You are planning a different approach
  • You want to make sure everything is going as you want it to
  • You want to prove the effectiveness of an activity to the stakeholders and administrators
  • You want to set realistic goals for the future
  • Methods to conduct evaluation research

When you want to conduct evaluation research, there are different types of evaluation research methods . You can go through the possible methods and choose the most suitable one(s) according to your target audience, workforce, and budget before proceeding with the research steps. Let us look at the qualitative and quantitative research methodologies.

Quantitative methods

These are methods that ask questions to get tangible answers, relying on numerical data and statistical analysis to draw conclusions . These questions can be “How many people?”, “What is the price?”, “What is the profit rate?”, etc. Therefore, they provide researchers with quantitative data from which to draw concrete conclusions. Now, let us look at the quantitative research methods.

1 - Online surveys

Surveys involve collecting data from a large number of people using appropriate evaluation questions to gather accurate feedback . This method allows you to reach a wider audience in a short time and in a cost-effective manner. You can ask about various topics, from user satisfaction to market research. It can also be quite helpful to use a free survey maker such as forms.app for your next research project!

2 - Phone surveys

Phone surveys are a type of survey that involves conducting interviews with participants over the phone . They are a form of quantitative research and are commonly used by organizations and researchers to collect data from people in a short time. During a phone survey, a trained interviewer will call the participant and ask them a series of questions. 

Qualitative methods

This type of research method aims to explore audience feedback in depth. These methods are used to study phenomena that cannot be easily measured using statistical techniques, such as opinions, attitudes, and behaviors . Techniques such as observation, interviews, and case studies are used to carry out this kind of evaluation.

1 - Case studies

Case studies involve the analysis of a single case or a small number of cases to be explored further. In a case study, the researcher collects data from a variety of sources, such as interviews, observations, and documents. The data collected from case studies are often analyzed to identify patterns and themes .

2 - Focus groups

Using focus groups means gathering a small group of people and presenting them with a certain topic. A focus group usually consists of 6-10 people who are introduced to a topic, product, or concept and then share their views . Focus groups are a good way to obtain data because the responses are immediate. This method is commonly used by businesses to gain insight into their customers.

  • Evaluation research examples

Conducting evaluation research has helped many businesses advance in the market, because a big part of success comes from listening to your audience. For example, Lego found out that only around 10% of its customers were girls in 2011. The company wanted to expand its audience, so Lego conducted evaluation research to find and launch products that would appeal to girls.

  • Surveys questions to use in your own evaluation research

No matter the type of method you decide to go with, there are some essential questions you should include in your research process. If you prepare your questions beforehand and ask the same questions to all participants/customers, you will end up with a uniform set of answers. That will allow you to form a better judgment. Now, here are some good questions to include:

1  - How often do you use the product?

2  - How satisfied are you with the features of the product?

3  - How would you rate the product on a scale of 1-5?

4  - How easy is it to use our product/service?

5  - How was your experience completing tasks using the product?

6  - Will you recommend this product to others?

7  - Are you excited about using the product in the future?

8  - What would you like to change in the product/project?

9  - Did the program produce the intended outcomes?

10  - What were the unintended outcomes?

  • What’s the difference between generative vs. evaluative research?

Generative research is conducted to generate new ideas or hypotheses by understanding your users' motivations, pain points, and behaviors. The goal of generative research is to define possible research questions, develop new theories, and plan the best possible solutions to those problems . Generative research is often used at the beginning of a research project or product development.

Evaluative research, on the other hand, is conducted to measure the effectiveness of a project or program. The goal of evaluative research is to measure whether the existing project, program, or product has achieved its intended objectives . This method is used to assess the project at hand to ensure it is usable, works as intended, and meets users' demands and expectations. This type of research plays a role in deciding whether to continue, modify, or put an end to the project.

You can determine whether to use generative or evaluative research by figuring out what you need to find out. Of course, both methods can be useful throughout the research process for obtaining different types of evidence. Therefore, first determine your goal for conducting the research, and then decide on the method to go with.

Conducting evaluation research means making sure everything in your project is going as you want it to, or finding areas of improvement for your next steps. There is more than one method to choose from. You can run focus groups or case studies to collect opinions, or you can conduct online surveys to get tangible answers.

If you choose to do online surveys, you can try forms.app, as it is one of the best survey makers out there. It has more than 1000 ready-to-go templates. If you wish to know more about forms.app, you can check out our article on user experience questions !


Quantitative Research Methods


 The purpose of this guide is to provide a starting point for learning about quantitative research. In this guide, you'll find:

  • Resources on diverse types of quantitative research.
  • An overview of resources for data, methods & analysis
  • Popular quantitative software options
  • Information on how to find quantitative studies

Research involving the collection of data in numerical form for quantitative analysis. The numerical data can be durations, scores, counts of incidents, ratings, or scales. Quantitative data can be collected in either controlled or naturalistic environments, in laboratories or field studies, from special populations or from samples of the general population. The defining factor is that numbers result from the process, whether the initial data collection produced numerical values, or whether non-numerical values were subsequently converted to numbers as part of the analysis process, as in content analysis.

Citation: Garwood, J. (2006). Quantitative research. In V. Jupp (Ed.), The SAGE dictionary of social research methods. (pp. 251-252). London, England: SAGE Publications. doi:10.4135/9780857020116
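
As the definition above notes, non-numerical values can be converted to numbers as part of the analysis. A minimal sketch of that step, using a hypothetical Likert-style codebook:

```python
# Converting non-numerical survey responses to numbers with a simple codebook;
# the response categories and codes are hypothetical.
responses = ["agree", "strongly agree", "disagree", "agree", "neutral"]

codebook = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
            "agree": 4, "strongly agree": 5}
scores = [codebook[r] for r in responses]

mean_score = sum(scores) / len(scores)
print(scores, "mean =", round(mean_score, 2))
```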


Correlational

Researchers will compare two sets of numbers to try and identify a relationship (if any) between two things.
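
A minimal sketch of such a comparison, using hypothetical values and Pearson's correlation coefficient:

```python
# Comparing two sets of numbers with Pearson's correlation coefficient;
# the values are hypothetical.
from statistics import correlation  # available in Python 3.10+

hours_studied = [2, 4, 6, 8, 10]
exam_scores = [55, 60, 72, 78, 85]

r = correlation(hours_studied, exam_scores)  # Pearson's r by default
print(f"Pearson r = {r:.2f}")  # values near +1 or -1 indicate a strong relationship
```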

Descriptive

Researchers will attempt to quantify a variety of factors at play as they study a particular type of phenomenon or action. For example, researchers might use a descriptive methodology to understand the effects of climate change on the life cycle of a plant or animal.

Experimental

To understand the effects of a variable, researchers will design an experiment where they can control as many factors as possible. This can involve creating control and experimental groups. The experimental group will be exposed to the variable to study its effects. The control group provides data about what happens when the variable is absent. For example, in a study about online teaching, the control group might receive traditional face-to-face instruction while the experimental group would receive their instruction virtually.
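
A minimal sketch of comparing the two groups in that example, using hypothetical scores:

```python
# Comparing a control group with an experimental group, as in the online-teaching
# example above; the scores are hypothetical.
from statistics import mean, stdev

face_to_face = [72, 68, 75, 70, 74]  # control group exam scores
online = [78, 74, 80, 76, 79]        # experimental group exam scores

diff = mean(online) - mean(face_to_face)
print(f"Mean difference = {diff:.1f} points "
      f"(control SD = {stdev(face_to_face):.1f}, experimental SD = {stdev(online):.1f})")
```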

Quasi-Experimental/Quasi-Comparative

Researchers will attempt to determine what (if any) effect a variable can have. These studies may have multiple independent variables (causes) and multiple dependent variables (effects), but this can complicate researchers' efforts to find out if A can cause B or if X, Y, and Z are also playing a role.

Survey

Surveys can be considered a quantitative methodology if the researchers require their respondents to choose from pre-determined responses.



CRJU 202: Research Methods in Criminology and Criminal Justice


Useful Databases

  • Criminal Justice Abstracts with Full Text: Indexing and full text of resources related to criminal justice and criminology.
  • Sociological Abstracts: Abstracts and indexes the international literature in sociology and related disciplines in the social and behavioral sciences.
  • PsycInfo: The premier database for psychology and related disciplines such as medicine, psychiatry, education, social work, law, criminology, social science, and organizational behavior.

Database Search Tips

 Research Tip: How to Narrow Your Search

  • women AND crime
  • gender AND sentencing
  • "public defender"
  • "self evaluation"

 Research Tip: How to Broaden Your Search

  • women OR girl OR female
  • adolescent OR teenager
  • delinquen* (searches for delinquent, delinquents, delinquency, etc.)

In most databases, the truncation symbol is *
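
If you script your searches, the same Boolean logic can be assembled programmatically; a minimal sketch with illustrative keyword groups:

```python
# Assembling a Boolean search string like the examples above; keyword groups
# are illustrative.
synonyms = ["women", "girl", "female"]
topic = "delinquen*"  # truncation picks up delinquent, delinquency, ...

query = f"({' OR '.join(synonyms)}) AND {topic}"
print(query)  # (women OR girl OR female) AND delinquen*
```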

 Research Tip: Database Exploration

  • Start your search by doing keyword searches.
  • Look at the subject headings of relevant records to determine the terminology used in the database for your topic. 
  • Use Limits  to limit results to Scholarly/Peer Reviewed articles, by publication date, and more. 
  • Study Protocol
  • Open access
  • Published: 19 August 2024

Evaluating the impact of the global evidence, local adaptation (GELA) project for enhancing evidence-informed guideline recommendations for newborn and young child health in three African countries: a mixed-methods protocol

  • Tamara Kredo   ORCID: orcid.org/0000-0001-7115-9535 1 , 2 , 3 , 4 ,
  • Emmanuel Effa 5 ,
  • Nyanyiwe Mbeye 6 ,
  • Denny Mabetha 1 ,
  • Bey-Marrié Schmidt 1 , 7 ,
  • Anke Rohwer 2 ,
  • Michael McCaul 2 ,
  • Idriss Ibrahim Kallon 2 ,
  • Susan Munabi-Babigumira 8 ,
  • Claire Glenton 8 ,
  • Taryn Young 2 ,
  • Simon Lewin 1 , 9 ,
  • Per Olav Vandvik 10 , 11 &
  • Sara Cooper 2 , 4 , 12  

Health Research Policy and Systems, volume 22, Article number: 114 (2024)


Poverty-related diseases (PRD) remain amongst the leading causes of death in children under-5 years in sub-Saharan Africa (SSA). Clinical practice guidelines (CPGs) based on the best available evidence are key to strengthening health systems and helping to enhance equitable health access for children under five. However, the CPG development process is complex and resource-intensive, with substantial scope for improving the process in SSA, which is the goal of the Global Evidence, Local Adaptation (GELA) project. The impact of research on PRD will be maximized through enhancing researchers and decision makers’ capacity to use global research to develop locally relevant CPGs in the field of newborn and child health. The project will be implemented in three SSA countries, Malawi, South Africa and Nigeria, over a 3-year period. This research protocol is for the monitoring and evaluation work package of the project. The aim of this work package is to monitor the various GELA project activities and evaluate the influence these may have on evidence-informed decision-making and guideline adaptation capacities and processes. The specific project activities we will monitor include (1) our ongoing engagement with local stakeholders, (2) their capacity needs and development, (3) their understanding and use of evidence from reviews of qualitative research and, (4) their overall views and experiences of the project.

We will use a longitudinal, mixed-methods study design, informed by an overarching project Theory of Change. A series of interconnected qualitative and quantitative data collections methods will be used, including knowledge translation tracking sheets and case studies, capacity assessment online surveys, user testing and in-depth interviews, and non-participant observations of project activities. Participants will comprise of project staff, members of the CPG panels and steering committees in Malawi, South Africa and Nigeria, as well as other local stakeholders in these three African countries.

Ongoing monitoring and evaluation will help ensure the relationship between researchers and stakeholders is supported from the project start. This can facilitate achievement of common goals and enable researchers in South Africa, Malawi and Nigeria to make adjustments to project activities to maximize stakeholder engagement and research utilization. Ethical approval has been provided by South African Medical Research Council Human Research Ethics Committee (EC015-7/2022); The College of Medicine Research and Ethics Committee, Malawi (P.07/22/3687); National Health Research Ethics Committee of Nigeria (01/01/2007).


Sub-Saharan Africa (SSA) has the highest under-five mortality rate in the world [ 1 ]. Although the global under-five mortality rate declined from 76 to 38 per 1000 live births between 2000 and 2019, more than half of the deaths in children and youth in 2019 were among children under 5 years, approximately 5.2 million deaths [ 1 ]. Poverty-related diseases including pneumonia, diarrhoea and malaria remain amongst the leading causes of death in children under-5 years [ 2 ]. Thus, despite progress in the health of young children globally, most countries in SSA fall below the average gains and do not meet maternal and child health targets set by the United Nations Sustainable Development Goal 3 to ‘ensure healthy lives and promote wellbeing’ [ 1 ]. As of December 2021, under-five mortality rates were reported as 113.8, 38.6 and 32.2 per 1000 live births for Nigeria, Malawi and South Africa, respectively [ 3 ]. Factors accounting for regional disparities in child mortality rates include poverty, socioeconomic inequities, poor health systems and poor nutrition, with coronavirus disease 2019 (COVID-19) adding substantially to the burden [ 4 ].

Addressing healthcare issues such as these requires an evidence-informed approach, where intervention design and implementation are based on the best available evidence, to ensure that scarce resources are used effectively and efficiently, avoid harm, maximize good and improve healthcare delivery and outcomes [ 5 , 6 , 7 ]. Evidence-informed practices have been growing in SSA [ 6 ], and evidence ecosystems are becoming stronger. The evidence ecosystem reflects the formal and informal linkages and interactions between different actors (and their capacities and resources) involved in the production, translation and use of evidence [ 6 , 8 , 9 ]. Guidance that can be developed through this ecosystem includes evidence-based health technology assessments (HTA) and clinical practice guidelines (CPGs). CPGs include recommendations that are actionable statements that are informed by systematic reviews of evidence, and an assessment of the benefits and harms of alternative care options and are intended to optimize patient care [ 10 ]. They can help bridge the gap between research evidence and practice and are recognized as important quality-improvement tools that aim to standardize care, inform funding decisions and improve access to care, among others.

CPG method advancements, challenges and research gaps

Over the past decade, internationally and in SSA, there has been a rapid growth of CPGs developed for a range of conditions [ 11 ]. In particular, rapid evidence synthesis and guideline development methods have advanced in response to urgent evidence needs, especially during the COVID-19 pandemic [ 12 , 13 ]. For example, WHO has developed guidelines for all key infectious conditions that cause most deaths. This development has been accompanied by a growing volume of research evidence around CPGs, including the processes for their rapid development, adaptation, contextualization, implementation and evaluation, further spurred on by COVID-19. For example, global knowledge leaders, such as the WHO and the GRADE Working Group, have set standards for CPG development, outlining the steps of what is known as ‘de novo’ (from scratch) CPG development [ 14 ]. Another global group, the Guidelines International Network (G-I-N), is a network dedicated to leading, strengthening and supporting collaboration in CPG development, adaptation and implementation. They have published minimum standards and the G-I-N McMaster guideline checklist, which contains a comprehensive list of topics and items outlining the practical steps to consider when developing CPGs [ 15 ].

As CPG standards have evolved, however, so has the complexity of development and adaptation. In the context of poorer settings, such as sub-Saharan Africa (SSA), CPG development is prohibitively resource intensive in terms of both human and financial resources. It requires scarce skills, even in the growing evidence-based healthcare (EBHC) community, and financial investment by governments, where resources are often directed to healthcare services rather than to policymaking processes. Against this backdrop, several studies have found that CPGs in the region often perform poorly on reporting of their rigour of development and editorial independence [ 16 , 17 , 18 ]. Other, more resource-efficient methods for guideline development in SSA are therefore essential and urgently needed. Moreover, investment in the overall management of the process is needed, including convening the guideline group and moving stepwise through a rigorous process.

Approaches for and challenges of guideline adaptation

There is also increased international recognition of the value of taking guidelines developed in one country and applying them to other countries. This can avoid duplication of effort and research waste in de novo guideline development when useful guidelines may exist elsewhere [ 12 , 19 ]. Against this backdrop, several adaptation methods are emerging for contextualization of recommendations to country needs (e.g. ADAPTE, adolopment and SNAP-it, amongst others) [ 19 , 20 , 21 ]. For example, WHO is developing strategies for adapting and implementing their CPGs at country level. One example is the WHO Antenatal Care Recommendations Adaptation Toolkit led by the Department of Sexual and Reproductive Health and Research [ 22 ]. Their approach is pragmatic and transparent. Another approach is so-called ‘adolopment’, a GRADE method in which the original guideline evidence is used, either adopted or adapted, considering contextual evidence such as costs, feasibility and local values [ 20 ]. Adolopment involves convening a guideline panel, reviewing available evidence and local contextual evidence and weighing up the panel’s judgements to make recommendations that are fit for purpose [ 20 ].

Despite these advances in CPG adaptation methods, many countries and professional associations in sub-Saharan Africa still use expert opinion-based approaches or proceed to prepare their own systematic reviews and guidelines, ultimately perpetuating resource wastage and duplication of efforts [ 23 ]. Moreover, when countries do adapt and contextualize other countries’ guidelines, there is frequently a lack of transparency and reporting on changes, without clarity on why or by whom. This in turn casts doubts on the recommendation’s credibility. For example, guidelines for child health in sub-Saharan Africa are usually derived from the WHO and UNICEF. However, adaptation of such guidelines and recommendations to national contexts is not well described [ 24 ]. Transparency in guideline adaptation is critical for creating trustworthy, context-sensitive recommendations. What guideline adaptation methods work best and how these can be transparently implemented in the context of lower resource settings, remain key research questions. Therefore, despite the emergence of several guideline adaptation approaches, we need to explore and understand how best to adapt recommendations from one context to another [ 25 ].

Qualitative evidence to inform guideline panels decisions

Another major advancement within guideline research has been growing recognition of the potential contribution of qualitative research evidence [ 26 , 27 ]. Traditionally, guidelines have been informed by systematic reviews of the effectiveness of specific interventions [ 14 ]. Such reviews provide robust evidence about which interventions ‘work’. However, there is appreciation that evidence regarding the potential effectiveness of an intervention is not sufficient for making recommendations or decisions. Policymakers also need to consider other issues, including how different stakeholders’ value different outcomes, the intervention’s acceptability to those affected by it and the feasibility of implementing the intervention [ 28 , 29 , 30 ]. Evidence from qualitative research is particularly well suited to exploring factors that influence an intervention’s acceptability and feasibility [ 31 , 32 ]. The use of qualitative research to inform recommendations by guidelines has become easier in recent years as systematic reviews of qualitative studies have become more common, and the methods for these reviews are now well developed [ 33 ]. The first WHO guideline to systematically incorporate reviews of qualitative studies was published in 2012 in the field of task-shifting for maternal and child health [ 31 ]. The inclusion of this qualitative evidence helped shape the panel’s recommendations [ 32 ], and this approach is now included in the WHO Handbook for Guideline Development and has been applied in many other WHO CPGs [ 34 , 35 ].

However, a key challenge in using findings from systematic reviews of qualitative evidence is communicating often complex findings to users such as guideline panel members to facilitate effective knowledge translation. While there is now considerable research on communicating findings from reviews of intervention effectiveness [ 36 ], there is limited experience on the usefulness of different options for packaging and presenting findings from systematic reviews of qualitative evidence to CPG panels. To make best use of this evidence, we need presentation formats that are accessible to users who may be unfamiliar with qualitative methods, are concise and simple while retaining sufficient detail to inform decisions and clearly present ‘confidence in the evidence from systematic reviews of qualitative evidence’ (GRADE-CERQual) assessments of how much confidence users should place in each finding [ 37 ]. In addition, we need to understand how qualitative evidence included in global guidelines, such as those produced by WHO, is interpreted and used in country-level guideline adaptation processes.

Communicating clinical practice guidelines to end-users

A final key guideline method advancement has been the development of multi-layered and digitally structured communication formats for end users [ 38 , 39 ]. Guidelines are not an end in themselves. Recommendations may lack impact if not adequately communicated and disseminated to those who need to implement them, namely healthcare providers, managers and the public. Indeed, in a South African study of primary care guidelines, national policymakers, subnational health managers and healthcare providers agreed that dissemination is a particular gap [ 40 ]. While guidelines typically are produced as static documents (e.g. PDF formats), information technology is needed to enhance dissemination. The MAGIC authoring and publication platform (MAGICapp) was developed for this purpose ( https://magicevidence.org/magicapp/ ). MAGICapp is a web-based tool that enables evidence synthesizers and guideline organizations to create, publish and dynamically update trustworthy and digitally structured evidence summaries, guidelines and decision aids in user-friendly formats on all devices. Such digital multi-layered formats allow different users to rapidly find recommendations, while having the supporting evidence for them one click away [ 41 ]. MAGICapp, used by WHO, NICE and professional societies across the world, holds potential to enhance the impact of evidence-informed guideline recommendations in practice within an enhanced evidence ecosystem [ 9 ]. However, the usability of MAGICapp in sub-Saharan Africa, based on local user preferences for different communication formats, is a key research question.

Against this backdrop, the Global Evidence, Local Adaptation (GELA) project will maximize the impact of research on poverty-related diseases through enhancing researchers and decision makers’ capacity to use global research to develop locally relevant guidelines for newborn and child health in Malawi, Nigeria and South Africa. These guidelines will build on and add value to the large-scale programme of child health guideline development from agencies such as the WHO, to support adaptation and implementation led by national ministries in collaboration with WHO Afro regional office.

Brief overview of the GELA project aim, objectives and approach

The overarching aim of GELA is to bridge the gap between current processes and global advances in evidence-informed decision-making and guideline development, adaptation and dissemination by building skills and sharing resources in ways that can be sustained beyond the project period. The project has seven linked and related work packages (WPs) to support delivery of the planned project deliverables. Table 1 provides a brief summary of the activities of each WP. This protocol outlines our approach for the monitoring and overall evaluation of the project activities and impact (WP 6).

The project will be implemented in three SSA countries: Malawi, South Africa and Nigeria, over a 3-year period. The project adopts a multi-faceted, multidisciplinary research and capacity strengthening programme using primary and secondary research, guideline adaptation methodology and digital platforms to support authoring, delivery and dynamic adaptation. These processes will offer bespoke capacity strengthening opportunities for policymakers, researchers and civil society. Throughout the project, we plan for innovations in the tools we use, accompanied by comprehensive evaluation of all aspects of the research, research uptake into policy and capacity strengthening.

This current proposal is for WP6: monitoring and evaluation

Ongoing monitoring and evaluation of project processes and activities will help facilitate ongoing engagement between researchers and stakeholders throughout the research project. This will in turn help ensure that the project is centred on a common goal, with clear understandings of the different research activities and potential impact. This can also promote research uptake and enable researchers to make adjustments to project activities, maximizing stakeholder engagement and research utilization.

M&E aims & objectives

The overarching aim of the monitoring and evaluation work package is to monitor and evaluate the various GELA project activities and processes, including whether, how and why activities took place or if goals were met.

The specific monitoring and evaluation objectives are to:

Monitor ongoing engagement with local stakeholders across work packages and explore what worked and didn’t and why;

Assess the capacity development needs of guideline panels and steering group committees and explore their views and experiences of the project’s capacity development activities;

Explore guideline panelists’ experiences with reading and using evidence from reviews of qualitative research, including their preferences regarding how qualitative review findings are summarized and presented;

Evaluate guideline panelists’, steering group committees’ and project team members’ overall views and experiences of the project, including what works or not to influence evidence-informed decision-making and guideline adaptation processes.

Overall approach

We will use a longitudinal, mixed-methods study design, informed by an overarching project Theory of Change (Table 2). The theoretical underpinning for the GELA project across all work packages is related to the three-layered behaviour change wheel comprising opportunity, capability and motivation [ 42 ]. The design, delivery and implementation of multi-stakeholder integrated activities based on identified priority areas and needs is expected to lead to improved guideline-related capacity, practice and policy within each country’s health system. Certain objectives also have specific underpinning theoretical frameworks, in addition to the overarching project Theory of Change, which are explained under the respective objectives below. A series of interconnected qualitative and quantitative data collection methods will be used to address each objective.

In what follows, we describe each objective and the methods we will use to achieve it, separately. However, in many cases the qualitative data collection cuts across objectives, with the same interviews and observations being used to explore multiple issues simultaneously (e.g. knowledge translation, capacity, overall views and experiences of the project, etc.). The relationship between the different objectives and associated methods are depicted in Tables 3 and 4 . Table 3 outlines the stakeholder groups included in the monitoring and evaluation work package, including their composition and for which objectives they are targeted. Table 4 provides the timeline for the different data collection methods and how they relate to each across the objectives.

1. Objective 1: monitor ongoing engagement with local stakeholders across work packages and explore what worked and did not work and why

Overall approach for this objective.

This objective will be guided by an integrated knowledge translation (IKT) approach. IKT focuses on the important role of stakeholder engagement in enhancing evidence-informed decision-making [ 43 ]. As part of work package 4 (‘dissemination and communication’), knowledge translation (KT) champions have been identified in each of the three countries and will work together to develop and implement country-level KT strategies. This will include defining KT objectives, identifying and mapping relevant stakeholders, prioritizing those we will actively engage and developing a strategy for engaging each priority stakeholder. We will monitor these engagements through the development and implementation of a tracking sheet, qualitative case studies and semi-structured interviews.

Participants

Participants will comprise of knowledge translation (KT) champions and relevant country-level stakeholders. KT champions are GELA project staff who have dedicated time to work on the communication, dissemination and engagement aspects at a country-level. At least one KT champion has been identified for each of Malawi, Nigeria and South Africa.

Relevant country-level stakeholders will be identified as part of the KT strategy development (WP4) and will comprise any health decision-makers, e.g. health practitioners, community groups, health system managers, policy-makers, researchers and media.

Tracking sheet and qualitative case studies

A tracking sheet will be used to capture information for each stakeholder related to the purpose, message, medium or forum, messenger, timing and resources for engagement. KT champions in each country will be responsible for tracking these details on a continuous basis, and the tracking sheet will be monitored bi-monthly at a meeting with KT champions from the three country teams. This will help us monitor whether and how engagement activities are taking place, as well as the strategies for implementation. The tracking sheets will cover different in-country stakeholders (e.g. government officers, health professional associations, researchers, media, etc.), and there may be several goals for engaging each individual stakeholder. The engagement strategy will be reviewed and updated as priority stakeholders change over the research stages and project period. As such, the sample size will be determined iteratively.

We will analyse information with descriptive statistics. For example, we will group and count by categories: number and type of stakeholders, type of engagement activities, type of KT products produced, type of forum or medium used for dissemination, frequency and duration of engagement, follow-ups, intensive engagement period and resources required for engagement.
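
A minimal sketch of such descriptive summaries, assuming a hypothetical tracking sheet with columns for country, stakeholder, activity and medium:

```python
# Summarizing a stakeholder-engagement tracking sheet with descriptive statistics;
# the columns and entries are hypothetical.
import pandas as pd

tracking = pd.DataFrame({
    "country":     ["Malawi", "Malawi", "Nigeria", "South Africa", "Nigeria"],
    "stakeholder": ["ministry", "media", "ministry", "researchers", "clinicians"],
    "activity":    ["briefing", "press release", "briefing", "webinar", "workshop"],
    "medium":      ["in person", "online", "in person", "online", "in person"],
})

# Group and count by category, as described above
print(tracking.groupby(["country", "stakeholder"]).size().rename("n_engagements"))
print(tracking["medium"].value_counts())
```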

We will also develop case stories (or impact stories) describing engagement activities and processes between project staff and relevant stakeholders. The case studies will help us monitor successful engagement, disseminate best practice scenarios and draw out lessons for future engagements. We will identify case stories through the tracking sheet and at bi-monthly meetings with the KT co-ordinator, where KT champions will be asked to share success stories or learning moments. KT champions will not know which ‘case’ will be selected for the case study in advance. The information will be collected by the KT co-ordinator, who is not involved in any of the country strategy implementation. The information collected from the KT champions (and messenger, if the messenger is not the KT champion) will be via a standard case story template, including aim of engagement, what the engagement was, experiences from both sides (quotes to be included in stories), success of engagement, lessons learnt and any future engagement plans. The number of cases will be determined iteratively. The intention is to develop one case story from each country annually, showcasing different cases, e.g. type of KT goal, type of stakeholder, type of KT medium/forum, etc.

Semi-structured interviews

At project close (month 30), we will conduct semi-structured interviews to explore if, why and how project KT goals were met and which planned stakeholder engagements worked (and did not work) and why. The interviews will be conducted with KT champions, other messengers (e.g. communication officers), country leads and selected stakeholders. At least two people from each country (KT champion and messenger and/or stakeholder) will be interviewed, so there will be six to eight interviews in total. Participants will be selected purposively for information-rich cases that can help yield insights and in-depth understanding of the nature and success (or not) of our stakeholder engagements [ 44 ].

These interviews will form part of the interviews conducted with project team members more broadly as part of objective 4, the methods of which are therefore described in more detail below.

2. Objective 2: assess the capacity development needs of guideline panels and steering group committees and explore their views and experiences of the project’s capacity development activities.

Overarching theoretical lens.

We will draw on the Kirkpatrick model [ 45 ] as the underpinning theoretical framework for this objective. This model evaluates training effectiveness across four levels: (1) reaction, (2) learning, (3) behaviour and (4) results. The ‘reaction level’ assesses the degree of satisfaction of participants with the training event. The ‘learning level’ examines learning among participants both before and after the training event to determine any change in knowledge [ 46 , 47 ]. The ‘behaviour level’ assesses whether the training event has provided any favourable change in behaviour among participants. The final ‘results level’ assesses the use of knowledge gained through the training event within the workplace [ 46 , 47 ].

To assess the potential difference that project capacity development activities make, the outcomes of interest will be those related to training in evidence-based healthcare (EBHC). An overview of systematic reviews by Young and colleagues identified that EBHC training often aims to ‘improve critical appraisal skills and integration of results into decisions, and improved knowledge, skills, attitudes and behaviour among practising health professionals’ [ 48 , 49 ].

We will employ mixed methods to achieve this objective, including three rounds of online surveys (at baseline, mid-line and project close) as well as semi-structured interviews (at project close) and non-participant observations of meetings (at various points). The first online survey, at baseline, will assess the capacity needs of the guideline panels and steering group committees in South Africa, Malawi and Nigeria, and the two subsequent online surveys will assess the potential difference project capacity development activities make on these groups across all four levels of the Kirkpatrick model, i.e. reaction, learning, behaviour and results. The capacity needs and progress of these groups will also be explored qualitatively through semi-structured interviews and observations of meetings.
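
As an illustration of how the planned data sources could map onto the four Kirkpatrick levels, the following is a minimal sketch in Python; the mapping paraphrases the protocol text above and is an assumption for illustration, not a finalized evaluation matrix.

```python
# Illustrative (assumed) mapping of Kirkpatrick levels to what is assessed and how.
KIRKPATRICK_PLAN = {
    "reaction": {
        "assesses": "participants' satisfaction with each training activity",
        "sources": ["online surveys (after training activities)"],
    },
    "learning": {
        "assesses": "change in EBHC knowledge and skills",
        "sources": ["online surveys (baseline, mid-line, project close)"],
    },
    "behaviour": {
        "assesses": "favourable change in behaviour",
        "sources": ["survey open-ended questions and vignettes",
                    "semi-structured interviews", "meeting observations"],
    },
    "results": {
        "assesses": "use of knowledge gained within the workplace",
        "sources": ["online surveys (mid-line, project close)",
                    "semi-structured interviews"],
    },
}

for level, plan in KIRKPATRICK_PLAN.items():
    print(f"{level}: {plan['assesses']} -- {', '.join(plan['sources'])}")
```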

Details of the project capacity development activities that will be implemented as part of work package 5 (‘capacity strengthening and sharing’) of the GELA project are outlined in Table  1 (above). All members of the guideline panels and steering group committees in South Africa, Malawi and Nigeria will be invited and encouraged to attend all project capacity development activities. ‘On the job’ capacity building will also take place during the various meetings convened with these groups, as they are supported to identify priority topics, to appraise and discuss the evidence used to inform the recommendations and to formulate the final recommendations.

Participants will comprise members of the guideline panels and steering group committees in South Africa, Malawi and Nigeria. Table 3 (above) provides details of the composition of the guideline panels and steering group committees.

Online surveys

Procedures and data collection tools.

At baseline (at approximately 6 months before engagement in any project training activities), at mid-line (month 18) and at project close (month 30), all members of the guideline panels and steering group committees in South Africa, Malawi and Nigeria will be invited, via email, to participate in a survey. In each of the three countries the guideline development group and steering group committee will include approximately 20 and 10 members, respectively; we will therefore aim to have 90 participants in total complete the survey. The email invitation to all three survey rounds will inform participants about the nature of the study and direct them to an online survey. The landing page of the survey will provide information about the purpose of the research project and what is being requested from participants, with a consent statement at the end which the participant will be required to agree to before being able to continue with the survey. Data will only be collected from participants who freely consent to participate in the study. The survey will be carried out using a secure online survey platform (such as Microsoft Forms) where all cookies and IP address collectors will be disabled to protect the confidentiality of the participants and to avoid tracking of participant activities online. Unique identifiers (the last six digits of their ID number) will be used to track participants' responses over time and link data from baseline to project close.
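
The following is a minimal sketch in Python (pandas), assuming hypothetical survey exports and illustrative column names, of how the unique identifier could be used to link responses across the three survey rounds.

```python
# Minimal sketch: linking hypothetical survey waves via the participant's unique identifier.
import pandas as pd

baseline = pd.DataFrame({"unique_id": ["123456", "234567", "345678"],
                         "ebhc_knowledge": [3, 5, 4]})
midline = pd.DataFrame({"unique_id": ["123456", "345678"],
                        "ebhc_knowledge": [5, 6]})
endline = pd.DataFrame({"unique_id": ["123456", "234567", "345678"],
                        "ebhc_knowledge": [6, 6, 7]})

# Inner merges keep only participants who responded at every time point,
# giving the paired data needed to assess change from baseline to project close.
linked = (baseline.rename(columns={"ebhc_knowledge": "knowledge_t0"})
          .merge(midline.rename(columns={"ebhc_knowledge": "knowledge_t1"}),
                 on="unique_id", how="inner")
          .merge(endline.rename(columns={"ebhc_knowledge": "knowledge_t2"}),
                 on="unique_id", how="inner"))

print(linked)  # one row per participant with responses at all three waves
print(len(linked), "participants can be followed from baseline to project close")
```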

The baseline survey will be a short (10–25 min) form that will ask participants about their capacity needs and knowledge/skills in EBHC and decision-making. The surveys will capture participants' demographic variables at baseline, mid-line and project close. They will assess participants' training needs at baseline, their satisfaction at the end of each training activity, and their knowledge and skills at baseline, mid-line and project close. Participants' behaviour will also be assessed using open-ended questions and vignettes. Together, the surveys will cover all four levels (i.e. reaction, learning, behaviour and results) of the Kirkpatrick model.

Data management and analysis

All data collected on the secure online survey platform will be coded, cleaned and entered into STATA. Data collected for the baseline survey will be analysed using descriptive statistics to determine the frequency of the various training needs. Qualitative data gathered through the open-ended questions will be analysed thematically, using manual coding or, where the dataset is large and the software is available, NVivo or a similar tool, to identify the recurring themes in the data about the key training needs of participants.

Data collected for the surveys conducted at mid-line and at project close will be analysed using descriptive statistics to determine whether there has been a change in learning, knowledge and behaviour over time, as well as the extent of the potential application of evidence-based practice. Data collected through the open-ended questions will be analysed thematically, outlining how project capacity development activities informed particular outcomes and results in the participants' workplaces. To determine change in skills (and trends over time, such as improvement or decay in confidence), the descriptive statistics will be supplemented by appropriate inferential statistics for repeated measures (paired data), such as McNemar tests or paired t-tests, reporting change as mean differences (e.g. in self-reported confidence) with 95% confidence intervals and/or as percentages and frequencies. Descriptive trends over time will also be presented graphically using line graphs or other visual aids as appropriate; however, these will be interpreted with caution, as the primary analysis is descriptive. Statistical significance will be set at a p value of 0.05.
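
As a concrete illustration of the repeated-measures tests named above, the following is a minimal sketch in Python (scipy and statsmodels), using hypothetical paired data; the variables and counts are illustrative only and do not represent project results.

```python
# Minimal sketch: paired t-test with 95% CI, and an exact McNemar test, on hypothetical data.
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical self-reported confidence scores (0-10) for the same ten participants
# at baseline and at project close, linked via their unique identifiers.
baseline = np.array([4, 5, 3, 6, 5, 4, 7, 5, 6, 4], dtype=float)
endline = np.array([6, 7, 5, 7, 6, 6, 8, 6, 7, 6], dtype=float)

# Paired t-test on the mean difference, reported with a 95% confidence interval.
diff = endline - baseline
t_stat, p_value = stats.ttest_rel(endline, baseline)
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))
print(f"Mean difference {diff.mean():.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}), p = {p_value:.3f}")

# McNemar test for a paired binary outcome (e.g. 'able to appraise a systematic
# review': yes/no at baseline vs project close); the 2x2 counts are illustrative.
table = [[10, 12],  # baseline no  -> endline no, baseline no  -> endline yes
         [2, 26]]   # baseline yes -> endline no, baseline yes -> endline yes
result = mcnemar(table, exact=True)
print(f"McNemar exact test p = {result.pvalue:.3f}")
```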

At project close (month 30), we will conduct semi-structured interviews with a sample of members from the guideline panels and steering group committees in South Africa, Malawi and Nigeria. Sampling will be purposive, with the aim of understanding the broad range of needs, experiences and perspectives and ensuring that the sample reflects a range of socio-demographic characteristics and stakeholder categories. We will begin with a sample size of 10–15 participants in each country; however, sampling will continue if we have not reached saturation of the data through the initial sample size [ 44 ].

Participants will be contacted, either by telephone or via email, and invited to participate in an interview. Interviews will be conducted face-to-face or electronically (e.g. using Microsoft Teams) at a date and time chosen by participants. Face-to-face interviews will take place at a location convenient to participants and conducive to a confidential exchange. The interviews will last between 45 and 60 min and will be conducted by researchers trained in qualitative research methodologies and interviewing techniques. The interviews will be guided by a semi-structured topic guide and will include questions informed by the four levels (i.e. reaction, learning, behaviour and results) of the Kirkpatrick model. Specifically, the questions will explore participants' views and experiences regarding their capacity development needs and expectations of the project; whether and why these expectations were met (or not); the project capacity development activities; what they learned (or not) from these activities; and what impact participants believe these activities have had (or may have) on their practices.

Verbal and written information about the study will be provided to all participants taking part in interviews. Written informed consent will be obtained from all participants before proceeding with the interview. With the permission of participants, all interviews will be digitally recorded.

Non-participant observations

We will conduct non-participant observations of guideline panel and steering group committee meetings. Observational methods can provide useful data on what people do, how they interact with each other and how they engage with particular artefacts in situ (rather than their accounts of these) [ 50 ]. The steering group committees in each country will meet approximately twice over the project duration (with the option for additional meetings): an initial meeting for project orientation (month 2/3) and again to identify priority topics and guideline gaps (month 6). Guideline panels in each country will meet approximately three times over the project duration (with the option for additional meetings): an initial meeting for project orientation and outcome prioritization (month 6/7), another potential meeting if necessary to finalize outcome prioritization and a final meeting to draft recommendations for the guideline (months 17–20). Meetings for both groups will be held virtually or in person, informed by preferences of the committee.

With the exception of the initial steering group committee meeting (month 2/3), at least one researcher will be present to observe guideline panel and steering group committee meetings. The observer will aim to identify any capacity-related needs, expectations, gaps, strengths, achievements and challenges, and the contexts in which these occur. He or she will also pay particular attention to group dynamics and the interactions between members and different stakeholder groups, and the potential impact of these on capacity-related issues. Observations will be informed by Lofland's [ 51 ] criteria for organizing analytical observations (acts, activities, meanings, participation, relationships and settings). The observer will take detailed observational notes. With the consent of the attendees, all meetings will also be digitally recorded. The recordings will be used to identify issues not captured during the real-time observation of verbal engagements and to deepen or clarify issues that were noted.

Data management and analysis: semi-structured interviews and observations

Interview and meeting recordings will be transcribed verbatim, and all personal identifying information will be removed from transcripts. The anonymized transcripts, together with observational notes, will be downloaded into NVivo, a software programme that aids the management and analysis of qualitative data. Analysis of the qualitative data will proceed in several rounds. First, as with all qualitative data analysis, an ongoing process of iterative analysis of the data will be conducted throughout the data collection period. Second, we will use a thematic analysis approach, following the phases described by Braun and Clarke [ 52 ], to identify key themes pertaining to participants' capacity development needs and expectations and whether, how and why project capacity development activities met (or did not meet) these needs and expectations. Finally, findings from the surveys (as described above) will be integrated with the findings from the thematic analysis using a 'narrative synthesis' approach, a technique recommended by the Cochrane Collaboration for synthesizing diverse forms of qualitative and quantitative evidence in mixed methods studies [ 53 , 54 ]. This approach will allow for both robust triangulation and a more comprehensive interpretation of the difference project capacity development activities may have made to the guideline panels and steering group committees.

3. Objective 3: explore guideline panelists’ experiences with reading and using evidence from reviews of qualitative research, including their preferences regarding how qualitative review findings are summarized and presented.

Objective 3 of the monitoring and evaluation stakeholder matrix work package explores how guideline panels view and experience evidence from the review(s) of qualitative research, including how it is summarized and presented. Here, we will employ a user testing approach, drawing on the methods and guidance of the SURE user test package 2022 developed by Cochrane Norway ( https://www.cochrane.no/our-user-test-package ), which has been used to test various evidence-related products [ 55 , 56 , 57 , 58 ]. User testing involves observing people as they engage with a particular product and listening to them 'think aloud'. The goal is to gain an understanding of users' views and experiences and the problems they face, and to obtain suggestions for how a product may be improved [ 55 , 56 , 57 , 58 ].

We will begin by identifying or preparing relevant reviews of qualitative research. We will then develop review summary formats and explore guideline panel members’ views and experiences of these formats. We will revise the formats in multiple iterative cycles.

Identifying or preparing relevant reviews of qualitative research

As part of WP2 of the project (‘evidence synthesis’), we will identify relevant review(s) of qualitative research, including reviews exploring how people affected by the interventions of interest value different outcomes, the acceptability and feasibility of the intervention and potential equity, gender and human rights implications of the intervention. These reviews need to be assessed as sufficiently recent and of a sufficient quality. They also need to have applied GRADE-CERQual assessments to the review findings. Where necessary, we will update existing reviews or prepare reviews ourselves.

Developing the review summaries

In WP3 of the project (‘decision-making’) the evidence from these reviews will be provided to guideline panels as part of the evidence-to-decision (‘EtD’) frameworks that will inform the recommendations they develop (see Table  1 for further details about project work packages 2 and 3). Our next step will therefore be to prepare summaries of the reviews in a format that can easily be included in the EtD frameworks.

Each summary needs to present review findings that are relevant to specific parts of the EtD framework (typically the ‘values’, ‘acceptability’, ‘feasibility’ and ‘equity’ components). It also needs to include information about our confidence in these findings. Finally, the summary needs to indicate where this evidence comes from and to allow guideline panels to move from the summary to more detailed information about the evidence.

Most of this information is found in the review's Summary of Qualitative Findings tables. However, these tables are usually too large for EtD frameworks and are not tailored to each framework component. We will, therefore, start by creating new summaries, using a format that we have previously used in EtD frameworks [ 59 , 60 , 61 ] but that we have not user tested. As opposed to the Summary of Qualitative Findings tables, where each finding, and our confidence in that finding, is presented individually in separate rows, this format involves pulling the findings and confidence assessments together in short, narrative paragraphs.

User testing the summary format

For our first set of user tests, we will observe guideline panels participating in the CPG panel simulation workshops. For our second round of user tests, we will observe how the guideline panels experience and interact with this qualitative evidence during the real guideline processes. Third, we will then test a potentially refined format with a selection of guideline panel members using a semi-structured interview guide. Finally, at the end of the project, we will conduct semi-structured interviews with a selection of guideline panel members to explore their broader views and experiences of interpreting and using evidence from reviews of qualitative studies in their deliberation processes. Figure  1 provides a visual depiction of this iterative process.

Fig. 1 Iterative approach for user testing evidence from reviews of qualitative research

We will draw on the adapted version of Peter Morville's original honeycomb model of user experience [ 62 ] as the underpinning theoretical framework for this objective [ 63 ] (Fig.  2 ). This adapted version extends and revises the meaning of the facets of user experience depicted in the original model. It includes eight facets: accessibility, findability, usefulness, usability, understandability, credibility, desirability and affiliation. Accessibility involves whether there are physical barriers to gaining access; findability is about whether the person can locate the product or the content that they are looking for; usefulness is about whether the product has practical value for the person; usability comprises how easy and satisfying the product is to use; understandability is about whether the person correctly comprehends both what kind of product it is and the content of the product (and includes both the user's subjective perception of their own understanding and an objective measure of actual/correct understanding); credibility comprises whether the product/content is experienced as trustworthy; desirability is about whether the product is something the person wants and has a positive emotional response to; affiliation involves whether the person identifies with the product, on a personal or a social level, or whether it is alienating and experienced as not being designed for 'someone like me'. The adapted model also adds to the original model a dimension of user experience over time, capturing the chronological and contingent nature of the different facets.

Participants will comprise members of the guideline panels in South Africa, Malawi and Nigeria. Table 3 (above) provides details of the composition of the guideline panels.

Non-participant observations: guideline panel simulation workshops and guideline panel meetings

We will conduct non-participant observations of the CPG panel simulation workshops and the subsequent guideline panel meetings for developing the recommendations. The CPG panel simulation workshops will run a simulation of a real guideline process and give guideline panels an opportunity to understand how the guideline process works before they participate in real panel meetings. The guideline panels in all three countries will be invited and encouraged to attend these workshops, which will form part of the project capacity development activities of WP5 (Table  1 ).

With the participants’ consent, both the simulation workshops and meetings will be digitally recorded and at least two observers will observe and take notes. The observations will focus on how guideline panel members refer to and interact with the summaries of qualitative evidence. Drawing on a user testing approach ( https://www.cochrane.no/our-user-test-package ), we will also look specifically for both problems and facilitators in the way the qualitative evidence is formatted, including ‘show-stoppers’ (the problem is so serious that it hindered participants from correct understanding or from moving forward), ‘big problems/frustrations’ (participants were confused or found something difficult but managed to figure it out or find a way around the problem eventually), ‘minor issues/cosmetic things’ (small irritations, frustrations and small problems that do not have serious consequences, as well as likes/dislikes), ‘positive/negative feedback’, ‘specific suggestions’, ‘preferences’ and any other ‘notable observations’, e.g. feelings of ‘uncertainty’.
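
To illustrate how observed issues could be logged and tallied against these user-testing categories, the following is a minimal sketch in Python; the category labels follow the list above, while the record fields and example notes are hypothetical.

```python
# Minimal sketch: coding hypothetical observation notes against the user-testing categories.
from collections import Counter
from dataclasses import dataclass

CATEGORIES = [
    "show-stopper", "big problem/frustration", "minor issue/cosmetic",
    "positive/negative feedback", "specific suggestion", "preference",
    "notable observation",
]

@dataclass
class ObservedIssue:
    summary_element: str   # which part of the qualitative evidence summary it relates to
    category: str          # one of CATEGORIES
    note: str

observations = [
    ObservedIssue("confidence statement", "big problem/frustration",
                  "Panel member unsure what 'moderate confidence' means"),
    ObservedIssue("narrative paragraph", "specific suggestion",
                  "Requested bolding of the population each finding refers to"),
]

tally = Counter(issue.category for issue in observations)
for category in CATEGORIES:
    print(f"{category}: {tally.get(category, 0)}")
```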

Structured user testing interviews

Based on the insights gained from the non-participant observations (above), we may make changes or refinements to our original summary format (Fig.  1 ). Once the guideline panel meetings have concluded (approximately by month 20), we will then conduct structured user testing interviews to test the potentially refined summary format. These interviews will be conducted with a sample of members from the guideline panels in South Africa, Malawi and Nigeria. Sampling will be purposive, with the aim of understanding the broad range of experiences and perspectives and ensuring the sample reflects a range of socio-demographic characteristics and stakeholder categories. As recommended ( https://www.cochrane.no/our-user-test-package ), we will begin with a sample size of six to eight participants in each country; however, sampling will continue until saturation is achieved [ 44 ].

Participants will be contacted, either telephonically or via email, and invited to participate in an interview. Interviews will be conducted face-to-face or electronically (e.g. using Skype or Teams) at a date and time chosen by participants. Face-to-face interviews will take place at a location convenient to participants and conducive to a confidential exchange. In line with the SURE user test package 2022 guidance, the interviews will last approximately 60 min ( https://www.cochrane.no/our-user-test-package ). They will be facilitated by a test leader, who will be accompanied by at least one observer who will take notes. Both the test leader and observer(s) will be trained in user testing interviewing methodology and techniques. Verbal and written information about the study will be provided to all participants taking part in interviews. Written informed consent will be obtained from all participants before proceeding with the interview. With the permission of participants, all interviews will be video recorded.

For these interviews, we will show panel members the latest version of the format, explore their immediate first impressions, and then their opinions about different elements of the summary. We may also show panel members different formats where we think this may be helpful. We will use a structured interview guide which draws heavily on other interview guides that have been developed to user test evidence-related products [ 55 , 56 , 57 , 58 ]. It will include questions related to the participant's background; their immediate first impressions of the summary format(s); an in-depth walk-through of the summary format(s), with prompts to think aloud about what they are looking at, thinking, doing and feeling; and suggestions for improving the way the summary is formatted and for improving the user testing itself. We may ask follow-up questions about specific issues we observed in the simulation workshops and guideline panel meetings and/or create scenarios that resemble issues we observed in those workshops/meetings. This will be decided based on the findings that emerge from the workshops/meetings. The guide will be finalized once the relevant qualitative evidence (from WP2) has been produced and we have gained insights from the workshops and meetings.

As with the non-participant observations of meetings and workshops, throughout the interview, the observers will make notes about the participant’s experience as heard, observed and understood. Drawing on a user testing approach, they will look specifically for both problems and facilitators, specific suggestions, preferences and any other notable observations (as described above under ‘non-participant observations’).

At project close (month 30), we will also conduct semi-structured interviews with a sample of members from the guideline panels in South Africa, Malawi and Nigeria. These will be the same interviews with guideline panel members as described in objective 2. In addition to exploring participants’ capacity development needs, expectations and achievements, the semi-structured topic guide will also explore their views and experiences of (and specific capacity in) interpreting and using evidence from reviews of qualitative studies in guideline processes. More specifically, questions will investigate participants’ familiarity/experience with qualitative evidence; their perceptions of different types of evidence, what constitutes qualitative evidence and the role of qualitative evidence in guideline processes; and their experiences of using the qualitative evidence in their deliberations as part of the project, including what influenced its use and whether they found it useful. Details pertaining to sampling, data collection procedures and collection tools are described in objective 2.

All interview and meeting recordings will be transcribed verbatim, and all personal identifying information will be removed from transcripts. The anonymized transcripts, together with observational notes (from the workshops, meetings and interviews), will be downloaded into a software programme that aids the management and analysis of qualitative data. Analysis of the data will be guided by the user testing analysis methods described in the SURE user test package 2022 ( https://www.cochrane.no/our-user-test-package ). The analysis will proceed in several iterative rounds to develop and revise the summary format and to inform the focus of subsequent data collection. After each user test, we will review our notes, first separately and then together. In line with the SURE user test package 2022 guidance, we will look primarily for barriers and facilitators related to correct interpretation of the summary's contents, ease of use and favourable reception, drawing on the facets of the revised honeycomb model of user experience (Fig.  2 ). We will trace these findings back to the specific elements or characteristics of the summaries that appeared to facilitate understanding and use or to cause problems. Before the next set of user tests, we will discuss possible changes that could address any identified barriers and make changes to the summary format.

Fig. 2 Adapted version of Peter Morville's honeycomb model of user experience

4. Objective 4: evaluate guideline panelists’, steering group committees’ and project team members’ overall views and experiences of the project, including what works or not, to influence evidence-informed decision-making and guideline adaptation processes.

This objective explores overall views and experiences of the project, with a focus on guideline panelists, steering group committees and project team members. Specifically, it seeks to gain an understanding of these three stakeholder groups’ more general views and experiences of the project activities they were involved with and whether, why and how these activities may influence (or not) evidence-informed decision-making and guideline adaptation processes. This will be achieved through semi-structured interviews.

Participants will comprise members of the guideline panels and steering group committees in South Africa, Malawi and Nigeria, as well as members of the project team (as described in Table  3 above).

At project close (month 30), we will conduct semi-structured interviews with a sample of members from the guideline panels and steering group committees in South Africa, Malawi and Nigeria. These will be the same interviews and participants as described in objective 2. In addition to exploring issues around capacity development and qualitative evidence, the interviews will also investigate participants’ views and experiences of the various project activities they were involved with, and whether, why and how these activities may influence (or not) evidence-informed decision-making and guideline adaptation processes. Details pertaining to sampling, data collection procedures and collection tools are described in objective 2.

At project close (month 30), we will also conduct semi-structured interviews with members of the project team (see Table  3 for details of project team composition). We will begin by interviewing all project management team members, WP leads and KT champions. Additional participants will be determined iteratively (depending on what emerges from initial interviews) and purposively, with the aim of understanding the broad range of experiences and perspectives and ensuring the sample reflects the various groups which make up the project team. Interviews will be conducted face-to-face or electronically (e.g. using Skype or Teams) at a date and time chosen by the interviewee. The interviews will last between 45 and 60 min and will be guided by a semi-structured topic guide. The questions will explore participants’ views and experiences of the respective work packages in which they were involved, including what the primary goals of the work package were; if, why and how these goals were met; and what worked and what did not work and why.

The same qualitative data analysis procedures and methods will be used as described in objective 2. For this objective, the thematic analysis will identify key themes pertaining to views and experiences of project activities, including what worked (or not) and why; whether, why and how the project may (or may not) influence evidence-informed decision-making and guideline development, adaptation and dissemination processes in South Africa, Malawi and Nigeria; and potential barriers and facilitators to the sustainability of this influence.

Evidence-based guideline development is a multi-stakeholder, multi-perspective, complex set of tasks. There is limited, if any, research that has followed these steps from start to end from the perspectives of policymakers or researchers. The GELA project protocol sets out to monitor and evaluate key steps in the process, using in-depth qualitative methods alongside appropriate surveys, not only to inform the project as it progresses but also to understand the overall impact of all steps on the development of transparent and contextually rich guideline recommendations. Following WHO's guideline steps, the tasks range from scoping stakeholder-informed priority topics to conducting relevant data gathering and evidence synthesis, followed by guideline panel meetings to reach consensus decisions and, finally, producing recommendations that can be useful to end-users and improve health and care outcomes. GELA is a 3-year project undertaking these tasks in the context of newborn and child health priorities. We are doing this in collaboration with national ministries of health, academics, non-governmental partners and civil society groups in Malawi, Nigeria and South Africa. Overall, we aim to build capacity across all collaborators for evidence-informed guideline development, while producing fit-for-context guideline recommendations, in accessible formats, that benefit children, caregivers and healthcare providers.

As such, this is a practical research project, in that the products should directly impact care decisions at the national level, with the added benefit of enabling us to learn about what works or does not work for collaborative guideline development in country. We will also be applying emergent guideline adaptation methods to explore how duplication of expensive guideline development efforts can be reduced in our lower-resource settings. Our project addresses newborn and child health, keeping this most vulnerable population in focus, in the hope that producing sound evidence-based recommendations has the potential to impact care.

Through some of our formative work, we have completed a landscape analysis identifying and describing all available newborn and child health guidelines in each of the partner countries. The findings were similar in all countries: (1) there is no easy access to guidelines for end-users, so locating a guideline requires effort and screening through multiple sources; (2) considering national priority conditions in this age group, there were often gaps in the current guidelines available for managing children; and (3) when we appraised the guidelines using the global standard AGREE II tool, we found that the reporting of guideline methods was poor, leaving it uncertain whether the recommendations were credible or whether any influences or interests had determined the direction of a recommendation. Finally, we expected to find many adapted guidelines, based on WHO or UNICEF or similar guidance available globally; however, very few of the identified guidelines stated clearly whether they had been adapted from other sources and, if so, which recommendations were adopted and which adapted.

Given global progress in methods for guideline development, the continued poor reporting of guideline methods at the country level speaks to a breakdown in skills-sharing globally: for example, WHO produces guidelines that are recognized as rigorous and that follow good practice and reporting, but the same standards are not supported in country. Overall, GELA aims to address these key gaps in national approaches to guideline adaptation, but we need to recognize that this will be a long-term process and that we need to learn from each other about what works and what may not serve us. This protocol therefore outlines our approach for monitoring several aspects of the project in our efforts to move closer to trustworthy and credible guidelines that countries like ours can use and trust.

Availability of data and materials

Not applicable.

Abbreviations

PRDs: Poverty-related diseases

SSA: Sub-Saharan Africa

CPG: Clinical practice guidelines

EIDM: Evidence-informed decision-making

EBHC: Evidence-based healthcare

GELA: Global Evidence, Local Adaptation

KT: Knowledge translation

Levels and trends in child mortality: report 2020. Estimates developed by UNICEF. ISBN: 978-92-806-5147-8. https://www.who.int/publications/m/item/levels-and-trends-in-child-mortality-report-2020 . United Nations Children’s Fund. Accessed Aug 2024.

Liu L, Oza S, Hogan D, et al. Global, regional, and national causes of under-5 mortality in 2000–15: an updated systematic analysis with implications for the sustainable development goals. Lancet. 2016;388(10063):3027–35.

Under-five mortality. 2021 https://data.unicef.org/topic/child-survival/under-five-mortality/ . United Nations International Children’s Fund. Accessed Aug 2024.

Stewart R, El-Harakeh A, Cherian SA. Evidence synthesis communities in low-income and middle-income countries and the COVID-19 response. Lancet. 2020;396(10262):1539–41.

Mijumbi RM, Oxman AD, Panisset U, Sewankambo NK. Feasibility of a rapid response mechanism to meet policymakers’ urgent needs for research evidence about health systems in a low income country: a case study. Implementation Sci. 2014;9:114.

Stewart R, Dayal H, Langer L, van Rooyen C. The evidence ecosystem in South Africa: growing resilience and institutionalisation of evidence use. Palgrave Commun. 2019;5(1):90.

Uneke CJ, Ezeoha AE, Ndukwe CD, Oyibo PG, Onwe F. Promotion of evidence-informed health policymaking in Nigeria: bridging the gap between researchers and policymakers. Glob Public Health. 2012;7(7):750–65.

Young T, Garner P, Clarke M, Volmink J. Series: clinical epidemiology in South Africa. Paper 1: evidence-based health care and policy in Africa: past, present, and future. J Clin Epidemiol. 2017;83:24–30.

Vandvik PO, Brandt L. Future of evidence ecosystem series: evidence ecosystems and learning health systems: why bother? J Clin Epidemiol. 2020;123:166–70.

Graham R, Mancher M, Wolman D, Greenfield S, Steinberg E. Clinical practice guidelines we can trust. Washington, DC: The National Academy Press; 2011.

Kredo T, Bernhardsson S, Machingaidze S, et al. Guide to clinical practice guidelines: the current state of play. Int J Qual Health Care. 2016;28(1):122–8.

McCaul M, Tovey D, Young T, et al. Resources supporting trustworthy, rapid and equitable evidence synthesis and guideline development: results from the COVID-19 evidence network to support decision-making (COVID-END). J Clin Epidemiol. 2022;151:88–95.

Tricco AC, Garritty CM, Boulos L, et al. Rapid review methods more challenging during COVID-19: commentary with a focus on 8 knowledge synthesis steps. J Clin Epidemiol. 2020;126:177–83.

World Health Organization. WHO handbook for guideline development. Geneva: World Health Organization; 2011.

Schünemann HJ, Wiercioch W, Etxeandia I, et al. Guidelines 2.0: systematic development of a comprehensive checklist for a successful guideline enterprise. CMAJ. 2014;186(3):E123–42.

Kredo T, Gerritsen A, van Heerden J, Conway S, Siegfried N. Clinical practice guidelines within the Southern African development community: a descriptive study of the quality of guideline development and concordance with best evidence for five priority diseases. Health Res Policy Syst. 2012;10:1.

Malherbe P, Smit P, Sharma K, McCaul M. Guidance we can trust? The status and quality of prehospital clinical guidance in sub-Saharan Africa: a scoping review. African J Emerge Med. 2021;11(1):79–86.

McCaul M, Clarke M, Bruijns SR, et al. Global emergency care clinical practice guidelines: a landscape analysis. African J Emerge Med. 2018;8(4):158–63.

Dizon JM, Grimmer K, Louw Q, Kredo T, Young T, Machingaidze S. South African guidelines excellence (SAGE): adopt, adapt, or contextualise? South African Med J. 2016;106(12):1177–8.

Schünemann HJ, Wiercioch W, Brozek J, et al. GRADE evidence to Decision (EtD) frameworks for adoption, adaptation, and de novo development of trustworthy recommendations: GRADE-ADOLOPMENT. J Clin Epidemiol. 2017;81:101–10.

ADAPTE Collaboration. 2009 The ADAPTE process: Resource toolkit for guideline adaptation, version 2. http://www.g-i-n.net/ . Accessed Aug 2024.

Barreix M, Lawrie TA, Kidula N, et al. Development of the WHO antenatal care recommendations adaptation toolkit: a standardised approach for countries. Health Res Policy Syst. 2020;18(1):70.

Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514.

Opiyo N, Shepperd S, Musila N, English M, Fretheim A. The "child health evidence week" and GRADE grid may aid transparency in the deliberative process of guideline development. J Clin Epidemiol. 2012;65(9):962–9.

Wang Z, Norris SL, Bero L. The advantages and limitations of guideline adaptation frameworks. Implement Sci. 2018;13(1):72.

Langlois EV, Tunçalp Ö, Norris SL, Askew I, Ghaffar A. Qualitative evidence to improve guidelines and health decision-making. Bull World Health Organ. 2018. https://doi.org/10.2471/BLT.17.206540 .

Carmona C, Baxter S, Carroll C. Systematic review of the methodological literature for integrating qualitative evidence syntheses into health guideline development. Res Synth Methods. 2021;12(4):491–505.

Guindo LA, Wagner M, Baltussen R, et al. From efficacy to equity: Literature review of decision criteria for resource allocation and healthcare decisionmaking. Cost Eff Res Alloc. 2012;10(1):9.

Verkerk K, Van Veenendaal H, Severens JL, Hendriks EJ, Burgers JS. Considered judgement in evidence-based guideline development. Int J Qual Health Care. 2006;18(5):365–9.

Alonso-Coello P, Schunemann HJ, Moberg J, et al. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ. 2016. https://doi.org/10.1136/bmj.i2016 .

Glenton C, Lewin S, Gülmezoglu AM. Expanding the evidence base for global recommendations on health systems: strengths and challenges of the OptimizeMNH guidance process. Implement Sci. 2016;11:98.

Lewin S, Glenton C. Are we entering a new era for qualitative research? using qualitative evidence to support guidance and guideline development by the World Health Organization. Int J Equity Health. 2018;17(1):126.

Noyes J, Booth A, Cargo M, Flemming K, Harden A, Harris J, Garside R, Hannes K, Pantoja T, Thomas J. Chapter 21: Qualitative evidence. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook . Accessed Aug 2024.

Glenton C, Lewin S, Norris S. Using evidence from qualitative research to develop WHO guidelines (Chapter 15) World Health Organization Handbook for Guideline Development. 2nd ed. WHO: Geneva; 2016.

World Health Organisation. WHO recommendations: intrapartum care for a positive childbirth experience. Geneva: WHO; 2018.

Rosenbaum SE, Glenton C, Oxman AD. Summary-of-findings tables in Cochrane reviews improved understanding and rapid retrieval of key information. J Clin Epidemiol. 2010;63(6):620–6.

Lewin S, Bohren M, Rashidian A, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings-paper 2: how to make an overall CERQual assessment of confidence and create a summary of qualitative findings table. Implement Sci. 2018;13(Suppl 1):10.

Treweek S, Oxman AD, Alderson P, et al. Developing and evaluating communication strategies to support informed decisions and practice based on evidence (DECIDE): protocol and preliminary results. Implement Sci. 2013;8:6.

Brandt L, Vandvik PO, Alonso-Coello P, et al. Multilayered and digitally structured presentation formats of trustworthy recommendations: a combined survey and randomised trial. BMJ Open. 2017;7(2): e011569.

Kredo T, Cooper S, Abrams AL, et al. ’Building on shaky ground’-challenges to and solutions for primary care guideline implementation in four provinces in South Africa: a qualitative study. BMJ Open. 2020;10(5): e031468.

Vandvik PO, Brandt L, Alonso-Coello P, et al. Creating clinical practice guidelines we can trust, use, and share: a new era is imminent. Chest. 2013;144(2):381–9.

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.

Jessani NS, Rohwer A, Schmidt B-M, Delobelle P. Integrated knowledge translation to advance noncommunicable disease policy and practice in South Africa: application of the exploration, preparation, implementation, and sustainment (EPIS) framework. Health Res Policy Syst. 2021;19(1):82.

Francis JJ, Johnston M, Robertson C, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Roos M, Kadmon M, Kirschfink M, et al. Developing medical educators–a mixed method evaluation of a teaching education program. Med Educ Online. 2014;19:23868.

Bijani M, Rostami K, Momennasab M, Yektatalab S. Evaluating the effectiveness of a continuing education program for prevention of occupational exposure to needle stick injuries in nursing staff based on Kirkpatrick’s model. J Natl Med Assoc. 2018;110(5):459–63.

Carlfjord S, Roback K, Nilsen P. Five years’ experience of an annual course on implementation science: an evaluation among course participants. Implement Sci. 2017;12(1):101.

Young T, Rohwer A, Volmink J, Clarke M. What are the effects of teaching evidence-based health care (EBHC)? Overview of systematic reviews. PLoS ONE. 2014;9(1): e86706.

Young T, Dizon J, Kredo T, et al. Enhancing capacity for clinical practice guidelines in South Africa. Pan Afr Med J. 2020;36:18.

Green G, Thorogood N. Qualitative methods for health research. London: Sage; 2004.

Lofland J. Analyzing social settings. Belmont, CA: Wadsworth Publishing Company; 1971.

Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

Popay J, Roberts H, Sowden A. Guidance on the conduct of narrative synthesis in systematic reviews: a product from the ESRC Methods Programme. Lancaster: Institute for Health Research; 2006.

Pope C, Mays N, Popay J. How can we synthesize qualitative and quantitative evidence for healthcare policy-makers and managers? Healthc Manage Forum. 2006;19(1):27–31.

Glenton C, Santesso N, Rosenbaum S, et al. Presenting the results of Cochrane systematic reviews to a consumer audience: a qualitative study. Med Dec Making. 2010;30(5):566–77.

Rosenbaum SE, Glenton C, Cracknell J. User experiences of evidence-based online resources for health professionals: user testing of The Cochrane library. BMC Med Inform Decis Mak. 2008;8:34.

Rosenbaum SE, Glenton C, Nylund HK, Oxman AD. User testing and stakeholder feedback contributed to the development of understandable and useful Summary of Findings tables for Cochrane reviews. J Clin Epidemiol. 2010;63(6):607–19.

Rosenbaum SE, Glenton C, Wiysonge CS, et al. Evidence summaries tailored to health policy-makers in low- and middle-income countries. Bull World Health Organ. 2011;89(1):54–61.

Downe S, Finlayson KW, Lawrie TA, et al. Qualitative evidence synthesis (QES) for guidelines: Paper 1 - using qualitative evidence synthesis to inform guideline scope and develop qualitative findings statements. Health Res Policy Syst. 2019;17(1):76.

Lewin S, Glenton C, Lawrie TA, et al. Qualitative evidence synthesis (QES) for guidelines: Paper 2 - using qualitative evidence synthesis findings to inform evidence-to-decision frameworks and recommendations. Health Res Policy Syst. 2019;17(1):75.

Glenton C, Lewin S, Lawrie TA et al. Qualitative Evidence Synthesis (QES) for Guidelines: Paper 3 – Using qualitative evidence syntheses to develop implementation considerations and inform implementation processes. Health Res Policy Sys. 2019;17:74. https://doi.org/10.1186/s12961-019-0450-1

Morville P. 2004 User Experience Design [honeycomb model]. http://www.semanticstudios.com/publications/semantics/000029.php .

Rosenbaum S. 2010 Improving the user experience of evidence: A design approach to evidence-informed health care. PhD thesis. Oslo, Norway: The Oslo School of Architecture and Design.

Acknowledgements

We gratefully acknowledge the representatives from the National Ministries of Health in Nigeria, Malawi and South Africa for their support and partnership. We would also like to thank the appointed Steering Committees, who have been providing input to the research project and guiding the prioritization of topics. We would also like to thank Joy Oliver and Michelle Galloway for their contribution to and support of the project.

Funding

The GELA project is funded by the EDCTP2 programme supported by the European Union (grant number RIA2020S-3303-GELA). The funding will cover all the activities for this Monitoring and Evaluation work package, including costs for personnel and publication of papers.

Author information

Authors and affiliations.

Health Systems Research Unit, South African Medical Research Council, Cape Town, South Africa

Tamara Kredo, Denny Mabetha, Bey-Marrié Schmidt & Simon Lewin

Division of Epidemiology and Biostatistics, Department of Global Health, Stellenbosch University, Cape Town, South Africa

Tamara Kredo, Anke Rohwer, Michael McCaul, Idriss Ibrahim Kallon, Taryn Young & Sara Cooper

Division of Clinical Pharmacology, Department of Medicine, Stellenbosch University, Cape Town, South Africa

Tamara Kredo

School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa

Tamara Kredo & Sara Cooper

University of Calabar Teaching Hospital, Calabar, Nigeria

Emmanuel Effa

Kamuzu University of Health Science, Lilongwe, Malawi

Nyanyiwe Mbeye

School of Public Health, University of the Western Cape, Cape Town, South Africa

Bey-Marrié Schmidt

Western Norway University of Applied Sciences, Bergen, Norway

Susan Munabi-Babigumira & Claire Glenton

Norwegian University of Science and Technology, Trondheim, Norway

Simon Lewin

MAGIC Evidence Ecosystem Foundation, Oslo, Norway

Per Olav Vandvik

Department of Health Economics and Health Management, Institute for Health and Society, University of Oslo, Oslo, Norway

Cochrane South Africa, South African Medical Research Council, Cape Town, South Africa

Sara Cooper

Contributions

T.K., S.C., T.Y., S.L., C.G. and P.O.V. conceptualized the protocol idea, and S.C. drafted the protocol with input from T.K., D.M., A.R., B.M., M.M., I.I., C.G., T.Y., S.L. and P.O.V. All authors approved the final version for submission for publication.

Corresponding author

Correspondence to Tamara Kredo .

Ethics declarations

Ethics approval and consent to participate.

Ethics approval has been obtained in each partner country (South Africa, Malawi and Nigeria) from the respective Health Research Ethics Committees or Institutional Review Boards. Information about the project will be provided to, and consent obtained from, all participants completing the online surveys and interviews and all participants taking part in the meetings. The consent forms will make explicit the voluntary nature of participation and that there will be no negative consequences if they decide not to participate, and, in the case of the interviews and meeting observations, will ask explicitly for permission for the interview or meeting to be recorded.

The online surveys will ask participants to provide the last six digits of their ID number as a unique identifier to track their capacity development needs and progress throughout the project. To help protect their confidentiality, the information they provide will be kept private and deidentified, and no names will be used. In addition, all cookies and IP address collectors will be disabled to ensure confidentiality. All interview and meeting recordings on the digital recorders will be destroyed following safe storage and transcription, and any identifying information will be redacted from all transcripts. All study data, including recordings, will be stored electronically using password-controlled software accessible only to key project members and project analysts. Reports of study findings will not identify individual participants.

We do not anticipate any specific harms or serious risks to participants. However, there is a risk of breaches of confidentiality for participants who take part in guideline panel and steering group committee project meetings. At the start of all meetings, participants will be introduced to each other. The names of the members of these groups will not be anonymous, as they will play an ongoing role in the GELA project. At the start of each meeting, we will discuss the importance of everyone maintaining confidentiality. As part of guideline development processes, all guideline members will need to declare conflicts of interest and sign a confidentiality agreement. We will explain, however, that while the researchers undertake to maintain confidentiality, we cannot guarantee that other meeting participants will, and there is thus a risk of breaches of confidentiality. We will ensure participants are aware of this risk. Participants may also feel anxiety or distress when expressing negative views about project activities. Where there is this potential, and where participants identify concerns, we will reassure participants of the steps that will be taken to ensure confidentiality.

Consent for publication

Competing interests.

All authors declared no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Kredo, T., Effa, E., Mbeye, N. et al. Evaluating the impact of the global evidence, local adaptation (GELA) project for enhancing evidence-informed guideline recommendations for newborn and young child health in three African countries: a mixed-methods protocol. Health Res Policy Sys 22 , 114 (2024). https://doi.org/10.1186/s12961-024-01189-5

Received : 02 June 2023

Accepted : 20 July 2024

Published : 19 August 2024

DOI : https://doi.org/10.1186/s12961-024-01189-5

Keywords

  • Poverty-related diseases
  • Newborn and child health
  • Sub-Saharan Africa (SSA)
  • Evidence-informed decision-making
  • Capacity development
  • Monitoring and evaluation
  • Research impact


Reliability and Quality of Service Evaluation Methods for Rural Highways: A Guide

Rural highways account for a significant portion of the National Highway System and serve many vital mobility purposes. The Highway Capacity Manual , the standard reference for traffic analysis methodologies, contains analysis methodologies for all of the individual segments or intersections that may constitute a rural highway; however, it does not include a methodology or guidelines for connecting the individual roadway segments into a connected, cohesive, facility-level analysis.

NCHRP Research Report 1102: Reliability and Quality of Service Evaluation Methods for Rural Highways: A Guide, from TRB's National Cooperative Highway Research Program, presents a guide for traffic analysis of rural highways that connects the individual highway segments into a cohesive, facility-level analysis.

Supplemental to the report is NCHRP Web-Only Document 392: Developing a Guide for Rural Highways: Reliability and Quality of Service Evaluation Methods .


Suggested Citation

National Academies of Sciences, Engineering, and Medicine. 2024. Reliability and Quality of Service Evaluation Methods for Rural Highways: A Guide. Washington, DC: The National Academies Press. https://doi.org/10.17226/27895.



NIHR takes on management of Better Methods for Better Research (BMBR) Programme


Published: 21 August 2024

The management of the Better Methods for Better Research (BMBR) Programme will be moving from UKRI Medical Research Council (MRC) to NIHR this summer. NIHR will also increase its financial contribution, enhancing opportunities for high quality research.

The BMBR Programme is a collaboration between MRC and NIHR. It aims to ensure that optimal research methods are used to advance biomedical, health and care research, policy and delivery.

BMBR has funded a variety of important methods research. This includes methods for adaptive clinical trials, which allowed multiple treatments for Covid-19 to be rapidly tested in the RECOVERY trial. BMBR has also improved how the effects of treatments can be measured using real-world evidence.

The BMBR Programme will remain a close collaboration between MRC and NIHR, and the remit and approach will stay the same. The regular schedule for advertising funding opportunities will continue. Both MRC and NIHR will also continue to fund methods research through other schemes.

Researchers will now need to apply for BMBR funding through NIHR’s application system. While the process will be very similar, there may be some differences. In particular, NIHR has requirements for patient and public involvement and research inclusion. NIHR also remains keen to encourage applications led by early-career researchers.

In addition to taking over management, NIHR is providing extra funding that increases the overall budget of the BMBR Programme. In 2025, there will also be a complete review of the programme to reflect on its achievements to date. This will help to optimise future effectiveness.

Professor Danny McAuley, Scientific Director for NIHR Programmes, said: “This is an exciting next step for the BMBR Programme. It is a clear indication of NIHR’s commitment to funding research that underpins life-changing impact. We look forward to continuing to work with MRC to help researchers improve and enhance the methods they use.”

MRC Executive Chair, Professor Patrick Chinnery, said: “BMBR has supported excellence in methodology research since its inception in 2008 and MRC welcomes the matched increase in NIHR contribution to support the programme’s continuing impact. MRC remains committed to supporting methods research via BMBR and our other funding opportunities, and we are looking forward to working with NIHR in this new phase of the BMBR Programme.”

The next BMBR funding opportunity will open in September 2024. Full application guidance will be available.

If you have any queries before then, you can contact the programme at [email protected].



Conduct User Testing: Desktop Usability Video

The example below shows two remote usability test scripts, one for the Zomato website and one for the Duolingo app, each followed by the participant's post-test feedback.

You’re on a business trip in Oakland, CA. You've been working late in downtown and now you're looking for a place nearby to grab a late dinner. You decided to check Zomato to try and find somewhere to eat. (Don't begin searching yet).

  • Look around on the home page. Does anything seem interesting to you?
  • How would you go about finding a place to eat near you in Downtown Oakland? You want something kind of quick, open late, not too expensive, and with a good rating.
  • What do the reviews say about the restaurant you've chosen?
  • What was the most important factor for you in choosing this spot?
  • You're currently close to the 19th St Bart station, and it's 9PM. How would you get to this restaurant? Do you think you'll be able to make it before closing time?
  • Your friend recommended you to check out a place called Belly while you're in Oakland. Try to find where it is, when it's open, and what kind of food options they have.
  • Now go to any restaurant's page and try to leave a review (don't actually submit it).

What was the worst thing about your experience?

It was hard to find the bart station. The collections not being able to be sorted was a bit of a bummer

What other aspects of the experience could be improved?

Feedback from the owners would be nice

What did you like about the website?

The flow was good, lots of bright photos

What other comments do you have for the owner of the website?

I like that you can sort by what you are looking for and i like the idea of collections

You're going on a vacation to Italy next month, and you want to learn some basic Italian for getting around while there. You decided to try Duolingo.

  • Please begin by downloading the app to your device.
  • Choose Italian and get started with the first lesson (stop once you reach the first question).
  • Now go all the way through the rest of the first lesson, describing your thoughts as you go.
  • Get your profile set up, then view your account page. What information and options are there? Do you feel that these are useful? Why or why not?
  • After a week in Italy, you're going to spend a few days in Austria. How would you take German lessons on Duolingo?
  • What other languages does the app offer? Do any of them interest you?

Participant feedback:

I felt like there could have been a little more of an instructional component to the lesson.

It would be cool if there were some feature that could allow two learners studying the same language to take lessons together. I imagine that their screens would be synced and they could go through lessons together and chat along the way.

Overall, the app was very intuitive to use and visually appealing. I also liked the option to connect with others.

Overall, the app seemed very helpful and easy to use. I feel like it makes learning a new language fun and almost like a game. It would be nice, however, if it contained more of an instructional portion.
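Raw task walkthroughs and open-ended comments like the ones above are usually paired with simple quantitative summaries. The sketch below is a hypothetical illustration rather than any testing platform's actual data model: it assumes a handful of logged sessions and computes, per task, a completion rate and the median time-on-task for successful attempts.

```python
# Hypothetical session records from a task-based usability test; the field
# names and values are invented for illustration.
from statistics import median

sessions = [
    {"participant": "P1", "task": "find_restaurant", "completed": True,  "seconds": 142},
    {"participant": "P2", "task": "find_restaurant", "completed": True,  "seconds": 98},
    {"participant": "P3", "task": "find_restaurant", "completed": False, "seconds": 305},
    {"participant": "P1", "task": "leave_review",    "completed": True,  "seconds": 77},
    {"participant": "P2", "task": "leave_review",    "completed": False, "seconds": 240},
    {"participant": "P3", "task": "leave_review",    "completed": True,  "seconds": 85},
]

for task in sorted({s["task"] for s in sessions}):
    runs = [s for s in sessions if s["task"] == task]
    completion_rate = sum(s["completed"] for s in runs) / len(runs)
    # Time-on-task is typically reported for successful attempts only.
    success_times = [s["seconds"] for s in runs if s["completed"]]
    print(f"{task}: completion {completion_rate:.0%}, "
          f"median time {median(success_times):.0f}s (n={len(runs)})")
```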


What is a UX Audit? Definition, Methods, Example and Process


What is a UX Audit?

A UX (user experience) audit is defined as a comprehensive evaluation process where a website, application, or product is analyzed to understand its usability, accessibility, and overall user experience.

The primary aim is to identify issues that hinder users from achieving their goals efficiently and effectively. A UX audit typically involves reviewing the design, structure, and functionality of the interface, alongside gathering and analyzing user feedback and behavior data. This process helps in pinpointing areas that require improvement and provides actionable insights to enhance user satisfaction and engagement.

During a UX audit, various methods and tools are employed to scrutinize the user experience and interface. These can include usability testing, UX and UI testing, user interviews, and analytics reviews, among others.

The benefits of conducting a UX audit are numerous. Firstly, it helps in identifying and rectifying usability issues, leading to a more intuitive and user-friendly product.

Additionally, a UX audit can help in uncovering conversion bottlenecks, thereby improving key performance indicators (KPIs) such as conversion rates, engagement levels, and user retention. By addressing these issues, businesses can optimize their product to better meet user needs and expectations, ultimately driving business growth and success.

Key Components of UX Audit

The key components of a UX audit encompass several aspects that collectively provide a holistic evaluation of the user experience. These components include heuristic evaluation, usability testing, user research, and analytics review.

  • Heuristic Evaluation: This component involves experts assessing the product against established usability principles or heuristics. They systematically examine the interface to identify usability issues, such as inconsistencies, confusing navigation, or visual clutter. The most common heuristics used are Nielsen’s 10 usability heuristics, which cover aspects like error prevention, recognition rather than recall, and aesthetic and minimalist design. Heuristic evaluation helps in pinpointing obvious usability flaws that can be quickly addressed.
  • Usability Testing: Usability testing is a crucial component where real users interact with the product in a controlled environment. This method involves observing users as they complete specific tasks, noting any difficulties or frustrations they encounter. Usability testing provides direct insight into how actual users experience the product, revealing pain points that might not be evident through expert evaluation alone.
  • User Research: User research involves gathering qualitative and quantitative data directly from users through various methods such as surveys, interviews, and focus groups. This component aims to understand users’ needs, behaviors, and preferences. User personas and user journey maps are often created based on this research to represent different user types and their interactions with the product. Understanding the target audience deeply ensures that design decisions are aligned with user expectations and requirements.
  • Analytics Review: The analytics review component focuses on analyzing data collected from web analytics tools to understand user behavior patterns. Metrics such as bounce rates, session duration, click paths, and conversion rates are examined to identify areas where users might be dropping off or experiencing friction. This quantitative data complements the qualitative insights from usability testing and user research, providing a comprehensive view of the user experience. The analytics review helps in identifying trends and validating the findings from other components of the audit. A toy calculation of a few of these metrics appears in the sketch after this list.
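As a concrete companion to the analytics review component above, here is a toy calculation of bounce rate, average session duration, and conversion rate from a made-up event log. In a real audit these numbers would come straight from an analytics platform; the event format, the "signup" conversion event, and the data are all assumptions for illustration.

```python
# Minimal sketch: computing a few analytics-review metrics from a
# hypothetical event log of (session_id, seconds_into_session, event).
from collections import defaultdict
from statistics import mean

events = [
    ("s1", 0, "view:home"), ("s1", 40, "view:pricing"), ("s1", 95, "signup"),
    ("s2", 0, "view:home"),                               # bounced session
    ("s3", 0, "view:blog"), ("s3", 130, "view:home"),
    ("s4", 0, "view:pricing"), ("s4", 60, "signup"),
]

by_session = defaultdict(list)
for sid, t, event in events:
    by_session[sid].append((t, event))

n_sessions = len(by_session)
bounces = sum(1 for evs in by_session.values() if len(evs) == 1)
durations = [max(t for t, _ in evs) for evs in by_session.values()]
conversions = sum(1 for evs in by_session.values()
                  if any(e == "signup" for _, e in evs))

print(f"Bounce rate:        {bounces / n_sessions:.0%}")
print(f"Avg session length: {mean(durations):.0f}s")
print(f"Conversion rate:    {conversions / n_sessions:.0%}")
```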

UX Audit Process: Key Steps

The UX audit process involves several key steps that ensure a thorough and systematic evaluation of the user experience.

  • Planning and Preparation: The first step in the UX audit process involves defining the scope and objectives of the audit. It’s crucial to gather background information about the product, its target audience, and any existing user feedback or data. This stage also involves selecting the appropriate methods and tools for the audit, such as heuristic evaluation, usability testing, and analytics review.
  • Heuristic Evaluation: Usability experts review the product against established heuristics to identify obvious usability issues.
  • Usability Testing: Real users are observed as they interact with the product, performing specific tasks to identify pain points and areas of confusion.
  • User Research: Surveys, interviews, and focus groups are conducted to gather qualitative data on user needs, behaviors, and preferences.
  • Analytics Review: Web analytics data is analyzed to understand user behavior patterns and identify areas where users may encounter difficulties.
  • Analysis and Synthesis: Data from the different sources (heuristic evaluation, usability testing, user research, and analytics review) is aggregated and synthesized; common pain points, usability issues, and areas where users struggle are identified; and user personas and journey maps are created to visualize user interactions and experiences. A small sketch of how pooled findings can be prioritized follows this list.
  • Reporting and Recommendations: The audit concludes with design improvements to address identified usability issues, enhancements to navigation, content, and visual design, suggestions for improving user workflows and interactions, and strategies for ongoing user testing and feedback collection to ensure continuous improvement.
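As flagged in the analysis-and-synthesis step above, findings from different methods have to be pooled and prioritized before they become recommendations. The sketch below is a hypothetical illustration: the issue list, the 1-to-4 severity scale, and the "worst severity times number of corroborating sources" scoring rule are assumptions chosen for clarity, not a standard scheme.

```python
# Hypothetical findings pooled from several audit methods, then ranked.
findings = [
    {"issue": "Checkout requires account creation", "source": "heuristic", "severity": 3},
    {"issue": "Checkout requires account creation", "source": "usability", "severity": 4},
    {"issue": "Bill payment hard to find",          "source": "usability", "severity": 4},
    {"issue": "Low-contrast form labels",           "source": "heuristic", "severity": 2},
    {"issue": "Bill payment hard to find",          "source": "analytics", "severity": 3},
]

# Merge duplicate issues reported by different methods.
merged = {}
for f in findings:
    entry = merged.setdefault(f["issue"], {"sources": set(), "severities": []})
    entry["sources"].add(f["source"])
    entry["severities"].append(f["severity"])

def priority(entry):
    # Arbitrary illustrative rule: worst severity, weighted by corroboration.
    return max(entry["severities"]) * len(entry["sources"])

for issue, entry in sorted(merged.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{priority(entry):>2}  {issue}  "
          f"(severity {max(entry['severities'])}, "
          f"sources: {', '.join(sorted(entry['sources']))})")
```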

Types of UX Audits with Examples

UX audits can be categorized based on their focus and methodologies. Here are four common types of UX audits, each with examples to illustrate their applications:

  • Heuristic evaluation audit. Focus area: This type of audit uses established usability principles, or heuristics, to evaluate a product’s user interface. Example: A heuristic evaluation of an e-commerce website might reveal that the checkout process is overly complex, causing users to abandon their carts. By simplifying the process and providing clearer instructions, the website can reduce cart abandonment rates and improve conversion rates.
  • Usability testing audit. Focus area: This audit involves observing real users as they interact with the product. Users are asked to complete specific tasks while their behaviors and feedback are recorded. The goal is to identify pain points and usability issues from a user’s perspective. Example: A usability testing audit for a mobile banking app might reveal that users struggle to find the bill payment feature. By reorganizing the app’s navigation and making the bill payment option more prominent, the bank can enhance user satisfaction and reduce support inquiries.
  • Analytics-based audit. Focus area: This audit relies on quantitative data from web analytics tools to understand user behavior patterns. Metrics such as bounce rates, session duration, and click paths are analyzed to identify areas where users experience friction. Example: An analytics-based audit of a news website might show that users frequently leave the site after viewing a single article. By analyzing the data, the site can identify which articles fail to engage users and implement strategies to improve content relevance and retention, such as recommending related articles or improving headline quality. A toy version of this analysis is sketched after this list.
  • Content audit. Focus area: This type of audit examines the quality, consistency, and effectiveness of the product’s content. It involves evaluating text, images, videos, and other media to ensure they meet user needs and support the overall user experience. Example: A content audit for a corporate website might uncover that the language used in the product descriptions is too technical for the average user. By simplifying the language and adding more explanatory visuals, the company can make its products more accessible and appealing to a broader audience.
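To make the analytics-based audit example above concrete, the sketch below flags the articles on a hypothetical news site whose sessions most often end after a single page. The records, field names, and sample values are invented for illustration.

```python
# Hypothetical per-session reading records: (session_id, article_slug,
# pages_viewed_in_session). Articles with many single-page sessions are
# candidates for content or engagement improvements.
from collections import defaultdict

visits = [
    ("s1", "local-election-results", 1),
    ("s2", "local-election-results", 1),
    ("s3", "local-election-results", 3),
    ("s4", "transit-budget-explainer", 4),
    ("s5", "transit-budget-explainer", 2),
    ("s6", "weather-alert", 1),
    ("s7", "weather-alert", 1),
]

stats = defaultdict(lambda: {"sessions": 0, "single_page": 0})
for _, article, pages in visits:
    stats[article]["sessions"] += 1
    if pages == 1:
        stats[article]["single_page"] += 1

for article, s in sorted(stats.items(),
                         key=lambda kv: kv[1]["single_page"] / kv[1]["sessions"],
                         reverse=True):
    rate = s["single_page"] / s["sessions"]
    print(f"{article}: {rate:.0%} of {s['sessions']} sessions ended after one page")
```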

Best Practices for UX Audit in 2024

Conducting a UX audit effectively in 2024 involves adopting best practices that reflect current trends, technologies, and user expectations. Here are some key best practices:

  • Utilize AI and Machine Learning: Employ AI tools and machine learning algorithms to analyze large datasets, uncover patterns, and predict user behavior. These technologies can help identify subtle usability issues and provide deeper insights into user interactions.
  • Behavioral Analytics: Go beyond traditional metrics by using tools that track detailed user behavior, such as mouse movements, scroll depth, and heatmaps. This provides a granular view of how users engage with the product.
  • Accessibility Audits: Ensure your product meets the latest accessibility standards (such as WCAG 2.1). Conduct audits that specifically focus on accessibility issues, using tools that check for compliance and involving users with disabilities in testing.
  • Inclusive Design: Design for a diverse user base, considering factors like age, ability, and cultural background. This involves creating personas that represent a wide range of users and testing the product with these diverse groups.
  • Mobile Usability Testing: With mobile usage continuing to rise, prioritize mobile usability testing. Ensure that the mobile experience is seamless and that the design is responsive across various devices and screen sizes.
  • Performance Optimization: Mobile users expect fast and smooth experiences. Conduct performance audits covering attributes such as load times and responsiveness; a rough load-time spot check is sketched after this list.
  • Regular Audits: Make UX auditing a continuous process rather than a one-time activity. Regularly scheduled audits help keep the product aligned with evolving user needs and technological advancements.
  • Agile Methodologies: Incorporate UX audits into agile development cycles. Conduct mini-audits at the end of each sprint to ensure ongoing improvements and quick iterations based on user feedback.
  • User Feedback Integration: Actively gather and integrate user feedback throughout the audit process using methods such as interviews, surveys, open-ended discussions, and focus groups.
  • Data-Driven Decision Making: Base your recommendations on data and evidence. Combine qualitative insights from user testing with quantitative data from analytics to make informed decisions about design improvements.
  • Involve Stakeholders: Engage various stakeholders, including designers, developers, marketers, and customer support teams, in the audit process. This ensures that diverse perspectives are considered and that solutions are feasible and aligned with business goals.
  • Clear Communication: Clearly communicate findings and recommendations. Use visual aids like charts, graphs, and user journey maps to convey insights effectively to all team members.
  • Emotional Impact: Evaluate the emotional response of users to the product. Consider how design elements such as color, typography, and imagery affect user emotions and overall experience.
  • Delight and Engagement: Aim to create moments of delight that enhance user engagement. Small design details, animations, and personalized touches can significantly improve the user experience.
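For the performance-optimization practice above, dedicated tooling such as Lighthouse or real-user monitoring data does the real measurement work, but a quick scripted spot check is sometimes useful. The sketch below is a rough, assumption-laden example: it times complete HTTP responses with Python's standard library, which approximates network and server latency rather than on-device rendering, and the URL is a placeholder.

```python
# Rough load-time spot check; not a substitute for real performance tooling.
import time
import urllib.request

URL = "https://example.com/"  # placeholder: replace with the page under audit

samples = []
for _ in range(3):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()  # time the full response body, not just the headers
    samples.append(time.perf_counter() - start)

median_ms = sorted(samples)[1] * 1000  # median of three samples
print(f"{URL}: median of {len(samples)} fetches = {median_ms:.0f} ms")
```

Checks like this are only a starting point; trend data from field measurements over time is what a recurring audit should rely on.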


