Outcome-based education: evaluation, implementation and faculty development

Shazia Iqbal 1, 2, Turky H. Almigbal 3, Abdullah Aldahmash, Samer Rastam

1 Alfarabi College of Medicine, Riyadh, KSA

2 University of Liverpool, UK

3 King Saud University, Riyadh, KSA

This article was migrated. The article was marked as recommended.

Developments in Outcome-based education (OBE) and innovative shifts in its pedagogical approaches have reshaped the learning environment of medical school curricula. This instructional design has gained popularity due to its authenticity and systematic approach. However, OBE needs organized supervision and faculty training in order to achieve the desired goals of a program.

Aim: This article examines the evaluation of OBE at a private medical school in Saudi Arabia. It describes the curriculum review process and the characteristics of the curriculum reviewers involved, and it evaluates the curriculum using Harden’s OBE implementation inventory. Reviewers’ satisfaction with OBE implementation was assessed against the inventory.

Results: This analysis shows our institutional profile to be similar to the ‘transition to the Beaver’ stage in Harden’s representation. At the program level, the study identifies gaps and suggests suitable recommendations to enhance the enactment of OBE.

Conclusion: We strongly encourage medical educators to apply the nine components of the OBE implementation inventory to evaluate their level of implementation of OBE. To further build up this model, the authors propose a mnemonic “ADAPTIVE Species” as an instructional prompt to develop these qualities in medical faculty. “ADAPTIVE Species” stands for Assertive, Developer, Assessors, Prime-movers, Transparent, Innovators, Vigilant, Evaluators, and Selectors.

Introduction

Developments in Outcome-based education (OBE) and innovative shifts in its pedagogical approaches have reshaped the learning environment of medical school curricula. This instructional design has gained popularity due to its authenticity and systematic approach ( Rubin and Franchi-Christopher, 2002 ). This evolving paradigm and evidence-based medical practices in the health care system are provoking a continuous review of curricula and specific learning outcomes. However, OBE implementation needs organized supervision and faculty training in order to achieve the desired goals of the program ( Gruppen, 2012 ).

The OBE approach involves not only a set of specific learning outcomes but also demands successful implementation ( Harden, 2007a , 2009). Effective evaluation requires a set of parameters that serve as a guide to assess the degree of OBE curriculum implementation. There is also great concern to ensure that faculty recognize the importance of OBE and are sufficiently skilled to apply it effectively.

In medical education, there are different evaluation tools to gauge the implementation of OBE: for instance, course report appraisals, self-study evaluations, annual program reports, student surveys, and external and internal reviews. Additionally, there are models that help medical educators map the curriculum and ascertain the progress of OBE implementation throughout the program in terms of depth, scope, value, and proficiency ( Harden, 2007a , 2007b ).

There is an established instrument, the ‘Outcome-based Education implementation inventory’, introduced in 2007 by Professor Harden at the University of Dundee ( Harden, 2007b ). The nine components of this model are sufficiently comprehensive and reliable to scrutinize the extent of OBE implementation and to guide areas of reform by detecting gaps in the curriculum.

It is important to carry out studies that investigate OBE implementation in diverse learning environments and educational cultures. Few studies explore the role of faculty, or of faculty development programs that train faculty, in OBE evaluation and implementation. Additionally, there is a lack of evidence on which to build faculty development strategies by identifying the gaps in OBE ( Steinert et al. , 2016 ).

This article examines the application of OBE at a private medical school in Saudi Arabia. It highlights a pragmatic approach to applying the SPICES model in the undergraduate medical curriculum; SPICES is an acronym for the student-centered, problem-based, integrated, community-based, elective and systematic approaches. It describes the curriculum review process and the characteristics of the curriculum reviewers involved. In addition, it evaluates the curriculum using Harden’s Outcome-based Education implementation inventory ( Harden, 2007b ).

At the program level, the study explores the extent of application of OBE in the MBBS undergraduate curriculum, identifies gaps and suggests suitable recommendations to enhance the enactment of OBE. Furthermore, it assists with faculty development strategies by proposing a model in the form of the mnemonic “ADAPTIVE Species” to be utilized for faculty development in order to enhance OBE. These suggestions can be generalized to similar OBE programs and educational cultures to support OBE implementation in comparable contexts.

Implementation of Outcome-based education

In Saudi Arabia, Alfarabi College of Medicine in Riyadh provides a Bachelor’s Degree in Medicine and Surgery (MBBS). The program utilizes OBE and its design is based on the SPICES model ( Harden, Sowden and Dunn, 1984 ). This conveys a set of core knowledge, skills, and behaviors that medical graduates are expected to achieve through the specific learning outcomes. Significant progress has been made in implementing the OBE and SPICES models at Alfarabi College of Medicine, and program learning outcomes have been specified in all courses of the MBBS curriculum.

The curriculum integrates basic and clinical sciences and focuses on acquiring knowledge through a problem-based, student-centered learning approach ( Neville, 2009 ). With this approach, undergraduates develop the ability to identify and address societal and community issues in the healthcare system ( Hmelo-Silver, 2004 ). In addition, this framework brings medical students closer to patients as early as possible in the MBBS program ( Preeti, Ashish and Shriram, 2013 ).

The curriculum is reviewed externally by the Centre for Medical Education (CenMEDIC) committee, and its framework is considered similar to the CanMEDS framework established by the Royal College of Physicians and Surgeons of Canada in 1996 ( Shadid et al., 2019 ). It is also based on the World Federation for Medical Education (WFME) global standards for basic medical education. In short, this structure provides students with a strong foundation, which is essential for competent physicians to work in their own diverse cultures and follow regional laws ( Cheng et al., 2014 ). There is a need to ensure that the implementation of this curriculum meets international standards without losing the local perspective.

The curriculum management committee

The Curriculum Management Committee (CMC) is responsible for the development and reforms of the MBBS curriculum. During regular CMC meetings, the Medical Education Unit aims to make the learning outcomes explicit and to emphasize the use of the specific learning outcomes as a basis for decisions about curriculum reforms. The Medical Education Unit oversees content mapping and alignment of the learning outcomes with the teaching strategies and assessment. It also carries out regular appraisals of course specifications to build and promote those strategies that can cultivate an exciting and engaging learning environment for medical students.

The Medical Education Unit also provides short courses, workshops and 1:1 support for faculty development. A systematic review has shown that this type of support is important for improving teaching effectiveness in medical education. In the past, most of the focus has been on support for individuals, but it is reported that there is also a need for faculty development that supports change across teaching teams and programs ( Steinert et al. , 2016 ).

Alfarabi’s curriculum has been reviewed in a series of six cycles (one per semester) over the last three years. Course reports and content experts’ recommendations were the main drivers for aligning the pedagogical strategies, assessment, and learning outcomes. Focusing on these elements of curriculum alignment aimed to ensure the implementation of the curriculum.

The Outcome-based Education implementation inventory

In the Outcome-based Education implementation inventory there are three types of institutions/groups of educators, named ‘the Ostrich’, ‘the Peacock’ and ‘the Beaver’ ( Harden, 2007b ). The author eloquently describes the characteristics of each group and its approach to OBE.

The Ostriches are a group of medical educators who do not believe in the use of learning outcomes. This group neither favors the use of learning outcomes in the curriculum nor uses them in their teaching. This attitude is a real risk for the sustainability of programs in the current medical education era, which strongly supports and recognizes the values of OBE. The approach of this cluster of instructors is unlikely to survive in the future.

On the other hand, the Peacocks are those faculty who agree to put learning outcomes on paper and proclaim the value of OBE. In practice, however, they fail to implement and apply OBE, so the OBE actually applied does not match the learning outcomes displayed on paper. They are merely showing off for visitors or external reviewers while pretending to be committed to the task.

Finally, there are the Beavers, who are not only strong advocates for setting learning outcomes in the planned curriculum but are also efficient and dedicated in implementation. They aim to work effectively, and their efforts are clearly reflected in the impact on curriculum reforms. Their values, beliefs, and ability to work as catalysts in educational environments can promote and reshape medical schools. Undoubtedly, future survival and success in the competitive, high-stakes healthcare education environment belongs to them ( Harden, 2007b ).

In order to assess the level of adoption of OBE in the Alfarabi curriculum, we applied the Outcome-based Education implementation inventory. A survey, comprising the nine components of the OBE implementation inventory profile, was designed as a Google Form and disseminated by email to the CMC members, including the external reviewers.

These CMC members were not only content experts but also context specialists for curriculum review. They and the external reviewers all have more than 10 years of teaching experience, and the CMC members hold key positions at the medical school as course directors/course organizers. They were regularly involved in the review process and have a wide range of experience, including teaching, assessment, quality assurance, and medical education.

Thirty participants were sent the Google Form survey, designed on a five-point Likert scale against the nine components of the OBE implementation inventory. Participants were asked to rate their satisfaction with the achievement of each component of the inventory.

They were instructed to base their satisfaction ratings on their three years of involvement in the MBBS curriculum review at Alfarabi College of Medicine. Of the 30 participants, 23 responded and completed the survey. The responses for each factor were used to draw the OBE implementation profile of our institution ( Harden, 2007b ). Each component was accompanied by a brief description to clarify its meaning for participants.

The following are the components of the implementation inventory:

  • 1. Statements of learning outcomes (the extent to which there is a clear statement of the learning outcomes in courses)
  • 2. Communication with staff/students about the learning outcomes (the extent to which staff and students in an institution are made aware of the existence of an outcome statement and are familiar with it)
  • 3. The educational strategies adopted (the choice and use of teaching methods, including lectures, small group work, blended learning, and independent study, should reflect the learning outcomes)
  • 4. The learning opportunities available (the use of new learning technologies: simulators, skills labs, technology-enhanced learning, high-fidelity simulators, and audience response/polling systems in lectures)
  • 5. The course content (consideration of the learning outcomes and the danger of information gaps or overload and curriculum congestion during content mapping)
  • 6. Student progression through the course (learning outcomes are usually expressed as the competencies expected to be gained by the end of an education program)
  • 7. Assessment of students (the summative and formative assessment methods adopted must reflect the agreed learning outcomes and inform decisions as to whether a student has or has not achieved the stated outcomes)
  • 8. The educational environment (the learning outcomes should inform what is seen as a desirable learning environment; for example, if the ability to work as a member of a team is a learning outcome, an educational environment that supports collaborative working is more appropriate than the more typical environment where competition is rewarded)
  • 9. Student selection (in Outcome-based education, admission decisions are based on the level of achievement expected of students prior to entry to medical studies in each of the outcome domains, such as communication skills, decision making, attitudes, ethics, and practical skills)
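To make the scoring concrete, the following is a minimal sketch, not the authors’ actual analysis script, of how the five-point Likert responses for these nine components could be tallied. The component labels follow the inventory above; the function name and the example ratings are invented purely for illustration.

```python
# Sketch only: tally Likert responses per OBE inventory component.
from collections import Counter

COMPONENTS = [
    "Statements of learning outcomes",
    "Communication with staff/students",
    "Educational strategies adopted",
    "Learning opportunities available",
    "Course content",
    "Student progression",
    "Assessment of students",
    "Educational environment",
    "Student selection",
]

def summarise(responses):
    """responses maps an inventory component to a list of Likert ratings
    (1 = strongly disagree ... 5 = strongly agree), one per respondent."""
    for component in COMPONENTS:
        ratings = responses.get(component)
        if not ratings:
            continue  # no data collected for this component
        mode, n_mode = Counter(ratings).most_common(1)[0]
        pct = 100 * n_mode / len(ratings)
        print(f"{component}: modal score {mode} (n = {n_mode}, {pct:.1f}%)")

# Fabricated example for one component (the study had 23 respondents):
summarise({"Statements of learning outcomes": [5, 5, 4, 5, 3, 4, 5]})
```

Repeating this tally for each component, with the modal score and its frequency, yields the kind of per-component summary reported in the results below.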

Results/Analysis

Assessing OBE using the implementation inventory facilitates evaluation of both the operational and the planned curriculum. It not only helps to enhance the curriculum but also helps to transform pedagogical schemes so as to align the taught content with the exit program learning outcomes.

Almost half of the reviewers (n = 11 of 23; 47.8%) strongly agreed (score 5) that we have clear and explicit program learning outcomes matching the course learning outcomes distributed over the six years of the MBBS program, as shown in Appendix Figure 3 (a-i) . Nearly half of the reviewers (n = 10; 43.5%) agreed (score 4) that program learning outcomes had been communicated clearly to students.

Regarding the course content, student progression, assessment methods, and educational environment, the most frequent response in each case was ‘agree’ (score 4), at n = 11 (47.8%), n = 15 (65.2%), n = 13 (56.5%), and n = 11 (47.8%) respectively.

Almost half of the reviewers (n = 11; 47.8%) strongly agreed (score 5) with the educational strategies adopted and learning opportunities components. For student selection, a low number of participants (n = 5; 21.7%) strongly agreed; the most common responses were neutral (score 3; n = 6, 26.7%) and agree (score 4; n = 6, 26.7%).
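As a reading aid for the profile interpretations that follow, here is a purely illustrative sketch of how a mean component score on the five-point scale might be positioned against Harden’s three profiles. Neither this article nor Harden (2007b) specifies numeric cut-offs, so the bands below are hypothetical, chosen only to make the idea of a “transition to the Beaver” position concrete.

```python
# Hypothetical bands only; Harden (2007b) describes the profiles
# qualitatively and gives no numeric thresholds.
def profile_band(mean_score):
    """Map a mean component score (1-5 Likert) to an illustrative band."""
    if mean_score < 2.5:
        return "Ostrich: outcomes neither valued nor used"
    if mean_score < 3.5:
        return "Peacock: outcomes on paper, weakly implemented"
    if mean_score < 4.5:
        return "Transition to the Beaver"
    return "Beaver: outcomes fully implemented"

# e.g. a component where 11 of 23 reviewers scored 5 and 12 scored 4:
print(profile_band((11 * 5 + 12 * 4) / 23))  # -> Transition to the Beaver
```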

Statements of learning outcomes

The level of agreement in this component demonstrated that the programme is achieving it to a reasonably satisfactory level. This was indicative of the Beaver profile in Harden’s OBE inventory, which is quite inspiring for the programme reviewers, as shown in Figure 1 .

[Figure 1]

Communication with staff/students

This analysis signified areas for improvement and suggested ways to bridge the gap by establishing more explicit and effective communication of the learning outcomes to students. The response of nearly half of the reviewers (n = 10; 43.5%; score 4) indicated that we are moving in the right direction but need to work hard in order to become ‘the Beaver’. We also need to inquire further into the reasons for this lack of communication, and we must involve the curriculum’s stakeholders (faculty, students, owners, policymakers, medical educators) to diminish this weakness. Any further neglect of this aspect would likely push the institution towards the Peacock profile, which would be alarming.

Educational strategies adopted and learning opportunities

As far as these two parameters were concerned, the results aligned with the Beaver profile. This satisfaction level encourages the organization and faculty to remain confident and progress towards the goal of the institution.

As there is always room for improvement, the authors suggest creating more exciting and engaging learning opportunities. This could be augmented by designing instructional methods that ensure interaction between tutors and students to provoke meaningful learning experiences. The use of novel learning software, practice with standardized patients, high-fidelity simulators supported by technology-enhanced learning, audience response systems (e.g., Poll Everywhere), and artificial intelligence can boost learners’ interest in the subject.

Course content, Student progression, Assessment, and Educational environment

Perhaps the most important aspect for our program to focus on is content mapping and monitoring students’ progress throughout the courses by introducing robust formative assessments. Overall, these scores from the implementation inventory profile indicated that we fell into the category of those struggling to become the Beaver, as shown in Figure 1 . This analysis placed our institutional profile close to the transition to the Beaver. Significantly, these findings supported the efforts to implement OBE and to continue working to establish the Beaver profile.

In addition, the authors suggest that formative feedback, formative assessments, and continuous motivation of students, faculty, and educators throughout the program can build a vigorous team that considerably boosts OBE implementation. Ultimately, this will promote an atmosphere that cultivates medical practitioners of the optimal level and allows institutions to flourish within the medical profession.

Student selection

Certainly, the score in this component was a true reflection of the prevailing situation and demanded most of the stakeholders’ attention. Our findings indicate a real need to improve the student selection criteria and align closely with “the peacock tail”. The authors suggest that student selection is a key area in which to develop robust selection criteria for admission to the MBBS program. These findings require further research into the reasons for the reviewers’ opinions. Identifying the failings can help institutions plan to close the gaps and modify the student selection criteria.

Finally, it is suggested that selection criteria must not be regarded only in terms of mark sheets/reports or summative assessments; it is equally crucial to match the selection criteria to the program learning outcomes. For instance, if we require that medical students demonstrate excellence in ethics, communication skills, and professionalism, then there could be a preliminary assessment of communication skills, professionalism, and ethical attributes at the point of entry to the program.

In this study, the researchers determined that in order to be true to the Beaver, we need to be as adaptive as the beaver and secure transformation in the educational ecosystem. Faculty and medical educators have to possess the beaver’s unique survival characteristics, such as fortitude, grit, and acceptance of change ( Fullan, 2011 ). Only then can we claim to be the actual Beaver, a symbol of determination, intuition, and diligence.

Besides, medical educators and institutions ought to instill the spirit of real beavers and implement the OBE curriculum reforms. Implementing innovations in OBE and enhancing the educational environment will assist institutions to flourish and strengthen their capability to keep pace with international standards of medical education.

Faculty development: “To be Beavers, be adaptive”

The authors of this article strongly encourage medical educators to apply the nine components of the OBE implementation inventory to evaluate their level of implementation of OBE. To further build up this model, the authors propose a mnemonic “ADAPTIVE Species” as an instructional prompt to develop these qualities in medical faculty as shown in Figure 2 .

[Figure 2]

It highlights that medical school stakeholders need to be Assertive in the planning and application of program learning outcomes. They must Develop the vision to enhance effective and efficient communication of program learning outcomes among students and faculty to bridge the communication gap ( Yardley, Irvine and Lefroy, 2013 ). As Assessors, they have to devise educational strategies which ensure constructive alignment in OBE curricula ( Davis et al. , 2007).

Furthermore, faculty members have to be determined and committed to cultivating favourable and exciting developments corresponding with OBE. They must serve as key Prime movers and motivators to enhance the learning environment and keep learners engaged in the process of learning ( Artino, Holmboe and Durning, 2012 ). While establishing constructive alignment and revisiting the course content, medical educators are encouraged to be Transparent and accountable. Medical institutions can support faculty development by providing opportunities for frequent training workshops and by creating a climate that facilitates open discussion and learning.

In short, as Innovators, medical schools ought to engage in those pedagogical transformations that can provide effective feedback and ensure students’ progress throughout the program ( Thomas et al ., 2016). At the same time, we need to be very Vigilant in assessment planning to ensure constructive alignment with learning outcomes and teaching methods, especially in clinical teaching ( Barrow, McKimm and Samarasekera, 2010 ).

As curriculum Evaluators, our approach must be pragmatic and logical to ensure a vigorous educational atmosphere, and we must propose strategies which underpin the idea of active learning ( Murdoch-Eaton and Whittle, 2012 ). When making decisions about the selection of potential medical students, as Selectors our approach should be holistic and realistic ( Donnon, Paolucci and Violato, 2007 ). When choosing students for entry into the MBBS program, consideration must be given to effective communication skills, professionalism, and ethical values, as these are core constituents of OBE ( Howe et al. , 2004 ).

OBE is demanding in current medical education and requires a vigorous evaluation strategy to assess its implementation. In order to evaluate the application of OBE and identify gaps, medical educators must design and follow evaluation guidelines. This article utilized the OBE implementation inventory as an evaluation tool and found it a highly effective instrument for assessing the application of OBE at the program level.

The representation of faculty satisfaction regarding the employment of OBE is based on the implementation inventory profile. The final shape depicts the operational state of OBE and helps us identify areas for improvement. It is promising and encouraging for the institution that we stand at “the transition to the Beaver” and foresee our institution soon becoming the Beaver. In order to reach that point, the suggested “ADAPTIVE Species” model provides a comprehensive approach to supporting the development of these characteristics in faculty, in order to augment the nine components of the OBE inventory.

To sum up, institutions must promote faculty development courses or workshops to train their faculty in the skills and attitudes needed to implement OBE. Beyond medical education, this proposed instructional mnemonic can be generalized to all programs in higher education that are grounded in an outcome-based curriculum.

Take Home Messages

  • Outcome-based education (OBE) is demanding in current medical education and requires a vigorous evaluation strategy.
  • OBE evaluation helps to identify gaps and areas for faculty development.
  • The OBE implementation inventory proposed by Prof. Harden can serve as an evaluation tool and is considered a highly effective instrument to assess the implementation of OBE.
  • There is a need for faculty development through workshops and refresher short courses to implement OBE.
  • Faculty developers must design and follow the guidelines of OBE evaluation tools.
  • The authors propose the mnemonic “ADAPTIVE Species” (as an instructional prompt) to develop the required qualities in faculty.

Notes On Contributors

This study was supported by the Research Unit, Alfarabi College of Medicine, Riyadh. The author is thankful to Dr. Ian Willis, Dr. Samer Rastam and Dr. Abdullah M. Aldahmash for reviewing this article, and to Dr. Turky H. Almigbal for being a coauthor of this article.

Dr. Shazia Iqbal is Director of Medical Education at Alfarabi College of Medicine, Riyadh, Saudi Arabia. She assists in the development and review of OBE/integrated curricula at medical institutions, with a special interest in pedagogical techniques and innovative educational technologies.

Dr. Ian Willis supervises internationalization theses on the University of Liverpool’s online Professional Doctorate in Higher Education (EdD) UK. He is a principal fellow of the Higher Education Academy UK (Advance HE) and formerly the Head of the Educational Development Division at UoL.

Dr. Turky H. Almigbal is an assistant professor in the Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh. He has a keen interest in curriculum development and reviews.

Dr. Abdullah M. Aldahmash is Dean of Alfarabi College of Medicine, Riyadh. He is founder and director of the Stem Cell Unit, King Saud University, Riyadh. He has a profound interest in faculty development and innovation in medical education.

Dr. Samer Rastam is supervisor of the Research Unit, Alfarabi College of Medicine, Riyadh. He has a keen interest in integrated curricula and content mapping, and is an expert in quantitative data analysis.

Acknowledgments

The authors are thankful to the faculty members of Alfarabi College of Medicine for their participation in the study, and to Dr Shahzad Ahmad, Dr Bushra Bano and Dr Arif Malik for peer review of this manuscript.

Figure 1 : Author is Dr Shazia Iqbal; the creator/owner of copyright.

Figure 2 : Author is Dr Shazia Iqbal; the creator/owner of copyright.

Figures 3(a-i) Appendices: Author is Dr Shazia Iqbal; the creator/owner of copyright.

Figure 3 (a-i) : Summary of Alfarabi Outcome-based Education (OBE) implementation inventory Score


[version 1; peer review: This article was migrated, the article was marked as recommended]

Declarations

The author has declared that there are no conflicts of interest.

Ethics Statement

This research was approved by the ethics committee of Alfarabi College of Medicine on 22.10.2019 (Reference: CMC#04101019). The study was conducted in accordance with the Declaration of Helsinki. All participants were asked for their consent to take part voluntarily in the study.

External Funding

This article has not received any external funding.

Bibliography/References

  • Artino A. R., Holmboe E. S. and Durning S. J. (2012) Can achievement emotions be used to better understand motivation, learning, and performance in medical education? Medical Teacher. 34(3), pp.240–244. https://doi.org/10.3109/0142159X.2012.643265
  • Barrow M., McKimm J. and Samarasekera D. (2010) Strategies for planning and designing medical curricula and clinical teaching. South-East Asian Journal of Medical Education. 4(1), pp.2–8.
  • Cheng A., Eppich W., Grant V., Sherbino J., et al. (2014) Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Medical Education. 48(7), pp.657–666. https://doi.org/10.1111/medu.12432
  • Davis M. H., Amin Z., Grande J. P., O’Neill A. E., et al. (2007) Case studies in outcome-based education. Medical Teacher. 29(7), pp.717–722. https://doi.org/10.1080/01421590701691429
  • Donnon T., Paolucci E. O. and Violato C. (2007) The predictive validity of the MCAT for medical school performance and medical board licensing examinations: a meta-analysis of the published research. Academic Medicine: Journal of the Association of American Medical Colleges. 82(1), pp.100–106. https://doi.org/10.1097/01.ACM.0000249878.25186.b7
  • Fullan M. (2011) The six secrets of change: What the best leaders do to help their organizations survive and thrive. San Francisco, Calif: Jossey-Bass.
  • Gruppen L. D. (2012) Outcome-based medical education: implications, opportunities, and challenges. Korean Journal of Medical Education. 24(4), pp.281–285. https://doi.org/10.3946/kjme.2012.24.4.281
  • Harden R. M. (2007a) Learning outcomes as a tool to assess progression. Medical Teacher. 29(7), pp.678–682. https://doi.org/10.1080/01421590701729955
  • Harden R. M. (2007b) Outcome-based education--the ostrich, the peacock and the beaver. Medical Teacher. 29(7), pp.666–671. https://doi.org/10.1080/01421590701729948
  • Harden R. M. (2009) Outcome-Based Education: the future is today. Medical Teacher. 29(7), pp.625–629. https://doi.org/10.1080/01421590701729930
  • Harden R. M., Sowden S. and Dunn W. R. (1984) Educational strategies in curriculum development: the SPICES model. Medical Education. 18(4), pp.284–297. https://doi.org/10.1111/j.1365-2923.1984.tb01024.x
  • Hmelo-Silver C. E. (2004) Problem-Based Learning: What and How Do Students Learn? Educational Psychology Review. 16(3), pp.235–266. https://doi.org/10.1023/B:EDPR.0000034022.16470.f3
  • Howe A., Campion P., Searle J. and Smith H. (2004) New perspectives--approaches to medical education at four new UK medical schools. BMJ (Clinical Research Ed.). 329(7461), pp.327–331. https://doi.org/10.1136/bmj.329.7461.327
  • Murdoch-Eaton D. and Whittle S. (2012) Generic skills in medical education: developing the tools for successful lifelong learning. Medical Education. 46(1), pp.120–128. https://doi.org/10.1111/j.1365-2923.2011.04065.x
  • Neville A. J. (2009) Problem-Based Learning and Medical Education Forty Years On. Medical Principles and Practice. 18(1), pp.1–9. https://doi.org/10.1159/000163038
  • Preeti B., Ashish A. and Shriram G. (2013) Problem-based learning (PBL)-an effective approach to improve learning outcomes in medical teaching. Journal of Clinical and Diagnostic Research: JCDR. 7(12), 2896. https://doi.org/10.7860/JCDR/2013/7339.3787
  • Rubin P. and Franchi-Christopher D. (2002) New edition of Tomorrow’s Doctors. Medical Teacher. 24(4), pp.368–369. https://doi.org/10.1080/0142159021000000816
  • Shadid A. M., Abdulrahman A. K. B., Dahmash A. B., Aldayel A. Y., et al. (2019) SaudiMEDs and CanMEDs frameworks: Similarities and differences. Advances in Medical Education and Practice. 10, 273. https://doi.org/10.2147/AMEP.S191705
  • Steinert Y., Mann K., Anderson B., Barnett B. M., et al. (2016) A systematic review of faculty development initiatives designed to enhance teaching effectiveness: A 10-year update: BEME Guide No. 40. Medical Teacher. 38(8), pp.769–786. https://doi.org/10.1080/0142159X.2016.1181851
  • Yardley S., Irvine A. W. and Lefroy J. (2013) Minding the gap between communication skills simulation and authentic experience. Medical Education. 47(5), pp.495–510. https://doi.org/10.1111/medu.12146

Reviewer response for version 1

Megan Anakin

1 University of Otago

This review has been migrated. The reviewer awarded 3 stars out of 5.

Thank you for inviting me to review your article, Shazia. I read your article with interest. My medical school is undertaking a curriculum review process, so I was curious to see how you evaluated the programme at your medical school.

The introduction describes outcomes-based education (OBE) and states the evaluation approach used in your study. To enhance understanding of your evaluation approach, please consider describing the nine components of the evaluation model and the mnemonic ‘ADAPTATION Species’ so the reader can better understand how you used it to evaluate your programme. The context of the study is well described and includes a description of the OBE programme, the curriculum management committee, and the OBE implementation inventory. To enhance the introduction for the reader, please consider moving the aim of your study to the end of this section and explaining the relevance of these last three study context components and how they relate to the aim of your study.

Please consider including the survey that was used and adding a paragraph to the methods section that explains how you analysed the data you collected from participants using the implementation inventory. Please consider moving the first two sentences of the results section to the beginning of the discussion section, because they are statements that interpret the results before the results of the study have been presented. Please consider providing demographic information about the participants/reviewers, such as gender, ethnicity, role in the medical programme, professional background/expertise, and length of time as a clinical educator. Please explain in the methods section how scores in the inventory profile were interpreted to correspond to the Beaver profile in Harden’s OBE inventory, and please consider reporting this as a result in the results section. This additional information and change to the article will help the reader appreciate the significance of the Beaver profile of staff at your university and the implications for curriculum review that readers can learn from if they have staff with Beaver profiles at their medical schools.

To enhance the presentation of the mnemonic ‘ADAPTATION Species’ in the discussion, the authors may wish to discuss how they might use and evaluate this approach with their staff in a future study. Missing from the discussion is a paragraph reflecting on the strengths and limitations of the study design, including a statement about how the evaluation model used allowed particular features of the curriculum to be studied and how the sample of participants had an impact on the interpretation of the quantitative results, including generalisability, representativeness, and applicability. For example, as a reader I am wondering about the views of other stakeholders such as students and patients. I would like to encourage the authors to submit a revised version of their article to strengthen their message to readers, because the authors have taken a productive approach to reviewing curriculum in their study.


Sateesh Babu Arja

1 Avalon University School of Medicine

This review has been migrated. The reviewer awarded 4 stars out of 5.

I read this article with much interest and thoroughly enjoyed reading this manuscript. I must commend the curriculum committee and Medical Education Unit for implementing outcome-based education and for providing faculty development activities. The framework designed for the inventory matching of the implementation of outcome-based education seems appropriate. However, as Professor Gibbs pointed out, the methods would have been more appropriate if the responses or results of internal and external reviewers had been separated. Thank you.

Trevor Gibbs

This review has been migrated. The reviewer awarded 3 stars out of 5.

I enjoyed reading this paper, although I feel that there could be a lot of improvement in the research methodology. It seems to me that the questions related to the evaluation of OBE were biased towards the positive, given that the evaluators were evaluating their own work. I recognise that the authors suggested that external reviewers were involved, so this situation could have been improved by having those reviewers’ evaluations taken out and discussed separately. I also feel that, in this piece of research, the users of the product, i.e. the students, would provide valuable feedback.

I enjoyed reading the use of ADAPTIVE approaches, which I feel reflects a very useful approach. Clearly the faculty and its curriculum committee have done a good job in developing this OBE curriculum, and if they had re-structured their research approach more effectively, then I would have been inclined to give it a higher star rating. It is well worthwhile running another evaluation exercise with a wider, more diverse, less-biased evaluation team.

P Ravi Shankar

1 American International Medical University

This review has been migrated. The reviewer awarded 4 stars out of 5.

The authors provide a detailed description of the outcomes-based curriculum at their university. They have evaluated the implementation of outcomes-based education (OBE) at the institution using the OBE implementation inventory. The ADAPTIVE Species mnemonic proposed is interesting and informative. The authors could provide more background information about the college, the educational program followed, the student selection procedure, and the characteristics of the admitted students. This would help readers make better sense of the study findings. Among the 23 respondents, it would be interesting to know how many were external reviewers, and whether there were any differences in perception between external reviewers and members of the CMC. Figure 2 is interesting and informative. One of the limitations could be that those surveyed were also involved in developing and monitoring the OBE curriculum. The key words are very long and could be shortened. The quality of written English is good; in certain places, however, I am of the opinion that some of the words used could be replaced by more appropriate ones. The article will be of broad interest to educators, especially those involved with outcomes- or competency-based curricula.

J Hartmark-Hill

1 University of Arizona College of Medicine-Phoenix

This review has been migrated. The reviewer awarded 1 star out of 5.

As medical education transitions to outcomes-based and competency-based education, the need for expanded, evidence-based program evaluation measures and iterative faculty development is an area of importance to curricular leaders. The authors provide adequate information in the background to understand the need for this research and subsequent study. However, the research question(s)/hypotheses are not clearly developed, and the application of an attitudinal survey of faculty toward the efficacy of various curricular elements does not address the original premises.

While presented as experienced content experts, participants in the survey were course directors and others who had been responsible for creation of the curriculum. Therefore, their own satisfaction with it is subject to bias and blind spots. One example of a more helpful approach would have been a knowledge-based survey regarding levels of implementation (and measures of success) across the curriculum, with comparison to the actual status of such. A disconnect between the two could support a case for the need for further faculty development. No demographic data or methodology for statistical analysis of groups was provided. Only 30 faculty were surveyed at a single point in time, and of those, less than 80% responded, likely further skewing results.

While the conceptual model presented is of interest for further study, conclusions cannot be made regarding the data as supporting programmatic success, the need for specific faculty development, or generalizability to other institutions.

Felix Silwimba

1 University of Lusaka

This review has been migrated. The reviewer awarded 5 stars out of 5.

This is a clearly written, easy-to-follow and motivating study report. I will copy the idea of the ADAPTIVE Species and introduce it in my medical school.

Shahzad Ahmad

1 Fatima Memorial medical and dental college Lahore Pakistan

This review has been migrated. The reviewer awarded 5 stars out of 5.

I enjoyed reading this paper with great interest. The article is well structured and clear, and the idea of evaluating the OBE curriculum is well justified. The authors’ recommendations are thought-provoking and prompt medical educators to bring further ideas for OBE curriculum evaluation. At the institutional level, medical education demands a structured framework for faculty development, and this manuscript has highlighted the essential components and faculty characteristics required in an operational OBE curriculum. In particular, the idea of “ADAPTIVE Species” is well fitted to the current situation, where institutions are moving towards innovative approaches to learning and teaching by maximising the use of technology. I strongly recommend applying this method of evaluating OBE curricula at institutions.


Osakwe, I., Chen, G., Whitelock-Wainwright, A., Gašević, D., Pinheiro Cavalcanti, A., & Ferreira Mello, R. (2022). Towards automated content analysis of educational feedback: A multi-language study. Computers & Education: Artificial Intelligence, 3 , 100059. https://doi.org/10.1016/j.caeai.2022.100059

Osawa, K. (2023). Integrating automated written corrective feedback into e-portfolios for second language writing: Notion and notion AI. RELC Journal, 00336882231198913. https://doi.org/10.1177/00336882231198913

Parikh, A., McReelis, K., & Hodges, B. (2001). Student feedback in problem based learning: A survey of 103 final year students across five Ontario medical schools. Medical Education, 35 (7), 632–636. https://doi.org/10.1046/j.1365-2923.2001.00994.x

Parr, J. M., & Timperley, H. S. (2010). Feedback to writing, assessment for teaching and learning and student progress. Assessing Writing, 15 (2), 68–85. https://doi.org/10.1016/j.asw.2010.05.004

Pennebaker, J. W., Boyd, R. L., Booth, R. J., Ashokkumar, A., & Francis, M. E. (2022). Linguistic Inquiry and Word Count: LIWC-22 . Pennebaker Conglomerates. https://www.liwc.app

Poulos, A., & Mahony, M. J. (2008). Effectiveness of feedback: The students’ perspective. Assessment & Evaluation in Higher Education, 33 (2), 143–154. https://doi.org/10.1080/02602930601127869

Ramesh, D., & Sanampudi, S. K. (2022). An automated essay scoring systems: A systematic literature review. Artificial Intelligence Review, 55 (3), 2495–2527. https://doi.org/10.1007/s10462-021-10068-2

Ryan, T., Henderson, M., Ryan, K., & Kennedy, G. (2021). Designing learner-centred text-based feedback: A rapid review and qualitative synthesis. Assessment & Evaluation in Higher Education, 46 (6), 894–912. https://doi.org/10.1080/02602938.2020.1828819

Ryan, T., Henderson, M., Ryan, K., & Kennedy, G. (2023). Identifying the components of effective learner-centred feedback information. Teaching in Higher Education, 28 (7), 1565–1582. https://doi.org/10.1080/13562517.2021.1913723

Sanosi, A. B. (2022). The impact of automated written corrective feedback on EFL learners' academic writing accuracy. Journal of Teaching English for Specific and Academic Purposes, 301 . https://doi.org/10.22190/JTESAP2202301S

Schirmer, B. R., & Bailey, J. (2000). Writing assessment rubric: An instructional approach with struggling writers. Teaching Exceptional Children, 33 (1), 52–58. https://doi.org/10.1177/004005990003300110

Sein, M. (2022). AI-assisted knowledge assessment techniques for adaptive learning environments. Computers and Education: Artificial Intelligence, 3 , 100050. https://doi.org/10.1016/j.caeai.2022.100050

Shute, V. J. (2008). Focus on formative feedback. Review of Educational Research, 78 (1), 153–189. https://doi.org/10.3102/0034654307313795

Steiss, J., Tate, T., Graham, S., Cruz, J., Hebert, M., Wang, J., Moon, Y., Tseng, W., Warschauer, M., & Olson, C. B. (2024). Comparing the quality of human and ChatGPT feedback of students’ writing. Learning and Instruction, 91 , 101894. https://doi.org/10.1016/j.learninstruc.2024.101894

Stern, L. A., & Solomon, A. (2006). Effective faculty feedback: The road less traveled. Assessing Writing, 11 (1), 22–41. https://doi.org/10.1016/j.asw.2005.12.001

Tian, L., & Zhou, Y. (2020). Learner engagement with automated feedback, peer feedback and teacher feedback in an online EFL writing context. System, 91 , 102247. https://doi.org/10.1016/j.system.2020.102247

Troia, G. A. (2006). Writing instruction for students with learning disabilities. In C. A. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 324–336). Guilford.

Vasilyeva, E., Puuronen, S., Pechenizkiy, M., & Rasanen, P. (2007). Feedback adaptation in web-based learning systems. International Journal of Continuing Engineering Education and Life-Long Learning, 17 (4/5), 337. https://doi.org/10.1504/IJCEELL.2007.015046

Wang, X., Lee, Y., & Park, J. (2022). Automated evaluation for student argumentative writing: A survey. arXiv preprint arXiv:2205.04083 . https://doi.org/10.48550/ARXIV.2205.04083

Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66 (2), 98–112. https://doi.org/10.1016/j.ijhcs.2007.09.003

Wang, Z., & Han, F. (2022). The effects of teacher feedback and automated feedback on cognitive and psychological aspects of foreign language writing: A mixed-methods research. Frontiers in Psychology, 13 , 909802. https://doi.org/10.3389/fpsyg.2022.909802

Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31 (3), 379–394. https://doi.org/10.1080/02602930500353061

Wei, P., Wang, X., & Dong, H. (2023). The impact of automated writing evaluation on second language writing skills of Chinese EFL learners: A randomized controlled trial. Frontiers in Psychology, 14 , 1249991. https://doi.org/10.3389/fpsyg.2023.1249991

Woo, D. J., Susanto, H., Yeung, C. H., Guo, K., & Fung, A. K. Y. (2023). Exploring AI-generated text in student writing: How does AI help? arXiv preprint arXiv:2304.02478 .

Wu, H., Wang, W., Wan, Y., Jiao, W., & Lyu, M. (2023). Chatgpt or Grammarly? Evaluating ChatGPT on grammatical error correction benchmark. arXiv preprint arXiv:2303.13648

Yoon, S. Y., Miszoglad, E., & Pierce, L. R. (2023). Evaluation of ChatGPT feedback on ELL writers' coherence and cohesion. arXiv preprint arXiv:2310.06505 .

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education where are the educators? International Journal of Educational Technology in Higher Education, 16 (1), 39. https://doi.org/10.1186/s41239-019-0171-0

Zhang, P., & Tur, G. (2023). A systematic review of ChatGPT use in K‐12 education. European Journal of Education , 1–22. https://doi.org/10.1111/ejed.12599

Zhong, Q., Ding, L., Liu, J., Du, B., & Tao, D. (2023). Can ChatGPT understand too? a comparative study on ChatGPT and fine-tuned Bert. arXiv preprint arXiv:2302.10198 .

