Abstracting and Indexing

  • PubMed NLM
  • Google Scholar
  • Semantic Scholar
  • Scilit
  • CrossRef
  • WorldCat
  • ResearchGate
  • Academic Keys
  • DRJI
  • Microsoft Academic
  • Academia.edu
  • OpenAIRE
  • Scribd
  • Baidu Scholar

Strengths and Limitations of Using ChatGPT in OSCE: A Preliminary Examination of Generative AI in Medical Education

Author(s): Charlotte A. Taylor-Drigo, Anshul Kumar

Introduction: Objective Structured Clinical Examinations (OSCEs) are essential components of medical education, designed to assess clinical competence through structured tasks such as history-taking, physical examinations, and patient communication across multiple stations. Examiners utilize standardized rubrics to ensure fairness and objectivity in evaluation. The COVID-19 pandemic accelerated the use of technology in OSCEs, with virtual platforms introduced to maintain assessments while observing safety protocols. These changes highlighted the need for innovative, interactive, and realistic simulations. Artificial intelligence (AI) tools such as ChatGPT offer promising opportunities in this context. With advanced conversational abilities, ChatGPT can replicate patient interactions and provide immediate feedback, fostering active learning, cognitive engagement, and experiential skill development. Grounded in established educational frameworks, ChatGPT represents a novel strategy to augment OSCEs by strengthening history-taking training and enhancing the assessment of clinical competence.

Method: A pilot study was conducted with 20 faculty members responsible for designing and evaluating OSCE scenarios. Participants engaged with ChatGPT in three simulated cases structured to resemble traditional OSCE encounters. Following the sessions, participants completed a survey via Qualtrics to evaluate ChatGPT’s usability and effectiveness in supporting history-taking exercises.

Results: Faculty valued ChatGPT’s ability to serve as a consistent, responsive simulated patient, noting its role in improving clinical reasoning while minimizing intimidation. Limitations included the absence of non-verbal communication, limited empathy, and the inability to perform physical examinations. Technical inconsistencies also posed challenges. While 20% of participants expressed interest in future integration, most favored a hybrid model combining AI with standardized patients to balance realism with experiential learning.

Conclusion: Integrating ChatGPT into OSCEs provides an innovative approach to medical education, with the potential to enrich assessment accuracy and enhance student preparedness for real-world clinical practice.

Journal Statistics

Impact Factor: 6.124

Acceptance Rate: 76.33%

Time to first decision: 10.4 days

Time from submission to acceptance: 2-3 weeks


© 2016-2025 Fortune Journals. All Rights Reserved.