Peer Reviewed

Yes

Document Type

Article

Publication Date

26-5-2017

Keywords

Medical education, simulation technology, competency assessment, generalizability theory

Comments

The original article is available at https://www.dovepress.com

Abstract

Experience with simulated patients supports undergraduate learning of medical consultation skills, and adaptive simulations are now being introduced into this environment. The authors investigate whether adaptive simulations can underpin valid and reliable assessment by conducting a generalizability analysis of data analytics captured from medical students' interactions with adaptive simulations in psychiatry, exploring the feasibility of adaptive simulations for supporting automated learning and assessment. The generalizability (G) study focused on two clinically relevant variables: clinical decision points and communication skills. While the G study of the communication skills score yielded low levels of true score variance, the decision-point scores, which reflect clinical decision-making and confirm user knowledge of the Calgary-Cambridge model of consultation, produced reliability levels similar to those expected with rater-based scoring. The findings indicate that adaptive simulations have potential as a teaching and assessment tool for medical consultations.
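
For readers unfamiliar with generalizability theory, the sketch below illustrates the kind of variance decomposition a G study involves: a one-facet, fully crossed persons x items design, with "true score" (person) variance separated from residual variance and combined into a generalizability coefficient. This is a minimal illustration on simulated data, not the authors' actual analysis; the function name, design, and numbers are assumptions for demonstration only.

# Minimal one-facet G-study sketch: persons (students) crossed with items
# (e.g., decision points). Hypothetical data; not the study's dataset.
import numpy as np

def g_study(scores):
    """Variance components and G coefficient for an n_p x n_i score matrix."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    item_means = scores.mean(axis=0)

    # Sums of squares for a two-way ANOVA without replication
    ss_p = n_i * ((person_means - grand) ** 2).sum()
    ss_i = n_p * ((item_means - grand) ** 2).sum()
    ss_pi = ((scores - grand) ** 2).sum() - ss_p - ss_i  # interaction + error

    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_pi = ss_pi / ((n_p - 1) * (n_i - 1))

    # Expected-mean-square solutions for the variance components
    var_pi = ms_pi                              # interaction confounded with error
    var_p = max((ms_p - ms_pi) / n_i, 0.0)      # "true score" (person) variance
    var_i = max((ms_i - ms_pi) / n_p, 0.0)

    # G coefficient for relative decisions averaged over n_i items
    g_coef = var_p / (var_p + var_pi / n_i)
    return var_p, var_i, var_pi, g_coef

# Example: 30 simulated students scored on 10 decision points
rng = np.random.default_rng(0)
ability = rng.normal(0, 1, size=(30, 1))
scores = ability + rng.normal(0, 1, size=(30, 10))
print(g_study(scores))

A low person-variance component relative to the residual, as reported here for the communication skills score, drives the G coefficient toward zero; a large one, as with the decision-point scores, yields reliability comparable to rater-based scoring.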

Disciplines

Computer Sciences | Education | Educational Technology | Medicine and Health Sciences

Citation

Bruen C, Kreiter C, Wade V, Pawlikowska T. Investigating a self-scoring interview simulation for learning and assessment in the medical consultation. Advances in Medical Education and Practice. 2017;8:353-358.

PubMed ID

28603434

DOI Link

10.2147/AMEP.S128321

Creative Commons License

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License.
