Starting as early as their second week of medical school, young doctors-in-training may encounter their first patients. Luckily for them and for us, these first cases are usually fake, involving hired actors who play patients in mock clinical scenarios. The actors, called simulated patients or standardized patient instructors, give medical students the chance to test their knowledge and delivery in controlled yet realistic learning environments. Standardized patient encounters are perhaps the birthplace of bedside manner. And a new technology, called MPathic-VR, is shaking up this tried-and-true medical education experience.
MPathic-VR is interactive virtual-human software that replaces the standardized patient with a realistic virtual patient who can listen and respond to medical trainees in real time. MPathic-VR uses a computer monitor, microphone, webcam, and Microsoft Kinect to take in information about the trainee's voice, facial expressions, and movement, as well as their verbal responses. The virtual human then assesses this input and responds. MPathic-VR was designed both as an interactive teaching and learning tool and as a technology for learning assessment.
Department of Family Medicine assistant professor Timothy Guetterman, Ph.D., led a validation study of the MPathic-VR tool, testing its ability to grade student learning. The results show promise for its use in medical learning assessment.
Taking a Hybrid Approach
Medical residents participated as subjects in the study. Randomized into two groups, participants took a three-hour course on how to "break bad news" to patients, including disclosing cancer and leukemia diagnoses. In Group A, the MPathic-VR technology was employed as a method of pretesting and posttesting students' abilities. That is, participants in this group were tasked with breaking bad news to a virtual patient both before and after their in-person seminar. Group B had no MPathic-VR pretest but did complete an MPathic-VR learning module with a virtual human after attending the seminar. The mean results from Group A's pretesting and posttesting showed significant knowledge gain. The study also compared posttest results from Groups A and B, which were not significantly different. This comparison supports attributing the learning assessed by MPathic-VR to the in-person seminar rather than to pretesting bias.
This validation study demonstrated MPathic-VR's ability to assess learning when paired with in-person instruction. The study is a critical early step in refining MPathic-VR as an effective tool in medical training.
An Opportunity to Innovate
Where human actors fall short, MPathic-VR thrives. Simulated patients are costly and may become tired or inconsistent over repeated scenes, making their work difficult to standardize and scale. Virtual human patients, on the other hand, can be designed for standardization and scale, and they quickly become cost-effective compared with patient actors. This study is the first to test the reliability of virtual humans in the assessment of medical learning. MPathic-VR will continue to be tested and refined, raising both questions and answers about how we train doctors.
Article citation: Guetterman TC, Kron FW, Campbell TC, Scerbo MW, Zelenski AB, Cleary JF, Fetters MD. Initial construct validity evidence of a virtual human application for competency assessment in breaking bad news to a cancer patient. Advances in Medical Education and Practice. Available online 25 July 2017; 8:505-512. doi: 10.2147/AMEP.S138380
Browse the latest research on clinical informatics and technology.