Year: 2018 | Volume: 1 | Issue: 2 | Page: 44-49
Comparison of a novel card game and conventional case-based studying for learning urologic differential diagnoses in veterinary radiology
Christopher P Ober
Department of Veterinary Clinical Sciences, University of Minnesota College of Veterinary Medicine, St. Paul, MN, USA
Date of Web Publication: 7-Feb-2019
Dr. Christopher P Ober
Department of Veterinary Clinical Sciences, University of Minnesota College of Veterinary Medicine, 1365 Gortner Avenue, St. Paul, MN 55108
Source of Support: None, Conflict of Interest: None
Introduction: Generation of appropriate lists of differential diagnoses for various radiographic findings can be challenging for veterinary students and practicing veterinarians. Methods: In this randomized, controlled, experimental trial, an educational card game was developed to help students learn differential diagnoses associated with different radiographic renal appearances. Third-year veterinary students in an imaging class took a pretest and were then randomly assigned to either play the card game or study conventional radiographic cases to learn differential diagnoses. Participants in both groups then took a posttest and a 1-week follow-up test to assess their learning. Test performance was compared between students who played the game and those who studied radiographic cases. Results: On the immediate posttest, students who played the game scored higher than those who studied by conventional means (8.1 vs. 5.5 out of 10 possible points, P < 0.0001) and students who played the game also scored higher on the follow-up test (13.1 vs. 10.4 out of 20 possible points, P < 0.0001). Conclusion: Educational gameplay may be more beneficial than conventional case study for learning differential diagnoses. However, the relatively narrow focus of the game used in this study will prevent it from replacing conventional learning for all applications (such as lesion identification).
Keywords: Card game, gamification, kidney, radiography game, uroradiology
How to cite this article:
Ober CP. Comparison of a novel card game and conventional case-based studying for learning urologic differential diagnoses in veterinary radiology. Educ Health Prof 2018;1:44-9
How to cite this URL:
Ober CP. Comparison of a novel card game and conventional case-based studying for learning urologic differential diagnoses in veterinary radiology. Educ Health Prof [serial online] 2018 [cited 2019 Feb 20];1:44-9. Available from: http://www.ehpjournal.com/text.asp?2018/1/2/44/251899
Introduction
Successful interpretation of radiographic studies depends on recognition of normal anatomy and its variants, identification and description of potentially pathologic lesions, and development of a prioritized list of differential diagnoses for those lesions. While the number of differentials that should be considered varies with the imaging findings and other case factors, the radiologists at the author's institution have found that veterinary students often fail to consider an appropriate breadth of differential diagnoses for a given lesion, and that this problem is more common than failure to identify a radiographic lesion. This persistent deficiency suggests that the staples of didactic instruction and case-based laboratory sessions may not be sufficient to teach differential list generation.
Other instructional methods, such as educational board and card games, have been utilized in the health sciences, though with limited objective assessment. Several studies in human medicine and veterinary medicine indicate that students demonstrate improved factual recall or ability to apply information from the topics in question following gameplay and that students generally enjoy playing the games. However, to the author's knowledge, there are no studies in the health professions using objective research methodologies to directly compare educational games with relevant and comparable conventional studying in terms of their effects on student learning. This information is important to determine whether gameplay in some circumstances could replace conventional methodologies or whether it is better suited as an adjunct form of study.
The purpose of this study was to determine if an educational card game could be used to improve learning of appropriate uroradiologic differential diagnoses by veterinary students enrolled in a veterinary radiology course, as compared to conventional case-based learning. The hypothesis was that students who played an educational card game related to uroradiologic radiographs would demonstrate higher test scores than those who studied conventional radiographic cases.
Methods
This study is a randomized, controlled, experimental trial. The author previously developed a card game for use in a 3rd-year veterinary imaging course to aid in students' learning of principles of urologic radiology, with a specific emphasis on providing appropriate differential diagnoses for various radiographic appearances of the kidneys. Participating students were randomly divided into two groups: participants in the Game group played the educational card game and those in the Control group studied radiographic cases as was conventional for radiology laboratory sessions. All participants took a pretest before their respective activities, a posttest immediately following their respective activities and a follow-up test 1 week later. Test scores between the two groups were compared to determine if there were differences in performance. The Institutional Review Board of (the University of Minnesota College of Veterinary Medicine) granted the study a Category I Exemption for the study of an instructional strategy in an educational setting (US Federal Guidelines 45 CFR Part 46.101(b) category #1).
Design of the urinary radiology game used in this study has been previously described. In brief, the game includes both radiograph cards, which depict different renal appearances (e.g., “large, irregular kidney”), and disease cards, each representing a renal condition or disease. Point values are indicated on each radiograph card and players try to collect these points by correctly pairing them with disease cards depicting conditions that result in the given appearance. During a round of play, a radiograph card is placed face up on the table, and each player places one of the disease cards in his or her hand face down on the table in response. The round is won by the person who played a card representing a disease that can cause the given radiographic appearance and that shows the highest rank on its face. The winner of the round puts the radiograph card in his or her success pile and play continues to the next radiograph card. Once all radiograph cards have been claimed, the players tally the points on their claimed radiograph cards and the player with the highest score wins.
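The round-resolution rule described above can be summarized in a short sketch. This is illustrative only: the answer-key entries and card names below are hypothetical stand-ins, not the actual deck contents.

```python
# Minimal sketch of resolving one round of the card game described above.
# ANSWER_KEY maps a radiographic appearance to the set of diseases that can
# cause it; the entries shown here are illustrative, not the actual key.
ANSWER_KEY = {
    "large, irregular kidney": {"renal neoplasia", "polycystic kidney disease"},
}

def resolve_round(radiograph_card, plays):
    """plays: list of (player, disease, rank) tuples, one per player.
    The round goes to the highest-ranked *correct* disease card;
    returns the winning player, or None if no correct card was played."""
    valid = [p for p in plays if p[1] in ANSWER_KEY[radiograph_card]]
    if not valid:
        return None
    return max(valid, key=lambda p: p[2])[0]
```

As in the game, an incorrect card cannot win a round regardless of its rank, which is what pushes players to reason about plausible differentials rather than simply playing high cards.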
The goal of the upper urinary radiology game is to encourage students to think about reasonable differential diagnoses for various radiographic appearances of the kidneys, as players can score points only through correct matches. As the game is made to be played by students without a judge or moderator, an answer key is included with the game. When students are first learning the topic, they are encouraged to use the answer key as a study guide when they play their disease cards. However, once the students are more comfortable with the differential diagnoses, they should use the answer key only when verifying the winner of a round.
Students in the Control group participated in conventional study of radiographic cases during the laboratory session. This activity is a self-directed web-based exercise in which students are given a series of clinical cases to work through. Students first select a case from the laboratory home page, and they are taken to a page where they are given relevant case history and can view the radiographs from the case. Students are expected to evaluate the images, identify radiographic lesions, and produce a list of differential diagnoses for the case. Once the student feels that he or she has completed the case, the student can click a link to go to the answers. The student is then given the written findings and conclusions for the case, and there are also annotated images that the student can use to confirm that he or she has appropriately identified the lesions for the case. The student can then return to the home page and select another case.
Study procedures and data collection
All veterinary students enrolled in the 3rd-year Veterinary Imaging II course at (the University of Minnesota College of Veterinary Medicine) in fall 2017 were invited to participate in the study. Participation in the study was voluntary. During this course, each student attended one of two offered laboratory sessions focusing on the upper urinary tract. These laboratory sessions occurred after conventional didactic lecture sessions dedicated to the same topic. The two laboratory sessions were offered on consecutive afternoons.
At the beginning of each laboratory session, the study was introduced and students who chose to participate formed teams of 3–4 students. Using the randomization (rand) function in Microsoft Excel (Microsoft Excel 2016, Microsoft Corporation, Redmond, WA, USA), teams were randomized to either the experimental Game group or the Control group.
After group randomization, each participant received a packet containing a pretest and posttest. For each test question, participants were given a radiographic appearance of kidneys and were expected to identify appropriate differential diagnoses for that appearance [Appendix 1 [Additional file 1]]. Two sets of questions intended to be of equal difficulty were available, and within each packet, opposite sets of questions were used for the pretest and the posttest. Two packet variants (A and B) were used in the laboratory sessions, differing only in that the pretest questions in one variant served as the posttest questions in the other. This design prevents participants from studying for known questions as they play or study cases, and it helps verify that score changes between pretest and posttest are not simply due to differences in question difficulty: if one question set were harder than the other, different score patterns would appear when it was used as a pretest than when it was used as a posttest. During packet distribution, teams with an even number of students received equal numbers of the two packets. Teams with an odd number of participants necessarily received unequal numbers of the A and B packets, so the majority packet for such teams was assigned on an alternating basis. Thus, packets A and B were equally distributed between the Game group and the Control group.
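The counterbalanced packet distribution described above can be expressed as a small sketch. The function below is a hypothetical reconstruction of the stated procedure, not code used in the study.

```python
def assign_packets(team_sizes):
    """Sketch of the packet distribution described above: even-sized teams
    receive equal numbers of packets A and B; odd-sized teams receive their
    extra (majority) packet on an alternating basis across teams."""
    majority_cycle = ["A", "B"]
    toggle = 0
    assignments = []
    for size in team_sizes:
        half = size // 2
        packets = ["A"] * half + ["B"] * half
        if size % 2:  # odd team: append the alternating majority packet
            packets.append(majority_cycle[toggle])
            toggle = 1 - toggle
        assignments.append(packets)
    return assignments
```

Alternating the majority packet keeps the overall counts of A and B packets balanced across the two study groups even when teams of three are common.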
Each participant took a pretest before his or her assigned activity to determine baseline knowledge of urinary radiology. Immediately following the pretest, participants in the Game group played the upper urinary radiology game for 1 h, while members of the Control group studied radiographic cases, including original images, written descriptions, and image annotations (the conventional activity for this laboratory session). After 1 h of their respective activities, each participant took the posttest found in his or her packet. Each test consisted of four multiple-choice questions, each describing an imaging finding and listing five possible differential diagnoses for that finding. Students were instructed to select all valid differential diagnoses for each question but were not informed of how many correct answers there were. The maximum score possible on both the pretest and the posttest was 10 points.
One week after the upper urinary tract laboratory sessions, all the study participants took a follow-up test. All copies of this test were the same and included all of the questions asked in the pretests and posttests in the initial part of the study; due to this, the maximum score possible on the follow-up test was 20 points. Participants were also asked to indicate how much time they had spent studying radiology of the upper urinary tract between the upper urinary tract laboratory session and taking the follow-up test.
Data were analyzed with commercially available statistical software (JMP Pro 13.2.1, 2016, SAS Institute, Cary, NC, USA). Means and standard deviations of participant scores were calculated for the pretest and posttest (both pooled and by packet type) for both groups. Improvement from the pretest to the posttest was calculated for each student, and means and standard deviations of improvement were calculated by packet type for both groups. Overall means and standard deviations of participant scores were calculated for the follow-up test for both groups (there was only one version of the follow-up test). A two-tailed independent t-test was used to compare results of students in the Game group to results of students in the Control group on the pretest and posttest (pooled and by packet type) as well as on the follow-up test (pooled). Overall pretest results were also compared between Packet A and Packet B using a two-tailed independent t-test to determine whether the two question sets were equal in difficulty. A two-tailed independent t-test was also used to compare pre-to-posttest improvement scores between the two groups by packet type and to directly compare pretest and posttest scores by packet type within each group. Follow-up test scores were not grouped by students' time spent studying before the follow-up test, as 97/98 students (99%) indicated that they had not studied upper urinary radiology over that span. Statistical significance for all analyses was set at P < 0.05.
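For readers without access to JMP, the two-tailed independent t-test used throughout can be sketched as below. The study does not state whether a pooled-variance or Welch (unequal-variance) form was used; this sketch assumes the Welch variant, and the input scores are illustrative, not study data.

```python
import math

def welch_t(a, b):
    """Welch's independent two-sample t statistic and approximate degrees
    of freedom (Welch-Satterthwaite), for comparing two groups' scores.
    A two-tailed p-value would then come from the t distribution with df
    degrees of freedom."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2  # squared standard error of the mean difference
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

In practice a statistics package (e.g., `scipy.stats.ttest_ind`) would also return the p-value directly; the sketch above shows only the test statistic that underlies the reported comparisons.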
Results
All 98 students in the class who were present in the upper urinary tract laboratory sessions participated in the study. Of those students, 51 were randomized to the Game group and 47 were randomized to the Control group. When asked how much time they had spent studying the upper urinary tract radiography between the laboratory session and the follow-up test, 97/98 (99%) participants responded that they had not studied at all and 1 student (1%) indicated a study time of <30 min.
When evaluating pooled pretest scores, there were no differences in mean (± standard deviation [SD]) scores between the two groups (Game group: 4.7 ± 0.3, Control group: 5.2 ± 0.3, P = 0.13). There were also no differences in pretest scores among those students with Packet B (Game group: 5.0 ± 2.0, Control group: 4.9 ± 1.3, P = 0.85). However, with Packet A, students in the Control group scored higher (5.6 ± 1.9) than students in the Game group (4.4 ± 1.8, P = 0.03). The overall comparison of mean (± SD) Packet A pretest and Packet B pretest scores demonstrated no difference (A: 5.0 ± 2.0, B: 4.9 ± 1.7, P = 0.91), indicating no difference in difficulty between the two question sets.
Mean posttest scores for the Game group were significantly higher than those for the Control group for both Packet A (8.1 ± 0.4 vs. 6.0 ± 0.4, P = 0.0002) and Packet B (8.1 ± 1.4 vs. 5.0 ± 2.8, P < 0.0001). The pooled mean posttest score for the Game group was 8.1 ± 1.7 versus 5.5 ± 2.3 for the Control group (P < 0.0001). For students in the Game group, mean posttest scores were significantly higher than pretest scores for both packet versions (both P < 0.0001). However, for participants in the Control group, there were no significant differences between pretest and posttest scores for either packet version (P = 0.47 for Packet A; P = 0.79 for Packet B).
When comparing individual students' changes in scores between the pretest and the posttest, students playing the game demonstrated significantly greater improvement in scores than those evaluating cases. For Packet A, the mean change in the Game group was +3.8 ± 2.3 points, compared to +0.4 ± 2.0 points for the Control group (P < 0.0001). Likewise, for Packet B, the mean change in the Game group was +3.1 ± 2.5 points, compared to +0.2 ± 2.8 points for the Control group (P = 0.0004). Similarly, when the data for the two packets were pooled, the mean change for the Game group was +3.4 ± 2.4 points versus +0.3 ± 2.4 points for the Control group (P < 0.0001).
The mean follow-up test score for students in the Game group (13.1 points) was significantly higher than that of the Control group (10.4 points; P < 0.0001).
Discussion
The results indicate that using a card game as a means of studying radiology is not only useful in improving students' understanding of differential diagnosis list generation but also may be more beneficial than studying actual radiology cases for that purpose. Students who played the upper urinary tract game for 1 h showed a significant improvement in test scores when their knowledge of relevant differential diagnoses was evaluated, but students who studied urinary radiology cases did not demonstrate a change in their scores. Test scores from students who played the upper urinary tract game were significantly higher than those of students utilizing conventional studying at both the immediate and 1-week time points. Improved immediate posttest scores relative to pretest scores in the gameplay group are similar to findings in multiple previous studies. Improvement in gameplay groups relative to control groups studying through conventional means has been described in studies in the medical literature evaluating computer games, but no directly applicable comparisons of board games or card games to conventional study are available. Several studies evaluating board games or computer games compare a gaming group to a control group, but in these studies, both groups receive the standard education and training, and the gameplay opportunity for the study group is an additional educational opportunity that is not balanced by a specific educational opportunity for the control group.
It is likely that one reason for the differences in performance between the two groups was the ability of the participants in the Game group to focus on learning differential diagnoses, as this was the sole purpose of the upper urinary tract game. Students in the Control group were studying cases in their entirety, including lesion identification, lesion description, and differential diagnosis generation. Because the tests evaluated students' ability to produce differential diagnoses, the students who played the game were likely at an advantage relative to the Control group. Nonetheless, it is still notable that when comparisons between groups are removed from the equation, students who played the game demonstrated significant improvement in scores, while students in the Control group had no change in scores between the pretest and the posttest. Students who studied cases for 1 h were no more able to identify reasonable differential diagnoses for renal radiographic findings than they were before studying. Thus, although the card game cannot be used as a substitute for case evaluation, as it does not include components of lesion identification, the game seems to be considerably more effective in its stated purpose of helping learn differential diagnoses than conventional case review is.
In addition to providing a concentrated focus on the study of differential diagnoses, the urinary tract game may have been more engaging than typical case study. Key features of an effective learning environment include significant interaction, specific goals, and direct involvement in the task, all of which are features of card and board games. While student interaction is certainly greater in group-based case study than in didactic lecture, interaction in gameplay is greater still. In addition, the game's goal of scoring points through learning differential diagnoses is more concrete and specific than the case-based goals of identifying lesions, describing lesions, and learning differential diagnoses.
Student learning could have been further bolstered by the active nature of gameplay. Game-playing participants had to actively consider the cards in their hands to determine potentially correct differential diagnoses and find an optimal play. Several studies have demonstrated that competition itself can improve students' learning and performance in courses. This active consideration of the game state has been suggested as a factor in making effective games for use in education. This likely contrasts with students working through the conventional imaging cases. While the cases were structured such that students were expected to provide their own assessments before clicking through to the answer key and annotated versions of the radiographs, nothing prevented students from quickly clicking to the answers. Although active learning before reading the answers would likely serve students better in the long run, as various methods of active learning have been shown to improve information gain relative to passive learning, there were no immediate consequences to a more passive approach of quickly moving forward to the answers. Thus, students may have been less actively engaged with the clinical cases than with the card game.
As noted above, the upper urinary tract card game cannot be used as a replacement for conventional case study in learning radiology. Synthesis of imaging cases involves assessment of normal anatomy, recognition and description of lesions, production of a ranked list of differential diagnoses for those lesions, and development of a further diagnostic or therapeutic plan. The urinary tract game focuses on only one of those facets, differential diagnosis generation, so students must still be provided with a means of incorporating the other components of image evaluation into their studies. However, given the positive results identified by this study with the focused approach to learning differential diagnoses, perhaps a focused approach to the other components of image interpretation could also be considered for initial learning. The holistic approach to case interpretation may be overwhelming to some students, and it is possible that generalized case interpretation should be saved until students are comfortable with each of the individual components of image assessment.
Students in the Control group with Packet A scored higher on the pretest than those in the Game group with this packet, but the reason for this difference is unclear. It is possible that this actually represents a Type I error, the rejection of a true null hypothesis (i.e., a false-positive finding). It is also conceivable that the subset of students in the Game group who completed the Packet A pretest had a lower baseline knowledge level than the equivalent Control group cohort simply through random chance. A systematic bias is unlikely, as all students had equal access to the relevant lecture materials, team assignment to the Game and Control groups was randomized, each team had a mixture of Packet A and Packet B (limiting biases in packet distribution), and no differences in scores were seen between groups on the Packet B pretest. In any case, there was no evidence of long-term effects of this disparity, as mean Game group scores were higher than mean Control group scores on the posttest, regardless of packet, and on the follow-up test.
The results of this study open up several possibilities for further research. Studies with a longer-term follow-up would be useful to see if differences in knowledge of differential diagnoses between the two groups were maintained. Given the potential advantages of initial learning of radiology topics through more focused evaluation of images and the specific components of image interpretation, it would also be interesting to develop games focused on lesion identification and description. Efficacy of a multistep focused learning approach could then be tested relative to the current holistic approach to learning of radiographic interpretation.
Conclusion
The findings in this study demonstrate that card games can be more useful than conventional image evaluation for students to learn focused topic areas, such as urinary differential diagnosis generation. Games cannot be used to replace all of the parts of radiologic interpretation, but they can have utility as a supplemental learning aid for students.
Financial support and sponsorship
Partial funding support for this study was provided by the University of Minnesota College of Veterinary Medicine Educational Development/Curriculum Implementation Grant.
Conflicts of interest
There are no conflicts of interest.
References
Bochennek K, Wittekindt B, Zimmermann SY, Klingebiel T. More than mere games: A review of card and board games for medical education. Med Teach 2007;29:941-8.
Akl EA, Pretorius RW, Sackett K, Erdley WS, Bhoopathi PS, Alfarah Z, et al. The effect of educational games on medical students' learning outcomes: A systematic review: BEME guide no. 14. Med Teach 2010;32:16-27.
Akl EA, Sackett KM, Erdley WS, Mustafa RA, Fiander M, Gabriel C, et al. Educational games for health professionals. Cochrane Database Syst Rev 2013:CD006411. doi: 10.1002/14651858.CD006411.pub3.
Steinman RA, Blastos MT. A trading-card game teaching about host defence. Med Educ 2002;36:1201-8.
Michel da Rosa AC, Osowski LF, Tocchetto AG, Eduardo Niederauer C, Benvenuto Andrade CM, Scroferneker ML, et al. An alternative teaching method for the regulation of the immune response. Med Educ Online 2003;8:4335.
Da Rosa AC, Moreno Fde L, Mezzomo KM, Scroferneker ML. Viral hepatitis: An alternative teaching method. Educ Health (Abingdon) 2006;19:14-21.
Girardi FM, Nieto FB, Vitória LP, de Borba Vieira PR, Guimaráes JB, Salvador S, et al. T-and B-cell ontogeny: An alternative teaching method: T-and B-cell ontogeny game. Teach Learn Med 2006;18:251-60.
Ober CP. Novel card games for learning radiographic image quality and urologic imaging in veterinary medicine. J Vet Med Educ 2016; 43:263-70.
Ober CP. Use of a novel board game in a clinical rotation for learning thoracic differential diagnoses in veterinary medical imaging. Vet Radiol Ultrasound 2017;58:127-32.
Ober CP. Examination outcomes following use of card games for learning radiographic image quality in veterinary medicine. J Vet Med Educ 2018;45:140-4.
Diehl LA, Gordan PA, Esteves RZ, Coelho IC. Effectiveness of a serious game for medical education on insulin therapy: A pilot study. Arch Endocrinol Metab 2015;59:470-3.
Anyanwu EG. Anatomy adventure: A board game for enhancing understanding of anatomy. Anat Sci Educ 2014;7:153-60.
Lagro J, van de Pol MH, Laan A, Huijbregts-Verheyden FJ, Fluit LC, Olde Rikkert MG, et al. A randomized controlled trial on teaching geriatric medical decision making and cost consciousness with the serious game GeriatriX. J Am Med Dir Assoc 2014;15:957.e1-6.
de Bie MH, Lipman LJ. The use of digital games and simulators in veterinary education: An overview with examples. J Vet Med Educ 2012;39:13-20.
Lei JH, Guo YJ, Chen Z, Qiu YY, Gong GZ, He Y, et al. Problem/case-based learning with competition introduced in severe infection education: An exploratory study. Springerplus 2016;5:1821.
Corell A, Regueras LM, Verdú E, Verdú MJ, de Castro JP. Effects of competitive learning tools on medical students: A case study. PLoS One 2018;13:e0194096.
Luc JG, Antonoff MB. Active learning in medical education: Application to the training of surgeons. J Med Educ Curric Dev 2016;3. pii: JMECD.S18929.