ORIGINAL RESEARCH
Year : 2020  |  Volume : 3  |  Issue : 1  |  Page : 1-7

Efficient clinical reasoning: Knowing when to start and when to stop


Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada

Date of Submission: 29-Jan-2020
Date of Acceptance: 03-Feb-2020
Date of Web Publication: 13-Mar-2020

Correspondence Address:
Mr. Shan Li
B148, 3700 McTavish Street, Education Building, McGill University, Montreal, QC H3A 1Y2
Canada

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/EHP.EHP_1_20

Abstract


Purpose: While clinical reasoning in medicine traditionally values the ultimate goal of providing an accurate diagnosis of disease, insufficient emphasis has been placed on how students' decisions may affect their diagnostic efficiency. This study adds new empirical evidence about what makes students efficient problem-solvers in clinical reasoning. Methods: Seventy-five medical students participated in this study in 2015. The authors compared the differences in clinical reasoning behaviors between high- and low-performing students before they proposed any diagnostic hypothesis. The authors used the Cox proportional-hazards model to explore how certain characteristics of students and essential features of the reasoning process affect the life span of incorrect hypotheses. Results: High-performing students were more prepared than low performers to propose their first hypothesis. The more laboratory tests and hypotheses the medical students had, the longer it took for them to confirm a correct diagnosis. Male students tended to finalize the correct diagnosis earlier than females. There were no differences between the easy and the difficult cases in these patterns. Conclusion: This study helps shift the emphasis away from a solitary focus on accuracy to one that also considers the importance of diagnostic efficiency. The authors discuss two main take-home messages for physicians.

Keywords: Clinical reasoning, diagnostic efficiency, student characteristics, survival analysis, technology-rich environment


How to cite this article:
Li S, Zheng J, Lajoie SP. Efficient clinical reasoning: Knowing when to start and when to stop. Educ Health Prof 2020;3:1-7

How to cite this URL:
Li S, Zheng J, Lajoie SP. Efficient clinical reasoning: Knowing when to start and when to stop. Educ Health Prof [serial online] 2020 [cited 2020 Apr 9];3:1-7. Available from: http://www.ehpjournal.com/text.asp?2020/3/1/1/280536




Introduction


Physicians are well-informed about the clinical reasoning process for determining patients' problems: they collect a detailed history of patients' symptoms and life experience, perform appropriate physical examinations, formulate a set of hypotheses, order relevant medical tests, and endeavor to confirm or disconfirm hypotheses by linking each piece of evidence to each hypothesis.[1] However, in addition to making an accurate diagnosis, clinicians must decide how to make clinical decisions efficiently, determining when, what, and how many pieces of information to collect to confirm or disconfirm a diagnosis. A novice clinician may well order as many medical tests as are available for diagnosing a patient, whereas an expert knows when to start developing a diagnostic hypothesis and when to stop proposing new hypotheses or ordering tests. The question, then, is: do the numbers of diagnostic hypotheses or medical tests influence the length of time needed to make an accurate diagnosis? Do more medical tests lead to quicker and better decisions, or vice versa? Furthermore, there is little information in the current clinical reasoning literature that links student characteristics (e.g., gender) and diagnostic efficiency (the ability to diagnose patients without wasting materials, time, and energy).[2] This study therefore aims to add new evidence addressing these concerns. Findings from this study could enhance our understanding of how reasoning strategies affect diagnostic efficiency and inform practitioners about the different components of medical expertise.

Theoretical background

Clinical reasoning is a key component of physician competence. There are two main forms of clinical reasoning: analytic reasoning (i.e., conscious, controlled, based on careful analysis of patient data) and nonanalytic reasoning (i.e., unconscious, automatic, based on past experiences).[3],[4] Models of analytic reasoning assume that clinicians are aware of the probabilistic relationships between patient features and diagnoses: a clinician generates a differential list of relevant diagnoses for hypothesis testing and assesses the likelihood of each. The fundamental belief of these analytic models is that causal rules, which link the signs and symptoms of patients to respective diagnoses, can be extracted from daily practice. The development of clinical reasoning expertise involves the refinement of such causal rules and an appropriate judgment of their relative probabilities.[3] In nonanalytic reasoning, clinicians draw on past experience to make judgments; their reasoning is largely automatic, and although it may appear that they do not reason at all, they are in fact accessing prior knowledge seamlessly. Considering that the participants in this study were medical students with little first-hand experience with patients, the expectation is that they would rely exclusively on the analytic approach to clinical reasoning.

While the analytic approach to clinical reasoning (e.g., collecting patient information, ordering laboratory tests, developing hypotheses, and confirming/disconfirming hypotheses) sounds straightforward, the work is much more complex at the microlevel, where one faces a constant stream of decisions as one becomes more familiar with the patient. For instance, one crucial decision that students make during clinical reasoning is whether to purposefully test specific hypotheses or to verify hypotheses exhaustively to make sure nothing is missed.[5] Students often face a dilemma between ordering more tests to eliminate the sense of uncertainty and quitting the process to make a final decision. While clinical reasoning in medicine traditionally values the ultimate goal of providing an accurate diagnosis of disease,[6],[7] insufficient emphasis has been placed on how students' decision choices affect their diagnostic efficiency. As observed by Elstein and Schwartz,[8] who selectively reviewed 30 years of research on clinical reasoning, limited research has explored efficiency issues in diagnostic decision-making. Compared with accuracy, efficiency in clinical reasoning is an underexplored area of research.

This study adds new empirical evidence about what makes students efficient problem-solvers in clinical reasoning. Specifically, we addressed two research questions: (1) are high-performing students (who solved the patient case) more prepared than low-performing students (who did not solve the case) to propose their first diagnostic hypothesis? By prepared, we mean spending more time on essential tasks, collecting more evidence, and conducting more laboratory tests. Considering that medical students use an analytic approach to clinical reasoning, we hypothesize that high-performing students would be more prepared than low performers, as indicated by a longer duration on the case, more evidence selections, or more medical tests before formulating their first hypothesis; and (2) how do certain characteristics of students (e.g., gender) and essential features of the reasoning process (e.g., the amounts of evidence collected, medical tests, and hypotheses) affect the persistence of incorrect hypotheses? Based on Groves et al.'s research,[9] we hypothesize that student characteristics, such as gender, lead to differences in the time needed to rule out incorrect hypotheses. Furthermore, from an efficiency standpoint, we anticipate that the more information medical students collect, the longer it will take them to differentiate the useful from the useless pieces of information. Accordingly, we hypothesize that the numbers of clinical reasoning activities such as collecting evidence, performing tests, and proposing hypotheses are positively associated with the persistence of incorrect hypotheses.

It is noteworthy that intelligent tutoring systems (ITSs) in medical sciences afford researchers a crucial means to model the clinical reasoning process and to pinpoint the start and end of each diagnostic procedure.[10] Thus, we approached these two questions by leveraging the affordances of an ITS.


Methods


Participants

A total of 75 medical students from a large North American university volunteered to participate in this study. Ethics approval was obtained for this research, and we obtained the students' consent to participate in the study; therefore, students were fully aware of the purpose and the procedures of this study. Participants comprised 28 males (37.3%) and 47 females (62.7%), with an average age of 24.0 (standard deviation = 3.17). Excluding 29 participants who did not report their ethnicity, there were 19, 9, 11, and 7 participants who viewed themselves as Caucasian, Arab, Asian, and mixed-race, respectively. Second-year students accounted for the largest proportion of the sample (61.3%, n = 46), with 22, 3, and 4 students being in their 1st, 3rd, and 4th years of medical school, respectively.

Learning context and task

BioWorld, as shown in [Figure 1], is an ITS that helps medical students deliberately practice their clinical reasoning skills in a technology-rich environment.[11] In BioWorld, students are presented with virtual patient cases and their task is to diagnose the patient case by collecting appropriate evidence. In particular, students begin the task by first reading the description of a patient's history and symptoms. Students can also order laboratory tests to obtain more information about the patient. They can formulate one or more diagnostic hypotheses. To confirm or disconfirm their hypotheses, students can return to the patient description and order more laboratory tests. Students organize the evidence they collect and link the evidence to specific hypotheses. After submitting a final hypothesis, students evaluate their solutions and write a summary of their diagnostic processes.
Figure 1: The interface of BioWorld



In this study, the participants were required to diagnose two patient cases, i.e., Amy (an easy case) and Cynthia (a difficult case). The correct diagnoses for the Amy and Cynthia cases were diabetes mellitus (type I) and pheochromocytoma, respectively.

Procedure

A training session was provided to help participants familiarize themselves with the BioWorld system before the experiment. The two patient cases were randomly assigned to the participants to counterbalance the effect of case order on students' performance. Students were asked to diagnose the two cases independently during which their operations were captured into BioWorld log files automatically. Specifically, the log files recorded the timestamp and duration of each behavior conducted by students. It took approximately 1.5 h to complete the entire experiment.
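As a rough illustration of how such log files can be mined, the sketch below computes two measures used later in the analysis (time to first hypothesis, and tests ordered before it) from a hypothetical action log. The action names and the (timestamp, action) layout are invented for illustration and are not BioWorld's actual log schema.

```python
# Hypothetical BioWorld-style action log: (timestamp in seconds, action).
# The schema is assumed for illustration only.
log = [
    (0.0,   "read_case_description"),
    (95.5,  "collect_evidence"),
    (140.2, "order_lab_test"),
    (210.8, "order_lab_test"),
    (263.0, "propose_hypothesis"),   # first hypothesis appears here
    (402.7, "order_lab_test"),
]

def first_hypothesis_timestamp(entries):
    """Timestamp of the first 'propose_hypothesis' action, or None."""
    for timestamp, action in entries:
        if action == "propose_hypothesis":
            return timestamp
    return None

hyp_ts = first_hypothesis_timestamp(log)
time_to_first = hyp_ts - log[0][0]               # elapsed since first action
tests_before = sum(1 for t, a in log
                   if a == "order_lab_test" and t < hyp_ts)
print(time_to_first, tests_before)  # → 263.0 2
```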

Data processing and analysis

To address the first research question, we calculated the number of evidence items collected by students from the case description, the number of laboratory tests, and the amount of time needed to propose their first hypothesis. We examined how students who solved the case differed from those who did not on these variables using t-tests. To address the second research question, we first calculated, for each participant, the total numbers of evidence items, laboratory tests, hypotheses, and linking activities (i.e., linking collected evidence and the results of laboratory tests to respective hypotheses) across the whole clinical reasoning process. We then used survival analysis, the Cox proportional-hazards model in particular, to explore how students' personal and behavioral factors affect the time to the occurrence of the event of interest, i.e., students ruling out all incorrect hypotheses.[12]
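As a minimal sketch of the group comparison described above (using synthetic numbers, not the study data), Welch's t-test can be run as follows; `equal_var=False` matches the fractional degrees of freedom reported in the Results (e.g., t(64.42)).

```python
# Sketch of the Methods' group comparison with synthetic data (not the
# study's): laboratory tests ordered before the first hypothesis, for
# students who solved the case vs. those who did not.
from scipy import stats

solved = [5, 4, 6, 3, 7, 4, 5, 6]   # hypothetical counts, "correct" group
unsolved = [1, 0, 2, 1, 0, 1]       # hypothetical counts, "incorrect" group

# equal_var=False requests Welch's t-test, which does not assume equal
# group variances and yields fractional degrees of freedom.
t_stat, p_value = stats.ttest_ind(solved, unsolved, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```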


Results


Are high-performing students (who solved the patient case) more prepared than low-performing students (who did not solve the case) to propose their first diagnostic hypothesis?

With regard to the easy patient case of Amy, 65 students reported a correct diagnosis, while the other 10 students provided an incorrect diagnosis. There was a significant between-group difference in the number of tests conducted before formulating the first hypothesis, t(64.42) = 3.20, P = 0.002: students in the correct group (mean [M] = 4.49) performed more laboratory tests than those in the incorrect group (M = 0.60). However, there were no significant differences between the two groups in the number of evidence items or the duration [Table 1].
Table 1: Differences in behavioral characteristics between correct and incorrect groups



In terms of the difficult case of Cynthia, 42 and 33 medical students gave a correct and an incorrect diagnosis, respectively. As shown in [Table 1], a significant between-group difference was found in the amount of time students spent developing their first hypothesis, t(67.17) = 2.05, P = 0.044: students who solved the case spent more time (M = 451.21) than those who did not (M = 263.06). No significant between-group difference was found in the number of laboratory tests ordered, although students in the correct group ordered more tests (M = 5.95) than those in the incorrect group (M = 2.67) before advancing their first hypothesis. Furthermore, no significant between-group differences were found in the number of evidence selections from the case description.

How do certain characteristics of students and essential features of the reasoning process affect the persistence of incorrect hypotheses?

As discussed above, we applied the Cox proportional-hazards model to explore the relationship between the survival distribution of incorrect hypotheses and the covariates (i.e., gender, year of school, and the numbers of evidence items, tests, hypotheses, and linking activities). The first two variables are student characteristics; the last four are essential features of the reasoning process. A key assumption of proportional hazards (PH) must be met, otherwise results may suffer from wrong inference, severe bias, or lower test power.[13] In addition, there should be no influential observations when fitting a Cox model. To check the PH assumption, we used Schoenfeld residuals to test the independence between residuals and time.[14] As shown in [Table 2], the Schoenfeld residual test was not statistically significant for any covariate in either the Amy or the Cynthia case; therefore, the PH assumption was not violated. We then used deviance residuals to examine whether influential observations existed in the dataset. As shown in [Figure 2], the residuals were symmetrically distributed about zero with a standard deviation of one, indicating that no observation was extremely influential.
Table 2: Schoenfeld residuals for checking the proportional hazards assumption in each case

Figure 2: Visualization of the deviance residuals for checking outliers. Note: The left panel shows the Amy case and the right panel the Cynthia case



For the Amy case, there were 65 observations of the event (i.e., abandoning all incorrect hypotheses in clinical reasoning) out of 75 medical students. The concordance index of the Cox model is 0.776, which suggests that the overall predictive ability of the model is good, with 1 representing perfect prediction accuracy.[15] In addition, the P values for three alternative tests of the overall significance of the model (the likelihood-ratio test, Wald test, and score log-rank statistic) are 2e-08, 3e-06, and 4e-07, respectively, also suggesting a good model fit. As shown in [Table 3], the regression coefficient for male students is positive, meaning that male students tended to abandon incorrect hypotheses earlier than female students, although the effect was not significant, P = 0.644 [Figure 3]. The P value for the covariate of tests was significant, with a hazard ratio (HR) = 0.923 < 1, 95% confidence interval (CI) (0.890, 0.957), indicating that the number of tests was negatively associated with the event probability and thus positively associated with the length of survival of incorrect hypotheses. The P value for the covariate of hypotheses was also significant, with HR = 0.851, 95% CI (0.758, 0.956), indicating that the number of hypotheses likewise had a strong positive relationship with the length of survival of incorrect hypotheses.
Table 3: The Cox proportional-hazards model of the Amy case

Figure 3: The survival curve of incorrect hypotheses in the Amy case

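For readers less familiar with Cox models, the following sketch shows how the reported hazard ratios relate to the regression coefficients: HR = exp(β), with a Wald 95% CI given by exp(β ± 1.96·SE). It uses the published figures for the tests covariate in the Amy case; the implied standard error is reconstructed from the CI bounds for illustration and is not a value reported by the authors.

```python
# How the reported hazard ratios map onto Cox coefficients: HR = exp(beta).
# Numbers are the published values for the "tests" covariate in the Amy
# case (HR = 0.923, 95% CI 0.890-0.957); the SE is reconstructed from
# the CI for illustration only.
import math

hr = 0.923
beta = math.log(hr)      # coefficient on the log-hazard scale
assert beta < 0          # HR < 1 is equivalent to a negative coefficient

# A Wald 95% CI is exp(beta +/- 1.96 * SE), so the SE implied by the
# reported bounds is:
ci_low, ci_high = 0.890, 0.957
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# HR < 1: each additional test lowers the instantaneous hazard of the
# event (abandoning all incorrect hypotheses), i.e., incorrect
# hypotheses survive longer.
print(f"beta = {beta:.4f}, implied SE = {se:.4f}")
```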


With respect to the Cynthia case, 42 out of 75 students had the event of interest (i.e., abandoning all incorrect hypotheses in clinical reasoning). The Cox model also fit the observations well, with a concordance index of 0.784 and P values for the likelihood-ratio test, Wald test, and score (log-rank) test of 2e-06, 6e-05, and 1e-05, respectively. As in the Amy case, the results in [Table 4] and [Figure 4] showed that male students tended to disconfirm all incorrect hypotheses earlier than females, although the effect was not significant. However, year of school had a significantly negative relationship with the length of survival of incorrect hypotheses, with HR = 2.225 > 1, 95% CI (1.368, 3.619): the higher the school year, the shorter the life span of incorrect hypotheses. Moreover, the number of tests was significantly positively associated with the length of survival, with P = 0.001, HR = 0.941, 95% CI (0.907, 0.976), as was the number of hypotheses, with P = 0.030, HR = 0.831, 95% CI (0.703, 0.982).
Table 4: The Cox proportional-hazards model of the Cynthia case

Figure 4: The survival curve of incorrect hypotheses in the Cynthia case

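The concordance indices reported for the two models (0.776 and 0.784) can be understood with a small sketch: for every usable pair of subjects, the model scores a hit when the subject with the shorter observed event time was assigned the higher predicted risk. The implementation below is a simplified Harrell's C (ties in event time ignored), run on invented data, not the study's.

```python
# Simplified Harrell's concordance index (ties in event time ignored).
# A pair (i, j) is usable when i's observed time is shorter and i's event
# actually occurred; it is concordant when i got the higher risk score.
def concordance_index(times, events, risk_scores):
    concordant, permissible = 0.0, 0
    for i in range(len(times)):
        for j in range(len(times)):
            if times[i] < times[j] and events[i] == 1:
                permissible += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5   # tied scores count half
    return concordant / permissible

# Five hypothetical students: time (s) until all incorrect hypotheses
# were abandoned, event indicator (1 = abandoned all, 0 = censored),
# and a model risk score.
times = [120, 200, 250, 300, 400]
events = [1, 1, 0, 1, 1]
risk = [2.1, 1.8, 0.5, 0.2, 0.3]
print(round(concordance_index(times, events, risk), 3))  # → 0.875
```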



Discussion


The results revealed that, in the easy patient case (Amy), students who solved the case ordered more laboratory tests before developing their first hypothesis than those who did not. In the difficult case (Cynthia), those who solved the case also ordered more laboratory tests before their first hypothesis, although the difference was not significant; in addition, they spent significantly more time than the other students before formulating their first hypothesis. As hypothesized, these findings indicate that high-performing students (who solved the patient case) were more prepared than low-performing students (who did not solve the case) to propose their first diagnostic hypothesis, in both an easy and a difficult patient case. These findings are in line with the research of Audétat et al.,[5] who argued that one main difficulty at the early stage of clinical reasoning is that students cannot identify the nature of the presenting problems. As a result, students with this difficulty rely exclusively on their first impression to make judgments and hasten to generate hypotheses. This idea is not new to clinical teachers. What is relatively new, however, is the recognition that diagnostic behaviors occurring even before the formulation of students' first hypothesis are powerful indicators of their later performance.

Counter to our hypothesis and expectations, this study revealed no significant difference in the time needed to rule out all incorrect hypotheses between male and female students, regardless of the complexity of the medical cases. However, the descriptive analyses showed that males tended to finalize the correct diagnosis earlier than females. We highlight this because the literature has found that female students are more competent than males in solving clinical reasoning problems.[9] Although females have demonstrated higher ability in clinical reasoning, one should not assume that they are also more efficient in making clinical decisions; our results suggest the opposite might be true, and this article calls for more refined studies to shed light on the issue.

It is noteworthy that senior students were more likely to abandon incorrect diagnostic hypotheses earlier than junior students, especially in solving the difficult case. This finding is reasonable, since senior students may have more knowledge or skills than juniors. Given the limited number of senior students in our sample, however, we cannot overgeneralize; future research should include a larger and more balanced sample of students.

In addition, the more laboratory tests the medical students conducted, the longer it took them to confirm a correct diagnosis; this pattern held for both the easy Amy case and the difficult Cynthia case. Moreover, the more hypotheses students proposed, the longer it took them to abandon incorrect hypotheses, again regardless of case difficulty. These findings have important practical implications. To be efficient in clinical reasoning, trainees or medical students should exercise self-restraint in generating and testing hypotheses. They should be aware of what they need rather than pursuing an exhaustive list of hypotheses or an endless search for analytically perfect tests.[2] Thoroughness of data collection did not contribute to the accuracy of data interpretation[16] and at times impeded students from reaching their solutions in a timely manner. The balance between speed and accuracy will need to be revisited in future studies.[17]


Conclusion


This study explored the efficiency aspect of clinical reasoning in a technology-rich environment in which medical students diagnosed two patient cases. Two main take-home messages emerge for practitioners. One is that knowing when to start developing a hypothesis is important: collecting a comprehensive body of evidence at the early stages of clinical reasoning, rather than making a quick decision based on intuition, leads students to a correct diagnosis. Knowing when to stop the diagnostic process is just as important and requires clinicians to evaluate the totality of available evidence and to reconcile the need for success with their sense of insecurity. For the safety of patients, clinical teachers should deliberately teach students how to make a correct diagnosis; for the sake of timely treatment, they should also train students to decide when to stop collecting additional, unneeded evidence. This study is not without limitations. A larger and different cohort of medical students is needed to confirm the generalizability of our findings, and the research was situated in a simulated environment that differs from the real workplace. Nevertheless, this study raises fundamental questions for future research, such as how different profiles of students balance the accuracy and the efficiency of clinical reasoning activities.

Financial support and sponsorship

This research was funded by the Fonds de recherche du Québec – Société et culture (FRQSC) and the Social Sciences and Humanities Research Council of Canada (SSHRC).

Conflicts of interest

There are no conflicts of interest.



 
References

1. Eva KW. What every teacher needs to know about clinical reasoning. Med Educ 2005;39:98-106.
2. Lippi G, Mattiuzzi C. The biomarker paradigm: Between diagnostic efficiency and clinical efficacy. Pol Arch Med Wewn 2015;125:282-8.
3. Norman G. Research in clinical reasoning: Past history and current trends. Med Educ 2005;39:418-27.
4. Monteiro SM, Norman G. Diagnostic reasoning: Where we've been, where we're going. Teach Learn Med 2013;25 Suppl 1:S26-32.
5. Audétat MC, Laurin S, Dory V, Charlin B, Nendaz MR. Diagnosis and management of clinical reasoning difficulties: Part II. Clinical reasoning difficulties: Management and remediation strategies. Med Teach 2017;39:797-801.
6. Elstein AS. Clinical reasoning in medicine. In: Higgs J, Jones M, Watson MJ, editors. Clinical Reasoning in the Health Professions. 1st ed. Oxford: Butterworth Heinemann; 1995. p. 95-106.
7. Wass V, van der Vleuten C, Shatzer J, Jones R. Assessment of clinical competence. Lancet 2001;357:945-9.
8. Elstein AS, Schwartz A. Clinical problem solving and diagnostic decision making: Selective review of the cognitive literature. BMJ 2002;324:729-32.
9. Groves M, O'Rourke P, Alexander H. The association between student characteristics and the development of clinical reasoning in a graduate-entry, PBL medical programme. Med Teach 2003;25:626-31.
10. Boscardin C, Fergus KB, Hellevig B, Hauer KE. Twelve tips to promote successful development of a learner performance dashboard within a medical education program. Med Teach 2018;40:855-61.
11. Lajoie SP. Developing professional expertise with a cognitive apprenticeship model: Examples from avionics and medicine. In: Ericsson KA, editor. Development of Professional Expertise: Toward Measurement of Expert Performance and Design of Optimal Learning Environments. New York: Cambridge University Press; 2009. p. 61-83.
12. Fox J, Weisberg S. Cox proportional-hazards regression for survival data. In: An R Companion to Applied Regression. 3rd ed. Newbury Park, CA: Sage Publications; 2011.
13. Abrahamowicz M, Mackenzie T, Esdaile JM. Time-dependent hazard ratio: Modeling and hypothesis testing with application in lupus nephritis. J Am Stat Assoc 1996;91:1432-9.
14. Schoenfeld D. Partial residuals for the proportional hazards regression model. Biometrika 1982;69:239-41.
15. Raykar VC, Steck H, Krishnapuram B, Dehing-Oberije C, Lambin P. On ranking in survival analysis: Bounds on the concordance index. In: Advances in Neural Information Processing Systems 20; 2007. p. 1209-16.
16. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1978.
17. Lajoie S, Shore BM. Intelligence: The speed and accuracy tradeoff in high aptitude individuals. J Educ Gift 1986;2:85-104.

