
Medical Decision Making with Clinical Tests PowerPoint Presentation

This presentation was uploaded by gaetan in the Science & Technology category and is available for free download.

About This Presentation

Medical Decision Making with Clinical Tests Presentation Transcript

Slide 1 - Medical Decision Making with Clinical Tests. An Introduction to Bayesian Logic. Gaetan Lion, 8/08/2021. Decision tree: High Disease Prevalence Rate → Positive Test: move on to treatment; Negative Test: may have to run a 2nd test. Low Disease Prevalence Rate → Positive Test: may have to run a 2nd test; Negative Test: patient is probably fine.
Slide 2 - Introduction. Clinical tests are very often misinterpreted. The Sensitivity of a test, also called the "true positive rate," is often confused with the accuracy of the test outcome: a test with a Sensitivity of 60% does not mean that a positive test result implies a 60% probability that you have the disease. Likewise, the Specificity of a test, also called the "true negative rate," is often confused with the accuracy of the test outcome: a test with a Specificity of 55% does not mean that a negative test result implies a 55% probability that you do not have the disease. The Disease Prevalence Rate, or Pre Test Probability, is a huge driver that can cause the resulting accuracy of the tests to diverge dramatically from what the Sensitivity or Specificity of a test suggests.
Slide 3 - Background Material
Slide 4 - [image-only slide]
Slide 5 - [image-only slide]
Slide 6 - [image-only slide]
Slide 7 - Bayes' Theorem [formula shown on slide]
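The formula itself did not survive the transcript; the standard statement of Bayes' theorem, written for diagnostic tests in the deck's own vocabulary (Sensitivity, Specificity, Disease Prevalence Rate), is:

```latex
P(\text{Disease} \mid +) \;=\;
\frac{\text{Sensitivity} \times \text{DPR}}
     {\text{Sensitivity} \times \text{DPR} \;+\; (1 - \text{Specificity}) \times (1 - \text{DPR})}
\qquad
P(\text{No disease} \mid -) \;=\;
\frac{\text{Specificity} \times (1 - \text{DPR})}
     {\text{Specificity} \times (1 - \text{DPR}) \;+\; (1 - \text{Sensitivity}) \times \text{DPR}}
```

The first expression is what the deck calls Positive Test Accuracy, the second Negative Test Accuracy.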
Slide 8 - Bayesian Statistics: the semantics can be confusing. When dealing with Positive Tests, in Bayesian statistics the Disease Prevalence Rate is called the Pre Test Probability, and the accuracy of a Positive Test outcome is called the Post Test Probability. This terminology causes great confusion with Negative Tests, because the Negative Test accuracy equals 1 minus the probability that the patient has the disease. Because of that, we avoid the traditional Bayesian terminology. Example: a Positive Test with a resulting outcome accuracy of 60% corresponds to a Post Test Probability of 60%, and it means the patient has a 60% probability of having the disease. A Negative Test with a resulting outcome accuracy of 60% also corresponds to a Post Test Probability of 60%, but it means the patient has a 40% probability of having the disease. This inverse relationship leads to real confusion when reading scatter plots: you have to interpret the Y-axis completely differently for a Negative Test vs. a Positive Test. For this reason we replace the phrase "Post Test Probability" with "Positive or Negative Test Accuracy," which removes the confusion about what you are actually looking at.
Slide 9 - Starting at the End: The Decision Tree. High Disease Prevalence Rate → Positive Test: move on to treatment; Negative Test: may have to run a 2nd test. Low Disease Prevalence Rate → Positive Test: may have to run a 2nd test; Negative Test: patient is probably fine. The Tree may not be self-explanatory now, but it will be once you read the whole presentation.
Slide 10 - Working through an example
Slide 11 - A simple example. Unlike much of the rest of statistics, Bayesian results are most often not affected by population or sample size. But using a hypothetical population allows us to replicate the Bayesian statistics with frequencies and tables: instead of dealing with a set of Bayesian conditional probabilities, you can use plain arithmetic with natural frequencies or tables. [Table on slide: Disease and Test Specifications. Callouts note that the Disease Prevalence Rate is also called the Pre Test probability, and that the hypothetical population count is what allows working with frequencies.]
Slide 12 - Using Natural Frequencies. Notice how the probability that a patient has the disease given a positive test (25%) is very different from the Sensitivity of the test (true positive rate of 60%). The same is true for the Specificity of the test (80%) vs. the probability that a patient does not have the disease given a negative test (94.7%). [Table on slide: Disease and Test Specifications.]
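The slide's numbers can be reproduced with a few lines of arithmetic; a minimal sketch, using the deck's specifications (prevalence 10%, Sensitivity 60%, Specificity 80%) and its hypothetical population of 1,000:

```python
# Natural-frequency check of slide 12 (specs from the deck:
# prevalence 10%, sensitivity 60%, specificity 80%, N = 1,000).
N = 1_000
prevalence, sensitivity, specificity = 0.10, 0.60, 0.80

diseased = N * prevalence                # 100 people have the disease
healthy = N - diseased                   # 900 do not
true_pos = diseased * sensitivity        # 60 diseased people test positive
false_neg = diseased - true_pos          # 40 diseased people test negative
true_neg = healthy * specificity         # 720 healthy people test negative
false_pos = healthy - true_neg           # 180 healthy people test positive

ppv = true_pos / (true_pos + false_pos)  # accuracy of a positive test
npv = true_neg / (true_neg + false_neg)  # accuracy of a negative test
print(f"Positive test accuracy: {ppv:.1%}")  # 25.0%
print(f"Negative test accuracy: {npv:.1%}")  # 94.7%
```

This matches the slide: 25% versus a 60% Sensitivity, and 94.7% versus an 80% Specificity.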
Slide 13 - Using Tables. With the deck's specifications (Disease Prevalence Rate 10%, Sensitivity 60%, Specificity 80%, population 1,000):

                Disease   No disease     Sum
Positive test        60          180     240
Negative test        40          720     760
Sum                 100          900   1,000

Test accuracy: positive test 60/240 = 25%; negative test 720/760 = 94.7%.
Slide 14 - Sensitizing Disease Prevalence Rate, Sensitivity, and Specificity. A Disease Prevalence Rate (DPR) can greatly differ between the general population and the specific category a patient falls in (age, gender, pre-existing conditions, and other health metrics). Depending on which DPR we use, a test's accuracy will be greatly affected.
Slide 15 - Sensitizing the Disease Prevalence Rate. When the Disease Prevalence Rate is low, Positive tests are inaccurate (low Post Test Probability) and Negative tests are accurate. That makes good sense. But keep in mind that in all instances we have kept the Sensitivity (60%) and Specificity (80%) of the test unchanged! If the Disease Prevalence Rate is low and a patient receives a Negative test, he can be reasonably sure he does not have the disease. If the Disease Prevalence Rate is high and a patient receives a Positive test, he can be reasonably sure he does have the disease. [Chart: accuracy of Positive and Negative test outcomes (0%–100%) vs. Disease Prevalence Rates from 1% to 90%.]
Slide 16 - Sensitizing Disease Prevalence and Sensitivity [Two charts: Sensitivity (50%, 60%, 70%, 80%) impact on Positive Test Accuracy and on Negative Test Accuracy, each plotted against Disease Prevalence Rates from 1% to 90%.]
Slide 17 - Sensitizing Disease Prevalence and Sensitivity: a closer look. Sensitizing Sensitivity (the true positive rate) has a much stronger impact on Negative Test accuracy than on Positive Test accuracy, because higher Sensitivity reduces the False Negatives. [Same two charts as slide 16.]
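The asymmetry described here can be checked numerically; a sketch at an assumed 70% prevalence (where the negative-test curves fan out most in the charts), holding Specificity at the deck's 80%:

```python
# Asymmetry check (sketch): vary Sensitivity and compare how far the
# positive-test and negative-test accuracies spread apart.
def accuracies(dpr, sens, spec):
    """(positive, negative) test accuracy via Bayes' theorem."""
    pos = sens * dpr / (sens * dpr + (1 - spec) * (1 - dpr))
    neg = spec * (1 - dpr) / (spec * (1 - dpr) + (1 - sens) * dpr)
    return pos, neg

dpr, spec = 0.70, 0.80  # assumed high-DPR point; spec from the deck
results = [accuracies(dpr, sens, spec) for sens in (0.50, 0.60, 0.70, 0.80)]
pos_spread = max(p for p, _ in results) - min(p for p, _ in results)
neg_spread = max(n for _, n in results) - min(n for _, n in results)
print(f"Positive-test accuracy spread across Sensitivities: {pos_spread:.1%}")
print(f"Negative-test accuracy spread across Sensitivities: {neg_spread:.1%}")
```

The negative-test spread comes out several times larger than the positive-test spread, which is the slide's point.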
Slide 18 - Sensitizing Disease Prevalence and Specificity [Two charts: Specificity (60%, 70%, 80%, 90%) impact on Positive Test Accuracy and on Negative Test Accuracy, each plotted against Disease Prevalence Rates from 1% to 90%.]
Slide 19 - Sensitizing Disease Prevalence and Specificity: a closer look. Sensitizing Specificity (the true negative rate) has a much stronger impact on Positive Test accuracy than on Negative Test accuracy, because higher Specificity reduces the False Positives. [Same two charts as slide 18.]
Slide 20 - Multiple-test situations
Slide 21 - The 2-test situation. For the first test, one may often use the general-population Disease Prevalence Rate (DPR), or pre test probability. A Positive Test with a low accuracy (post test probability) may invite further investigation with a second test. The second test is preferably different from the first one, and when running it, one may use as the DPR the accuracy of the first Positive Test (25% in this particular case). If this second test is also positive, the Positive Test Accuracy (post test probability) will have risen, because it uses the higher, patient category-specific DPR equal to the first test's Positive Test Accuracy.
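The 2-test chaining is just Bayes' theorem applied twice, with the first test's 25% Positive Test Accuracy reused as the second test's DPR; a sketch assuming the second test has the same Sensitivity (60%) and Specificity (80%) as the first:

```python
def positive_test_accuracy(dpr, sens=0.60, spec=0.80):
    """Positive Test Accuracy (post test probability) via Bayes' theorem."""
    return sens * dpr / (sens * dpr + (1 - spec) * (1 - dpr))

first = positive_test_accuracy(0.10)    # general-population DPR of 10%
second = positive_test_accuracy(first)  # prior = first test's accuracy
print(f"After 1st positive test: {first:.1%}")   # 25.0%
print(f"After 2nd positive test: {second:.1%}")  # 50.0%
```

With these assumed specs, a second independent positive test doubles the probability that the patient has the disease, from 25% to 50%.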
Slide 22 - A 3-test situation. In summary, the entry from Wikipedia shown on the slide states that this test has a low sensitivity of 10% to 30%. But when conducting the same test 3 times in a row, if all such tests are positive, the sensitivity rises to 92%. In the example below, we find that, based on our hypothetical disease and test specifications, a single-test sensitivity of 21% does correspond to a sensitivity of 92% when conducting the same test 3 times in a row. This procedure is reserved for specific tests designed to be run 3 times in a row. Otherwise, running the exact same test 3 times in a row artificially exaggerates the Positive Test Accuracy (post test probability).
Slide 23 - Scenarios and Decisions, Part I. Introductory cases... maybe not entirely realistic, because we will consider a test's Sensitivity and Specificity separately.
Slide 24 - Scenario 1: Low Disease Prevalence (Pre Test Prob.), Low Positive Test Accuracy (Post Test Prob.), Sensitizing Sensitivity. In this scenario, even though the patient got a positive test, he is still rather unlikely to have the disease regardless of the test sensitivity (within our specified range of 50% to 80%). A second, different test is probably desirable to confirm the first test's positive result. [Chart: Sensitivity impact on Positive Test Accuracy vs. Disease Prevalence Rate.]
Slide 25 - Scenario 2: High Disease Prevalence (Pre Test Prob.), High Positive Test Accuracy (Post Test Prob.), Sensitizing Sensitivity. In this scenario, the positive test looks pretty solid. There is a pretty high probability the patient has the disease, again regardless of the test sensitivity. It may be time to move on from diagnosis to treatment. [Chart: Sensitivity impact on Positive Test Accuracy vs. Disease Prevalence Rate.]
Slide 26 - Scenario 3: Low Disease Prevalence, High Negative Test Accuracy, Sensitizing Sensitivity. In this scenario, the negative test looks pretty solid. There is a pretty high probability the patient does not have the disease, regardless of the test sensitivity. The patient may not need further interaction with the health care system. [Chart: Sensitivity impact on Negative Test Accuracy vs. Disease Prevalence Rate.]
Slide 27 - Scenario 4: High Disease Prevalence, Mid to Low Negative Test Accuracy, Sensitizing Sensitivity. Notice here how the different Sensitivity levels have a material impact on the Negative Test Accuracy; here the Sensitivity of the test is critically important. To confirm this Negative Test result, it may be informative to run a 2nd test. Make sure the 2nd test is different and, ideally, has a higher sensitivity than the first test. [Chart: Sensitivity impact on Negative Test Accuracy vs. Disease Prevalence Rate.]
Slide 28 - Scenario 5: Low Disease Prevalence (Pre Test Prob.), Low Positive Test Accuracy (Post Test Prob.), Sensitizing Specificity. Here different Specificity levels make a huge difference. To confirm this Positive Test, either make sure you start with a first test that has a high Specificity, or run a second, different test. [Chart: Specificity impact on Positive Test Accuracy vs. Disease Prevalence Rate.]
Slide 29 - Scenario 6: High Disease Prevalence (Pre Test Prob.), High Positive Test Accuracy (Post Test Prob.), Sensitizing Specificity. Now the different Specificity levels do not make as much difference as in the "Low" Scenario 5, and there is much less uncertainty associated with this diagnostic, regardless of the Specificity level of the test. This is probably an adequate time to move on from the diagnostic phase to the treatment phase. [Chart: Specificity impact on Positive Test Accuracy vs. Disease Prevalence Rate.]
Slide 30 - Scenario 7: Low Disease Prevalence (Pre Test Prob.), High Negative Test Accuracy, Sensitizing Specificity. Negative test results look pretty solid, regardless of the Specificity level. The patient most probably does not have the disease. [Chart: Specificity impact on Negative Test Accuracy vs. Disease Prevalence Rate.]
Slide 31 - Scenario 8: High Disease Prevalence, Low Negative Test Accuracy, Sensitizing Specificity. The negative test results converge asymptotically. With a high Disease Prevalence Rate (> 70%), the patient is likely to have the disease, since the Negative Test Accuracy, i.e. the probability of not having the disease, falls well below 50%. A second test to confirm the negative result is warranted. [Chart: Specificity impact on Negative Test Accuracy vs. Disease Prevalence Rate.]
Slide 32 - Scenarios and Decisions, Part II. More realistic situations where we consider both Sensitivity and Specificity.
Slide 33 - Scenario 9: Low Disease Prevalence (Pre Test Prob.), Low Positive Test Accuracy (Post Test Prob.), Sensitizing Sensitivity & Specificity. This situation is a bit confusing. The Sensitivity graph on the left suggests the patient is unlikely to have the disease despite a positive test. The Specificity graph on the right raises much uncertainty on the issue, especially when Specificity reaches 90%. [Two charts: Sensitivity impact and Specificity impact on Positive Test Accuracy vs. Disease Prevalence Rate.]
Slide 34 - Scenario 9: a closer look. We kept the Disease Prevalence Rate constant at 10%. We kept Sensitivity constant at 60%, since earlier scenarios and graphs indicated it did not make that much difference. We sensitized the Specificity from 70% to 90%. Notice how the Positive Test Accuracy, or Post Test probability, really jumps when Specificity increases from 80% to 90%. This suggests that for Scenario 9 you have to look very closely at the test specification; and if the resulting Post Test probability reaches 40%, it may invite using a second test to confirm the diagnostic.
Slide 35 - Scenario 10: High Disease Prevalence (Pre Test Prob.), High Positive Test Accuracy (Post Test Prob.), Sensitizing Sensitivity & Specificity. This situation is reasonably clear cut: the patient is most likely to have the disease. Note we used a higher Disease Prevalence Rate of 70%. [Two charts: Sensitivity impact and Specificity impact on Positive Test Accuracy vs. Disease Prevalence Rate.]
Slide 36 - Scenario 11: Low Disease Prevalence, High Negative Test Accuracy, Sensitizing Sensitivity & Specificity. Here we sensitized Sensitivity because, as the left graph shows, it has more impact on the divergence in Negative Test Accuracy. Nevertheless, as the table suggests, the patient most likely does not have the disease. [Two charts: Sensitivity impact and Specificity impact on Negative Test Accuracy vs. Disease Prevalence Rate.]
Slide 37 - Scenario 12: High Disease Prevalence, Mid to Low Negative Test Accuracy, Sensitizing Sensitivity & Specificity. Now we are just about flipping a coin. We probably need to run a second, different test to confirm that the patient does not have the disease. [Two charts: Sensitivity impact and Specificity impact on Negative Test Accuracy vs. Disease Prevalence Rate.]
Slide 38 - Conclusion. One of the most important inputs is the Disease Prevalence Rate (DPR), and understanding when it is relevant to use a general-population DPR vs. a patient category-specific DPR. As the next two bullet points illustrate, getting the correct DPR pretty much drives everything in post-clinical-test decision making.
- With an appropriate and relevant high DPR, a Positive Test is likely to be accurate (regardless of Sensitivity within a reasonable range) and a Negative Test not so accurate (regardless of Specificity within a reasonable range). For the Positive Test, you will probably get reasonably reliable results, suggesting it may be time to move from diagnostic to therapy. For the Negative Test, you may need to run a second, different test to confirm that the patient does not have the disease.
- With an appropriate and relevant low DPR, a Positive Test is likely to be inaccurate (regardless of Sensitivity within a reasonable range) and a Negative Test reasonably accurate (regardless of Specificity within a reasonable range). For the Positive Test, you may need to run a second, different test to confirm that the patient has the disease before moving on to the treatment phase. For the Negative Test, you may get reasonably reliable results that the patient does not have the disease.
Slide 39 - Conclusion: Decision Tree. High Disease Prevalence Rate → Positive Test: move on to treatment; Negative Test: may have to run a 2nd test. Low Disease Prevalence Rate → Positive Test: may have to run a 2nd test; Negative Test: patient is probably fine.
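The tree reduces to a small lookup; the split into "high" vs. "low" DPR is a judgment call the deck leaves to the clinician, so it is simply a boolean input in this sketch:

```python
def next_step(dpr_is_high: bool, test_positive: bool) -> str:
    """Map the deck's decision tree to a recommendation.
    Whether the DPR counts as 'high' is left to the clinician."""
    if dpr_is_high:
        return "Move on to treatment" if test_positive else "May have to run a 2nd test"
    return "May have to run a 2nd test" if test_positive else "Patient is probably fine"

print(next_step(dpr_is_high=True, test_positive=True))    # Move on to treatment
print(next_step(dpr_is_high=False, test_positive=False))  # Patient is probably fine
```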
Slide 40 - Conclusion: High Disease Prevalence Rate
Slide 41 - Conclusion: Low Disease Prevalence Rate