Ideas and Opinions
17 December 2019

Should Health Care Demand Interpretable Artificial Intelligence or Accept “Black Box” Medicine?

Publication: Annals of Internal Medicine
Volume 172, Number 1
In recent years, health care applications of artificial intelligence (AI) have emerged, such as detecting atrial fibrillation from electrocardiograms, diagnosing retinopathy from optical coherence tomography scans, and predicting in-hospital mortality risk from electronic health records (1–3). Artificial intelligence can also assist with more abstract clinical situations, such as predicting the onset of sepsis before clinician recognition (4). Artificial intelligence approaches, such as deep learning, rely on vast amounts of data and complex model structures with millions of parameters. For example, the Inception v3 model (Google), which is more accurate than physicians at identifying diabetic retinopathy from fundus photographs …
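
To make that scale concrete, the short sketch below (ours, not the article's) uses the Keras implementation of Inception v3 to count the model's parameters; it assumes TensorFlow is installed, and passing weights=None builds the architecture without downloading pretrained weights.

    import tensorflow as tf

    # Build the Inception v3 architecture only; no pretrained weights are fetched.
    model = tf.keras.applications.InceptionV3(weights=None)

    # Prints a count on the order of 24 million parameters.
    print(f"Inception v3 parameters: {model.count_params():,}")

A model of this size cannot be understood by inspecting its weights, which is why the interpretability work cited in references 9 and 10 instead approximates such networks with simpler surrogates, such as decision trees, whose logic a clinician can follow.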


References

1.
Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25:65-9. [PMID: 30617320] doi: 10.1038/s41591-018-0268-3
2.
De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24:1342-50. [PMID: 30104768] doi: 10.1038/s41591-018-0107-6
3.
Rajkomar A, Oren E, Chen K, et al. Scalable and accurate deep learning with electronic health records. NPJ Digit Med. 2018;1:18. [PMID: 31304302] doi: 10.1038/s41746-018-0029-1
4.
Nemati S, Holder A, Razmi F, et al. An interpretable machine learning model for accurate prediction of sepsis in the ICU. Crit Care Med. 2018;46:547-53. [PMID: 29286945] doi: 10.1097/CCM.0000000000002936
5.
Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception architecture for computer vision. In: Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, Nevada, 26 June–1 July 2016. Piscataway, NJ: IEEE; 2016:2818-26.
6.
Simonite T. Google's AI guru wants computers to think more like brains. Accessed at www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/ on 1 August 2019.
7.
Poursabzi-Sangdeh F, Goldstein DG, Hofman JM, et al. Manipulating and measuring model interpretability. Preprint at ArXiv. Accessed at https://arxiv.org/abs/1802.07810 on 30 October 2019.
8.
van Walraven C, Dhalla IA, Bell C, et al. Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community. CMAJ. 2010;182:551-7. [PMID: 20194559] doi: 10.1503/cmaj.091117
9.
Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. Preprint at ArXiv. Accessed at https://arxiv.org/pdf/1503.02531.pdf on 30 October 2019.
10.
Craven MW, Shavlik JW. Extracting tree-structured representations of trained networks. In: Touretzky DS, Mozer MC, Hasselmo ME, eds. Proceedings of the 8th International Conference on Neural Information Processing Systems, Denver, Colorado, 27 November–2 December 1995. Cambridge, MA: MIT Press; 1995:24-30.

Information & Authors


Published In

Annals of Internal Medicine
Volume 172, Number 1, 7 January 2020
Pages: 59-60
doi: 10.7326/M19-2548

History

Published online: 17 December 2019
Published in issue: 7 January 2020

Authors

Affiliations

Fei Wang, PhD
Weill Cornell Medicine, New York, New York (F.W.)
Rainu Kaushal, MD, MPH
Weill Cornell Medicine and New York–Presbyterian Hospital, New York, New York (R.K., D.K.)
Dhruv Khullar, MD, MPP
Weill Cornell Medicine and New York–Presbyterian Hospital, New York, New York (R.K., D.K.)
Corresponding Author: Fei Wang, PhD, Weill Cornell Medicine, 425 East 61st Street, Suite 301, New York, NY 10065; e-mail, [email protected].
Current Author Addresses: Dr. Wang: Weill Cornell Medicine, 425 East 61st Street, Suite 301, New York, NY 10065.
Drs. Kaushal and Khullar: Weill Cornell Medicine, 402 East 67th Street, New York, NY 10065.
Author Contributions: Conception and design: F. Wang, R. Kaushal, D. Khullar.
Drafting of the article: F. Wang, D. Khullar.
Critical revision of the article for important intellectual content: F. Wang, R. Kaushal, D. Khullar.
Final approval of the article: F. Wang, R. Kaushal, D. Khullar.
Statistical expertise: F. Wang.
Administrative, technical, or logistic support: F. Wang.
Collection and assembly of data: D. Khullar.
This article was published at Annals.org on 17 December 2019.
