Ideas and Opinions | 6 June 2023

    The sudden and shocking appearance of ChatGPT (OpenAI)—able to write scientific articles, pass medical licensing examinations, fetch CPT (Current Procedural Terminology) codes, and develop differential diagnoses (1, 2)—raises immediate questions about how health systems will use conversational artificial intelligence, or chatbots, in patient-facing contexts. ChatGPT may catalyze expansion of this technology’s uses in patient communication. Chatbots are already using other natural language processing methods to check COVID-19 symptoms, manage chronic diseases, support mental health treatment, and deliver genetic test results (3).

    Chatbots promise to support medical education, research, and practice, but not without peril. They raise ethical issues around safety, ...


    • 1. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2:e0000198. [PMID: 36812645] doi:10.1371/journal.pdig.0000198
    • 2. Schinkel M, Paranjape K, Nanayakkara P. Written by humans or artificial intelligence? That is the question [Editorial]. Ann Intern Med. 2023;176:572-573. [PMID: 36913691] doi:10.7326/M23-0154
    • 3. McGreevey JD, Hanson CW, Koppel R. Clinical, legal, and ethical aspects of artificial intelligence-assisted conversational agents in health care. JAMA. 2020;324:552-553. [PMID: 32706386] doi:10.1001/jama.2020.2724
    • 4. Liao Y, He J. Racial mirroring effects on human-agent interaction in psychotherapeutic conversations. In: Paternò F, Oliver N, Conati C, et al., eds. IUI ’20: Proceedings of the 25th International Conference on Intelligent User Interfaces, Cagliari, Italy, 17–20 March 2020. Association for Computing Machinery; 2020:430-442. doi:10.1145/3377325.3377488
    • 5. Takeshita J, Wang S, Loren AW, et al. Association of racial/ethnic and gender concordance between patients and physicians with patient experience ratings. JAMA Netw Open. 2020;3:e2024583. [PMID: 33165609] doi:10.1001/jamanetworkopen.2020.24583
    • 6. Thaler RH, Sunstein CR. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press; 2008.
    • 7. Blumenthal-Barby JS. Good Ethics and Bad Choices: The Relevance of Behavioral Economics for Medical Ethics. MIT Press; 2021.
    • 8. Eyssel F, Hegel F. (S)he's got the look: gender stereotyping of robots. J Appl Soc Psychol. 2012;42:2213-2230. doi:10.1111/j.1559-1816.2012.00937.x
    • 9. Darcy A, Daniels J, Salinger D, et al. Evidence of human-level bonds established with a digital conversational agent: cross-sectional, retrospective observational study. JMIR Form Res. 2021;5:e27868. [PMID: 33973854] doi:10.2196/27868
    • 10. Rajkomar A, Hardt M, Howell MD, et al. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169:866-872. [PMID: 30508424] doi:10.7326/M18-1990