AI in Healthcare: ethics, challenges, and implications
WRITTEN BY: JUWON PARK
From The Terminator to Iron Man’s JARVIS, the media has shaped our perception of artificial intelligence. Today, AI’s arrival is felt through the daily use of virtual assistants like Apple’s Siri and Amazon’s Alexa. As these systems become more advanced and capable of more complex tasks, concern has grown that robots and artificial intelligence will replace human jobs. Physicians in particular have been apprehensive as computer-assisted image analysis and deep learning continue to advance.
The potential applications of AI in medicine raise ethical challenges because of their capacity to threaten patient preference, safety, and privacy (Rigby, 2019). Current policies and ethical guidelines for AI technology in healthcare have fallen behind the technology’s progress. The most pressing concerns involve patient privacy and confidentiality, the boundary between the roles of humans and AI, and how future physicians should be educated for a changing field. This policy gap also creates practical limitations for implementing AI in hospitals: disease-diagnosing algorithms remain largely unregulated, leaving open questions of liability for errors and malfunctions, along with ethical and privacy questions that existing regulations do not address (Santoro, 2017).
In one study, participants were less likely to schedule a diagnostic stress assessment when the provider was automated, even when the automated provider was just as accurate as a human. This distrust of “automated” providers, or medical AI, stems from a concern called uniqueness neglect: consumers believe their own circumstances are unique, and that machines, which operate only in standardized ways, are incapable of accounting for that uniqueness when making medical decisions. By contrast, participants in the same study were more likely to use a medical service when medical AI supported a human provider who made the final decisions (Longoni, 2019). This suggests that AI offers the most value in healthcare when it augments, rather than replaces, human clinicians.
Some believe that computer vision and algorithmic interpretation of medical images could replace the doctors who rely on those images to diagnose and treat patients. Yet research has shown that a doctor’s bedside manner can affect healthcare outcomes (Kelley, 2014). A patient’s relationship with their doctor, and the expectations about healing that develop from that relationship, demonstrate that social context can drive placebo responses (Shashkevich, 2017). Soft skills like interacting with patients and bedside manner are irreplaceable and have measurably positive effects on patients.
The data-intensive era of science has made it advantageous for physicians to become familiar with AI analytics. Within medicine especially, users of medical AI should be aware that limitations in input data quality have health implications: they affect a model’s accuracy in predicting clinical disease risks and patient outcomes (Miller, 2019). Because deep learning algorithms rely on large amounts of high-quality medical data, they serve patients with rare diseases poorly. The same holds for diseases whose diagnosis requires expensive, sophisticated, invasive, or dangerous tests (Chang, 2019).
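A minimal sketch can illustrate the rare-disease problem described above. The numbers and the “model” here are hypothetical, not from the cited studies: with a condition present in only 1% of a cohort, a model trained on such skewed data can collapse to always predicting the majority class and still report high accuracy while never detecting the disease.

```python
# Illustrative sketch (hypothetical data): why scarce examples of a rare
# disease undermine a learned model's usefulness, even when its overall
# accuracy looks impressive.

def evaluate(y_true, y_pred):
    """Return (overall accuracy, recall for the rare 'disease' class)."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    disease_cases = [i for i, t in enumerate(y_true) if t == "disease"]
    detected = sum(y_pred[i] == "disease" for i in disease_cases)
    recall = detected / len(disease_cases) if disease_cases else 0.0
    return accuracy, recall

# Hypothetical cohort: 990 healthy patients, 10 with a rare disease.
y_true = ["healthy"] * 990 + ["disease"] * 10

# A naive model trained on such imbalanced data can degenerate into
# always predicting the majority class.
y_pred = ["healthy"] * 1000

accuracy, recall = evaluate(y_true, y_pred)
print(f"accuracy: {accuracy:.2%}, rare-disease recall: {recall:.0%}")
# accuracy: 99.00%, rare-disease recall: 0%
```

The model is 99% accurate yet clinically useless for the patients who need it most, which is why data quality and representation, not headline accuracy, determine whether medical AI is safe to deploy.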
There has also been discussion of AI as an empowerment tool for healthcare providers. Implementing AI in medicine and healthcare appears inevitable, and its promise has not gone unnoticed: an Accenture report estimates that the AI health market will reach $6.6 billion by 2021, roughly a $6 billion increase from 2014. As the industry grows, healthcare providers will find it difficult to avoid AI. It will be the providers who use AI who replace those who do not.
For example, a Stanford-Google digital-scribe pilot study aims to use AI-assisted voice recognition to enter data into a patient’s electronic health record while the physician is still in the room with the patient (Bach, 2017). The technology is meant to reduce the time physicians spend on manual data entry, which has been shown to consume one-sixth of an average physician’s working hours (Woolhandler, 2014); a Mayo Clinic study has likewise linked EHR use with physician burnout (Shanafelt, 2016). Physicians who adopt such tools can use the time saved to better support their patients, and they will replace those who do not.
The hype surrounding AI has given rise to concerns that must still be addressed before further implementation. As research and advancements continue, what will the future of AI in medicine look like over the next ten years?
Citations:
Bach, B., MacCormick, H., & White, T. (2017, November 21). Stanford-Google digital-scribe pilot study to be launched. Retrieved from http://scopeblog.stanford.edu/2017/11/21/stanford-google-digital-scribe-pilot-study-to-be-launched/.
Chang, A. (2019). Common Misconceptions and Future Directions for AI in Medicine: A Physician-Data Scientist Perspective. Conference on Artificial Intelligence in Medicine in Europe, 3–6. doi: https://doi.org/10.1007/978-3-030-21642-9_1
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46(4), 629–650. doi: https://doi.org/10.1093/jcr/ucz013
Miller, D. D. (2019). The medical AI insurgency: what physicians must know about data to practice with intelligent machines. npj Digital Medicine. Retrieved from https://www.nature.com/
Rigby, M. J. (2019, February 1). Ethical Dimensions of Using Artificial Intelligence in Health Care. Retrieved from https://journalofethics.ama-assn.org/article/ethical-dimensions-using-artificial-intelligence-health-care/2019-02.
Santoro, E. (2017). Artificial intelligence in medicine: limits and obstacles. Recenti Progressi in Medicina, 108(2), 500–502. doi: 10.1701/2829.28580.
Shanafelt, T. D., Dyrbye, L. N., Sinsky, C., Hasan, O., Satele, D., Sloan, J., & West, C. P. (2016). Relationship Between Clerical Burden and Characteristics of the Electronic Environment With Physician Burnout and Professional Satisfaction. Mayo Clinic Proceedings, 91(7), 836–848. doi: https://doi.org/10.1016/j.mayocp.2016.05.007
Shashkevich, A. (2017, March 8). Patient mindset matters in healing and deserves more study, experts say. Retrieved from https://med.stanford.edu/news/all-news/2017/03/health-care-providers-should-harness-power-of-mindsets.html.
Woolhandler, S., & Himmelstein, D. U. (2014). Administrative work consumes one-sixth of U.S. physicians’ working hours and lowers their career satisfaction. International Journal of Health Services: Planning, Administration, Evaluation, 44(4), 635–642. doi: 10.2190/HS.44.4.a