Alphabet CEO Sundar Pichai recently said that “AI could have more profound implications on humanity than electricity or fire.” In the age of increasingly advanced AI and large language models, or LLMs, doctors face a question that requires some serious soul-searching: in a world where AI handles signal analysis, diagnosis, treatment recommendations, and communication, what do we need doctors for? Are the days of parents dreaming of their daughters and sons donning the white coat a thing of the past?
With the public release of ChatGPT, the most prominent example of an LLM, anxiety in healthcare has increased rather than decreased. Automation has long been a fact of life – think of how many hours washing machines freed up once they became commonplace. At its best, automation spares humans work that is repetitive or even dangerous. But this time feels different to many – AI is now coming after white-collar jobs, including those in healthcare, a field long thought to be protected and somehow insulated from the whims of the modern disruptive economy. It’s no longer only truck drivers at risk – it’s now Hollywood screenwriters, law firm partners, and surgeons.
PwC projects that AI could contribute as much as $15.7 trillion to the global economy by 2030, a 14% increase in global GDP. Just consider how AI has allowed businesses to monitor their connected systems and make sense of huge quantities of data in real time. Now we see the same potential in healthcare, where Statista projects that the sector’s annual AI market will reach $188 billion by 2030.
Healthcare AI systems, even early ones, focused on diagnosis, a complicated process. ChatGPT, in particular, was trained on text from many disciplines and has passed both law and medical exams. The implication is that it has basic competence, albeit no actual semantic understanding of health and medicine.
Now all of that is rapidly changing. These language models are expected to revolutionize medicine, from generating diagnoses and treatment plans to handling paperwork more efficiently and deciphering complex scientific information.
Even more unsettling, these language models are not restricted to reasoning and analysis; they can also provide the human layer of emotional intelligence, counseling, and communication – precisely where doctors should excel. In short, a good “bedside manner” is no longer the asset it once was. For many, this raises serious concerns about the future of medicine, as software may soon take over doctors’ traditional roles. We expected software to help us with repetitive reasoning tasks, like analyzing EKGs and X-rays; now it’s encroaching on the emotional intelligence and counseling side of medical practice. Just think of the business value and productivity gains to be captured.
Yes, the core of medicine is conversation, analysis, reasoning, and counseling. Patients, for their part, want discussion and answers – and the healthcare system, notorious for its inadequate communication, is doing a poor job of providing them. Now there’s software that can do both things better and faster than a doctor can. Patients may begin to question why they’re paying for an email from their doctor when they can get a better, more efficient response from ChatGPT or GPT-4. The unexpected ability of these language models to provide emotional intelligence and counseling poses a different kind of threat to the medical field. It forces us to do some soul-searching and to think hard about redefining the role of physicians in the era of AI – and we need to do so quickly, as the International Data Corporation has projected that global spending on AI systems will soon reach $98 billion, growing over 27% annually from 2019.
The ability of LLMs to create empathetic interactions on par with doctors isn’t just theoretical. A recent study showed that AI systems could provide more empathetic and more useful interactions with patients than their human MD counterparts.
So, what are some potential focal areas for today’s medical students and tomorrow’s physicians? One way to frame it is to imagine the doctor as a conductor – integrating information from many sources, mastering tools, and verifying the accuracy of knowledge.
- Doctors may take on a supervisory role, conducting the system of systems that will define medical care.
- Medical professionals can fact-check the medical information these AI systems provide to ensure it’s correct.
- They can also be responsible for overseeing several AI systems that assist in different areas of medical practice.
- Doctors will still be needed for complicated cases and for when things go wrong. Much as today’s airline pilots act as supervisors of automated systems, doctors can supervise several AI systems and remain the humans who handle the long tail of medical cases that require human intervention.
Above all, the doctors who dive headfirst into the challenge of personalized medicine and behavior change will win.
The arms race between companies is heating up. More capable models are on the way, including the next version of ChatGPT, Meta’s Llama, and Google’s Bard. These technologies are the tip of the iceberg, and they have shown the public what’s possible. Companies are now training these models for specific tasks to get better at specific domains. This will upend everything from 9th-grade English class to entire industries like marketing and advertising. Healthcare is no different.
Doctors’ roles are going to change, and there’s no use in resisting that. My hope is that in the process, we can reinvent the profession for the better – and get back to putting patients at the center of the experience.
Ravi Komatireddy, MD, is Founder & CEO of Daytona Health and a renowned digital health entrepreneur.