AI in Medicine Won't Be Replacing Physicians Anytime Soon
We’ve all seen examples in recent months of what ChatGPT and other early generative AI systems can write – news articles, songs, computer code, to name a few – with varying degrees of success. Yet strong medical guidance is not among them, and it likely won’t be anytime soon.
As a practicing radiation oncologist at Yale School of Medicine and the co-founder of theMednet.org, a leading Q&A platform for physicians, I’ve been running clinical questions by ChatGPT to compare its answers to those we receive from actual doctors on the platform.
For example, when I asked ChatGPT about offering consolidative thoracic radiation for patients with oligometastatic NSCLC following upfront immunotherapy, it provided a series of definitions of the various terms in the question and some basic information about how an oncologist would make a decision. In other words, it told me what any oncologist would already know. On theMednet.org, by contrast, we received detailed responses from physicians, with specific medical references to inform treatment decisions.
To be sure, AI has the potential to benefit both my clinic patients and the more than 20,000 physicians who use theMednet platform, two-thirds of whom are the oncologists AI is expected to impact first. However, rather than replacing doctors, artificial intelligence will almost certainly make human medical intelligence more important than ever.
There are two major reasons for this: knowledge and trust.
Today, AI solutions are good at aggregating data that already exists and placing it at the point of care. In other words, they assemble what’s already out on the internet, what we call known data. AI can tell me what’s in textbooks and guidelines or what has been published. However, data is not knowledge, and many patient cases don’t fit neatly into what’s known. For those cases, we consult with colleagues and academic medical experts. It’s their real-world experience that differentiates data from knowledge, helping us make better-informed decisions when diagnosing and treating patients.
Recently I saw a patient with lung cancer whose case illustrates this concern. She has a lesion in her brain that looks benign. If an AI-based system were to interpret her scan on its own, it would almost certainly read the lesion as a benign brain tumor. But in the context of lung cancer that is currently progressing in her lung and an older MRI that showed a smaller spot in her brain, it is more likely that we are looking at lung cancer that has metastasized to the brain.
The second reason is trust. There are complexities to what information an AI surfaces and what it doesn’t. Today there is too little transparency into how AI models are built and programmed, and that transparency will be critical for AI to expand its role in clinical practice, especially if physicians are to be comfortable with treatment recommendations rather than merely surfaced insights. If we understand AI’s limitations, we can work alongside AI experts to ensure algorithms are informed by good clinical practice and real-world expertise.
That said, I do see enormous potential for this technology to help clinicians and patients alike.
In a recent poll we conducted on theMednet, 77% of physicians surveyed said they believed AI would help them with decision-making, treatment planning and administrative burdens, among other things. But it’s still early days for most. In a separate poll asking how they’re using AI today, 70% of physicians surveyed said they weren’t using it at all. That percentage will certainly rise as the technology evolves.
By employing this technology, we will be able to screen more patients for cancer because AI will help radiologists interpret CT scans and mammograms. In oncology treatment planning, the work of drafting detailed radiation plans for my patients is still done by hand: I spend hours a day manually going through each slice of a CT scan to outline a patient’s organs and the tumor. AI will do this automatically, freeing me up to see more patients and to spend the time necessary on cases requiring more attention.
AI will also help modify treatment plans on a daily basis, so that as a tumor shrinks or a patient loses weight, the radiation dose can be adjusted in real time to reflect those changes. These adaptive technologies are still the subject of considerable research and are not yet in widespread use, but the potential is there.
At the end of the day, medicine is a human endeavor. Any technology or platform that makes it easier for physicians to interpret data and improve patient outcomes will benefit our profession and our patients in a profound way. Yet there is a level of medical insight that comes only from sitting in a room with a patient and understanding not just the details of their medical condition but their lifestyle, emotional health, economic situation, support system, and more. The combination of the two would be powerful.
The better the AI gets, the better we’ll get as physicians.