To Err Is Human, To Diagnose Artificial Intelligence Is...?
A new study found that physicians have surprisingly poor knowledge of the benefits and harms of common medical treatments. Almost 80% overestimated the benefits, and two-thirds overestimated the harms. And, as Aaron Carroll pointed out, it's not just that they were off, but "it's how off they often were."
Anyone out there who still doesn't think artificial intelligence (AI) is needed in health care?
The authors noted that previous studies have found that patients often overestimate benefits as well, but tend to minimize potential harms. Not only do physicians overestimate harm, they "underestimate how often most treatments have no effects on patients -- either harmful or beneficial." Perhaps this is, at least in part, because "physicians are poor at assessing treatment effect size and other aspects of numeracy."
The authors pointed out that "even when clinicians understand numeracy, expressing these terms in a way patients will understand is challenging." When asked how often they discussed absolute or relative risk reduction, or the number needed to treat (NNT), with patients, 47% said "rarely" -- and a third said "never."
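To be fair, the arithmetic itself is simple; it's the habit of doing it that's missing. Here's a minimal sketch -- with made-up event rates, not figures from the study -- of how absolute risk reduction, relative risk reduction, and NNT relate:

```python
# Illustrative only: invented event rates, not numbers from the study.
control_event_rate = 0.10   # 10% of untreated patients have the bad outcome
treated_event_rate = 0.08   # 8% of treated patients do

arr = control_event_rate - treated_event_rate  # absolute risk reduction
rrr = arr / control_event_rate                 # relative risk reduction
nnt = 1 / arr                                  # number needed to treat

print(f"ARR: {arr:.0%}")   # 2%  -- the absolute difference
print(f"RRR: {rrr:.0%}")   # 20% -- the same difference, dressed up
print(f"NNT: {nnt:.0f}")   # 50  -- treat 50 patients for 1 to benefit
```

The same 2-point drop can be pitched as a 20% relative reduction, which is exactly why the framing matters when talking to patients.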
Dr. Carroll's reaction to this: "I’m screaming in my office because I feel like it’s all I talk about."
An accompanying editorial called for more physician training, and also urged more use of visual representations of probabilities. Better visual representation of data is certainly good, but one wonders why physicians should need more training in understanding what treatments have what kind of value to their patients. Isn't that, in fact, the whole point of medical education?
So, who/what is good with numeracy, remembering statistics, and evaluating data? There are math nerds, of course, but, instead of going to medical school, as they once might have, they're probably making fortunes working on Wall Street or trying to make billions with their tech start-ups. Then, of course, there is AI.
An AI would know precisely the documented benefits and risks of treatments, common or not, and maybe even produce a nifty graph to help illustrate them.
Probably the best-known AI in health care is IBM's Watson, but IBM definitely doesn't have the field to itself. CB Insights recently profiled over 90 AI start-ups in health care, with over 55 equity funding rounds already this year (compared to 60 in all of 2015). These run the gamut; CB Insights categorized them into: drug discovery, emergency room & hospital management, healthcare research, insights and risk management, lifestyle management & monitoring, medical imaging & diagnostics, mental health, nutrition, wearables, virtual assistants, and miscellaneous.
No wonder Frost & Sullivan projects this to be a $6.7 billion industry by 2025.
Take a look at some of AI's recent successes:
- Researchers at Houston Methodist developed AI that improves breast cancer risk prediction, translating data "at 30 times human speed and with 99 percent accuracy."
- Harvard researchers created AI that can differentiate breast cancer cells almost as well as pathologists can (92% versus 96%) -- and, when used in tandem with humans, raised accuracy to 99.5% (see the rough arithmetic after this list).
- Stanford researchers developed AI that they believe beats humans in analyzing tissue cells for cancer, partly because it can identify far more traits that lead to a correct diagnosis than a human can. No wonder some think AI may replace radiologists!
- A Belgian study found that AI can provide a "more accurate and automated" interpretation of tests for lung disease.
- Watson recently diagnosed a case of leukemia that physicians had missed -- with the benefit of comparing the patient's data to 20 million cancer records.
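Why would pairing a 92%-accurate AI with a 96%-accurate pathologist beat either alone? A rough back-of-the-envelope -- my own, not the study's analysis -- assuming the two make largely independent mistakes:

```python
human_error = 1 - 0.96   # pathologists miss about 4% of cases
ai_error = 1 - 0.92      # the AI misses about 8%

# If the tandem only fails when BOTH fail (an idealized assumption),
# the combined error rate is the product of the individual ones.
tandem_error = human_error * ai_error
print(f"tandem accuracy: {1 - tandem_error:.2%}")  # ~99.7%, near the reported 99.5%
```

In practice the errors are never fully independent, but the principle holds: the AI and the human catch different mistakes.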
Current iterations of AI are less truly "intelligent" than just really, really fast at what they do. That is changing, though, as AI becomes less about what we program it to do and more about using "deep learning" to get smarter. Deep learning essentially uses trial and error -- at almost incomprehensible speeds -- to figure out how to accomplish tasks. It's how AI has gone from lousy at image recognition to comparable to -- or better than -- humans.
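Here's a toy sketch of that trial-and-error loop (a deliberately crude stand-in for real deep learning, which adjusts millions of weights using gradients rather than random guesses):

```python
import random

# Goal: learn y = 2x from examples, purely by guess-and-check.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

def error(w):
    # How wrong is weight w across all examples?
    return sum((w * x - y) ** 2 for x, y in data)

w = random.uniform(-10, 10)                    # start from a random guess
for _ in range(10_000):                        # thousands of fast trials
    candidate = w + random.uniform(-0.1, 0.1)  # try a small random tweak
    if error(candidate) < error(w):            # keep it only if it helps
        w = candidate

print(f"learned weight: {w:.3f}  (target: 2.0)")
```

Real deep learning replaces the random tweaks with calculated ones, but the keep-what-works feedback loop -- run at machine speed -- is the same idea.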
One of the dirty little secrets of health care is that much of our care is based on trial and error -- and that trial and error is often limited to our physician's personal training and experience. If we're lucky, we have a physician who has seen lots of similar patients and is very well versed in the research literature, but he/she is still working with much less information than an AI could have access to -- even about one of the physician's own patients.
Our problem is going to be when we simply don't know what an AI did or why it reached the conclusion it did. No human could have searched through the 20 million records Watson did to identify the leukemia, but at least it was a fairly objective result. Someday soon AI will pull together seemingly unrelated facts to produce diagnoses or proposed treatments that we simply won't be able to follow, much less replicate.
We've had trouble getting state licensing boards to accept telemedicine, sometimes even when it's used by physicians they've licensed, much less by physicians from other states. One can only imagine how they, or the FDA, will react to AIs that come up with diagnoses and treatments for reasons they can't explain to us.
Then again, many physicians might sometimes have the same problem. How much higher a standard should we hold AI to? How much better do they have to be?
In the short term, AI is likely to be a tool that expands physicians' capabilities; as IBM's Kyu Rhee says, "as ubiquitous as the humble stethoscope." In the mid-term, AIs may be partners with physicians, adding value on an equal basis. And in the not-too-distant future, they may be alternatives to, or even replacements for, physicians. With AI's capabilities growing exponentially and computing costs falling, human physicians may become a costly luxury many can't -- or won't want to -- afford.
One suspects that AIs will be ready for us before we're ready for them.
To Err Is Human, To Diagnose AI? was authored by Kim Bellard and first published in his blog, From a Different Perspective.... It is reprinted by Open Health News with permission from the author. The original post can be found here.