My Phone Says I've Looked Better
I admit it; I'm a sucker for artificial intelligence. For example, this week Google said it has developed "Smart Reply," which uses AI to suggest potential replies to your emails, based on their contents, your previous replies, and other emails Gmail has -- ahem -- read. The responses are brief, but you have to assume the feature will only get smarter.
Meanwhile, Facebook says the facial recognition software it uses to help users tag photos -- which Facebook claims is already almost as accurate as a human's -- can now recognize not only the people in pictures but also the objects in the pictures with them. It can even answer questions about what is in the photos.
Of course, if Google's auto-correct suggestions or Facebook's success in tagging my photos are any indication, AI still has a ways to go.
Despite that, experts are excited about the potential application of AI to health care, as evidenced by some speakers at last week's Connected Health Symposium, hosted by Partners HealthCare:
- Partners HealthCare's Dr. Joseph Kvedar foresees us having automated health coaches in our smartphones. "This is quite doable," he said. "It's as if the puzzle pieces are there and we haven't put the jigsaw puzzle together yet."
- MIT's Joi Ito sees the doctor-computer combination as "the winning combination." He used the now-familiar example of IBM's Watson: "I think there was an announcement recently that Watson is almost finished with med school. It's sort of a joke, but sort of true... Now imagine if you had a computer that had all of the knowledge that you needed for med school and if it were available all of the time, maybe there's an argument to be made that you don't have to memorize it if it's available all of the time."
- VC mogul Vinod Khosla predicts AI will eventually help us make better diagnoses. As he points out: "The error rates in medicine, if you look at the Institute of Medicine studies, are about the same as if Google's self-driving car was allowed to drive if it only had one accident per week." Ouch.
We will someday have AI-based clinical support systems to guide physicians in real-time interactions with patients. We will someday have our own AI-based health coaches, integrated with our Cortana/Google Now/Siri/M interfaces on our various devices. And we will someday have our own AI physician avatars, supplementing or, in some cases, replacing our need for actual physicians.
Right now, though, I'm thinking about that facial recognition AI.
Current AI can sift through millions of photos to pick you out of a crowd, with varying degrees of success. Camera angles, make-up, hats, and image quality all factor into how successful such software is. Given the recent rapid rates of improvement, though, these are bumps in the road, not insurmountable barriers.
Other software can process your facial expressions, allowing it to make some good guesses about your emotions. If you are a marketer or a law enforcement officer, this information might be gold, but if your privacy is important to you, it might be a scary invasion. Someone is always watching.
What I want to know is when this AI can tell if I look sick.
It can already, I assume, look at a set of pictures featuring me in varying degrees of illness and still determine that the images are all, in fact, me. It would not seem much of a stretch for such software to look at a picture of me and determine that I don't look "normal" -- e.g., that I'm pale, in pain, or unusually tired. Jaundice or measles would seem to be a piece of cake. Issues like a swollen ankle or a gash would also be obvious, as would a limp or rapid breathing, if video or sequential still images were available.
Facial recognition software has had to learn to ignore certain variations from the norm in a person's image -- e.g., a frown versus a smile, standing versus sitting -- so this would amount to learning which variations are inconsequential and which may be attributable to a physical problem, like an illness or a severe injury. The next step would be to distinguish what that physical problem might be.
AI can already predict how you'll look as you age; it wouldn't seem hard to use similar AI to detect premature aging that could be linked to an illness -- or to visually illustrate the longer-term impacts of bad health habits. Even young "invincibles" might pay attention to the latter.
Physicians stress the importance of face-to-face visits with patients, claiming that they can pick up clues that they might miss over the phone or even via a video visit. AI should eventually be able to pick up on at least some of those clues.
Research indicates that smartphone users check their phones an astonishing 83 times a day (and the rate for younger users is some 50% higher than that!). That presents an easy opportunity for your phone to check on you and spot early warning signs.
One can easily imagine an app that, say, takes a photo of you at times throughout the day, perhaps periodically prompting you to note how you are feeling so it can better learn what your variations mean. The easy first stage would be for it to recognize variations that are unusual, even if the AI doesn't quite know what they might mean. That might warrant a "hey, maybe you should call your doctor," or a "GO TO THE ER NOW!"
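To make that first stage concrete, here is a minimal sketch, assuming some face-embedding model (not specified here) has already turned each daily photo into a fixed-length vector. The BaselineMonitor class, the thresholds, and the random stand-in data are all illustrative assumptions, not a real product or a validated screening tool.

```python
# Minimal sketch of the "easy first stage": how far is today's photo from my
# own baseline, relative to my normal day-to-day variation? Embeddings are
# assumed to come from some face-embedding model; thresholds are illustrative.
import numpy as np

class BaselineMonitor:
    def __init__(self, warn_sigma=2.0, alert_sigma=4.0):
        self.baseline_embeddings = []   # vectors from days you reported feeling fine
        self.warn_sigma = warn_sigma
        self.alert_sigma = alert_sigma

    def add_baseline(self, embedding):
        """Record a photo embedding from a day you said you felt normal."""
        self.baseline_embeddings.append(np.asarray(embedding, dtype=float))

    def check(self, embedding):
        """Return a rough message about how unusual today's photo looks."""
        baseline = np.mean(self.baseline_embeddings, axis=0)
        # Distances of past "normal" photos from the baseline define your usual variation.
        past = np.array([np.linalg.norm(e - baseline) for e in self.baseline_embeddings])
        mu, sigma = past.mean(), past.std() + 1e-9
        z = (np.linalg.norm(np.asarray(embedding, dtype=float) - baseline) - mu) / sigma
        if z > self.alert_sigma:
            return "GO TO THE ER NOW!"                      # far outside your normal range
        if z > self.warn_sigma:
            return "Hey, maybe you should call your doctor."
        return "You look like your usual self."

# Example: seed the monitor with a week of "feeling fine" photos, then check today's.
monitor = BaselineMonitor()
rng = np.random.default_rng(0)
for _ in range(7):
    monitor.add_baseline(rng.normal(size=128))              # stand-ins for real embeddings
print(monitor.check(rng.normal(size=128)))
```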
The second, and harder, stage would be to match your variations with how selected conditions visually present. For example, it could recognize a rash or a growth, and maybe the effects of a stroke. One can imagine an AI synthesizing large datasets of photos of people with specific issues to derive some commonalities, and using that learning to suggest what you might be experiencing. The AI might need to ask you what you were feeling before suggesting a potential diagnosis, but at least it could help narrow down what might be the problem -- if you even realized there was a problem.
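A rough sketch of what that matching step might look like follows, assuming a large labeled collection of photo embeddings already exists (people photographed with known, visible conditions). The scikit-learn classifier, the condition labels, and the synthetic data are stand-ins chosen purely for illustration.

```python
# Sketch of the harder "stage two": learn which visible variations map to which
# conditions from a labeled photo set, then use the model to suggest candidates.
# The embeddings and labels here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_condition_model(embeddings, labels):
    """Fit a classifier that maps a face embedding to a visible condition."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings, labels)
    return clf

def suggest_conditions(clf, embedding, top_k=3):
    """Return the top-k most likely conditions, to narrow things down before
    the app asks how you actually feel."""
    probs = clf.predict_proba([embedding])[0]
    ranked = sorted(zip(clf.classes_, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Illustrative stand-in data: 300 random "embeddings" with made-up labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 128))
y = rng.choice(["none", "jaundice", "rash"], size=300)
model = train_condition_model(X, y)
print(suggest_conditions(model, rng.normal(size=128)))
```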
I don't really mind writing my own emails and I'm happy to tag my own photos, thank you very much, but if an AI can help me recognize when I have a health issue, that'd be something I could use.
My Phone Says I've Looked Better was authored by Kim Bellard and first published in his blog, From a Different Perspective.... It is reprinted by Open Health News with permission from the author. The original post can be found here.