Artificial Intelligence, One Day, Could Keep The Doctor Away

You wake up with an angry stomach, but this time it’s got nothing to do with that questionable leftover poké dish you’d kept in the fridge for a week. No, there’s more to this, but is it enough to put everything on hold to call a doctor and potentially wait all day, missing work, just to find out it’s a minor bug? More importantly, can you really afford it? On the other hand, the notion of consulting internet-based health information services (those that most quickly come to mind will remain unnamed) is antiquated at best, and we have all heard tales of hypochondriacs using such sites to self-diagnose with, in this case, probably stomach cancer. Not only is doing so irresponsible, but it could lead to people engaging in unnecessary and potentially dangerous treatments. So what can you do?

Enter Babylon Health, a London-based firm whose mission is to give everyone across the globe access to affordable health care through an app that combines the expertise of experienced physicians, specialists, and therapists with a sophisticated AI system built on the latest advances in deep learning. The company asserts that its artificial intelligence is more than just a database: it assesses known symptoms and risk factors to provide informed medical information. Babylon’s founder, Ali Parsa, says that the best way to realize his company’s goals is to reduce people’s need to see a doctor, and this technology is engineered to do just that.

In 2017, Babylon ran a trial with a London hospital in which calls to the National Health Service’s advice line were handled in part by the company’s AI system. Many less serious conditions can be managed through self-treatment, and Parsa says that 40% of Babylon’s patients eventually stopped calling for appointments after realizing they did not need them, three times the proportion among callers who spoke with human operators. Babylon now also offers GP at Hand, a service in the UK and Rwanda (where 20% of the adult population has registered) through which patients can consult Babylon’s chatbot app or speak over video with a human doctor.

Ada Health, founded in 2011 and headquartered in Berlin (with offices in New York, London, and Munich), had its doctors and engineers spend six years developing an AI-based health care app as the company’s primary goal, something CEO and co-founder Daniel Nathrath says competitors treated as an afterthought. From the outset, Ada’s AI was designed to take all of a patient’s information into consideration, including established medical history, not just symptoms and risk factors. Using machine learning and closed feedback loops, the system acts as a pre-screening consultation or provides a physician with relevant information ahead of a consult, saving time by reducing the need to ask basic introductory questions. Co-founder and Chief Medical Officer Claire Novorol says that feedback has already shown the app to be successful in diagnosing both common and rare conditions, and that it will improve further as real physicians continue to train it into an ever-greater compendium of combined medical knowledge.
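Ada has not published its models, but the core idea of folding symptoms, risk factors, and prior medical history into a single pre-screening judgment can be sketched in a few lines. The toy Python example below is purely hypothetical: the condition names, weights, and thresholds are invented for illustration and bear no relation to Ada’s actual system.

```python
# Toy illustration only: a rule-based pre-screening score that, like the
# approach described above, weighs symptoms, risk factors, AND prior medical
# history together. All names, weights, and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class PatientIntake:
    symptoms: set[str] = field(default_factory=set)          # e.g. {"abdominal pain"}
    risk_factors: set[str] = field(default_factory=set)      # e.g. {"smoker"}
    medical_history: set[str] = field(default_factory=set)   # e.g. {"ulcer"}

# Hypothetical weights: prior history counts as much as a present symptom,
# because past conditions change which diagnoses are plausible.
WEIGHTS = {"symptoms": 2.0, "risk_factors": 1.0, "medical_history": 2.0}
RED_FLAGS = {"chest pain", "blood in stool"}  # always escalate immediately

def prescreen(patient: PatientIntake) -> str:
    """Return a coarse triage suggestion from a weighted evidence score."""
    if patient.symptoms & RED_FLAGS:
        return "see a doctor urgently"
    score = (WEIGHTS["symptoms"] * len(patient.symptoms)
             + WEIGHTS["risk_factors"] * len(patient.risk_factors)
             + WEIGHTS["medical_history"] * len(patient.medical_history))
    if score >= 6:
        return "book a consultation; summary forwarded to the physician"
    if score >= 3:
        return "monitor symptoms; re-check in 24 hours"
    return "self-care guidance"

if __name__ == "__main__":
    patient = PatientIntake(symptoms={"abdominal pain", "nausea"},
                            risk_factors={"recent travel"},
                            medical_history={"ulcer"})
    # Score is 2*2 + 1*1 + 2*1 = 7, so the suggestion is to book a consultation.
    print(prescreen(patient))
```

In a real product, the weights and thresholds would be learned from clinician-labeled outcomes and refined through the kind of closed feedback loop described above, rather than hard-coded.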

Just weeks ago, Ada Health announced the launch of its Global Health Initiative with partners Foundation Botnar and the Bill and Melinda Gates Foundation; as part of the initiative, Ada will offer its app in Romanian and Swahili, with the Swahili version extending the app’s reach to over 100 million people in sub-Saharan Africa.

All this talk in recent years of the advent of artificial intelligence and robots has mostly involved the prospect of humans being replaced by such things, an apprehension that has existed for well over half a century and has given birth to the best (and, honestly, the worst) science fiction. But are we coming closer to science fiction becoming a reality?

Researchers at the Francis Crick Institute recently published a study that used data from over 80,000 patients to train an artificial intelligence to model and predict heart disease mortality. The AI’s model ultimately proved more successful than one created and used by human experts, making correct predictions in roughly 80% of cases versus 70%. The AI-generated model uses 586 variables to assess potential patient outcomes, while the physician-developed model uses just 27, allowing the robotic cardiologist to factor details into its calculations (the number of home visits to a patient, for instance) that may not even cross a human’s mind.
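Stripped of the clinical detail, the Crick result is a familiar machine-learning pattern: a model given hundreds of candidate variables can often edge out a score built from a small, hand-picked set. The sketch below illustrates that pattern on synthetic data, assuming scikit-learn is available; it is not the Crick Institute’s dataset, model, or code, and only the 586/27 variable counts are borrowed from the study.

```python
# Minimal sketch on SYNTHETIC data: compare a classifier given many candidate
# variables with one restricted to a small subset, mirroring the
# 586-variable vs 27-variable contrast described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "patients": 586 candidate variables, only some truly informative.
X, y = make_classification(n_samples=20_000, n_features=586,
                           n_informative=60, n_redundant=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# "Expert" model: only the first 27 columns, standing in for hand-chosen factors.
expert = LogisticRegression(max_iter=2000).fit(X_train[:, :27], y_train)
# "AI" model: free to use every available variable.
broad = LogisticRegression(max_iter=2000).fit(X_train, y_train)

print("27-variable model AUC :",
      roc_auc_score(y_test, expert.predict_proba(X_test[:, :27])[:, 1]))
print("586-variable model AUC:",
      roc_auc_score(y_test, broad.predict_proba(X_test)[:, 1]))
```

On data like this, the broader model typically posts the higher score, loosely echoing the 80% versus 70% gap reported in the study.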

Even Babylon Health has recently claimed that its AI system’s diagnostic capabilities are as accurate as a doctor’s. The company’s staff fed its chatbot questions from the official exam for admission to the Royal College of General Practitioners (RCGP), a membership body for family doctors in the UK. The average passing score over the last five years has been 72%; Babylon’s chatbot scored 82%.

All that said, while the Crick Institute’s AI has proven accurate thus far, more studies would need to be conducted, and their results replicated, before it can be called reliable. And the RCGP has stated that there were flaws in the methodology Babylon used to test its AI system. So there is no need to form the Anti-Robot Resistance just yet.

A major part of the reason AI looks to be so useful is the worldwide shortage of medical staff, which is projected to reach 18 million by 2030 and is particularly acute among specialists. AI could help cover those shortages, especially where the number of clinicians per capita is low: in rural regions, in low-resource nations, and in remote communities where access to care is limited or difficult. Indeed, a third of Ada’s users hail from such places in Africa, Asia, and the Indian subcontinent. Even in better-off regions, slashed budgets, aging populations, and rising rates of chronic disease will place greater strains on health ministries; artificial intelligence can help by reducing doctors’ workloads so they can focus on patients with more pressing conditions. Ultimately, AI should be viewed as a tool rather than a replacement, and its adoption could even create new jobs; people entering the job market, or preparing to change their place in it, should train with AI so they can apply it as just that: a tool.

Finally, according to a study out of MIT, computer scientists say that flesh-and-blood physicians have something artificial intelligence lacks: gut feelings about a patient’s condition. That intuition is especially important early in a patient’s treatment, when there is little data available to analyze in order to make a diagnosis. The researchers found that how a doctor felt about a patient’s condition influenced the number and types of tests ordered: when the outlook was less than sunny, more tests were ordered, while a more positive impression meant fewer. To boil it down, artificial intelligence really only serves to spout data and perform rote memorization, just on a much greater scale than we humans can manage ourselves. What AI cannot do, at least not yet, is think creatively and critically, especially in situations that have never been encountered before; after all, how can a computer provide a solution for which it has no data? Martin Marshall, vice chair of the RCGP, stated that, “…[A]t the end of the day, computers are computers, and GPs are highly-trained medical professionals[…] the former may support, but will never replace, the latter.”
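One way a “gut feeling” can be made measurable is by scoring the tone of the notes doctors write and relating it to how many tests were ordered. The hypothetical Python sketch below shows the shape of that kind of analysis; the word lists, notes, and test counts are all invented, and this is not the MIT study’s actual method.

```python
# Hypothetical illustration: score the "sentiment" of clinical notes with a
# tiny word list and correlate it with the number of tests ordered.
# Word lists, notes, and counts are invented for demonstration only.
from statistics import correlation  # Python 3.10+

NEGATIVE = {"worried", "deteriorating", "unstable", "concerning"}
POSITIVE = {"stable", "improving", "reassuring", "comfortable"}

def note_sentiment(note: str) -> int:
    """Crude sentiment: +1 per positive word, -1 per negative word."""
    words = note.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# (clinical note, number of tests ordered) for invented encounters
records = [
    ("patient stable and improving overnight", 2),
    ("reassuring exam, comfortable at rest", 1),
    ("worried about concerning trend, unstable pressures", 7),
    ("deteriorating respiratory status, team worried", 8),
    ("stable but mildly concerning labs", 4),
]
sentiments = [note_sentiment(note) for note, _ in records]
tests = [n for _, n in records]

# A negative correlation mirrors the finding: gloomier notes, more tests.
print("correlation:", correlation(sentiments, tests))
```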

Mobasher Butt, Babylon’s medical director, says that replacing doctors was never his company’s intent, and that “[…] AI decisions are supported by real-life GPs to provide the care and emotional support that only humans are capable of.” So, physicians, breathe a sigh of relief. Your jobs are not in danger of anything except, perhaps, becoming a bit easier.