Data without pizza #3: A talk on AI in health by Professor Peter Szolovits
Below you can read the full article, written by Jessie van de Nieuwenhuijzen.
The vision from the 1970s
Szolovits is a pioneer in the field of AI in medicine, having worked in it since the 1970s, and his talk focused on the history of the development of AI applications in health. “My start in this field really started when, in 1974, I joined the faculty at MIT and I met Bill Schwartz, who was the Chief of Medicine at Tufts Medical Center,” he told us. William B Schwartz had published a prescient article in 1970 in the New England Journal of Medicine in which he laid out his vision of how the rapidly advancing information sciences would make huge changes in the way that medicine is practised. “I found this quite inspiring,” said Szolovits, “and decided that with my background in artificial intelligence, I wanted to focus on the particular problems of AI as they are applied to medicine.”
Computers as support tools
Schwartz’s predictions included the notion that computers would function as an intellectual tool – one to support physicians by taking note of history, physical findings, lab data and other factors and suggesting the most probable diagnoses and appropriate responses. This, in Schwartz’s vision, would leave physicians free to focus on the tasks that are “uniquely human” – i.e., dealing with emotional aspects of care, and exercising “good judgment in the nonquantifiable areas of clinical care”, which, in turn, would attract more doctors with an interest in the human aspects of medicine. Schwartz thought that AI would have transformed the healthcare system completely by the year 2000. “Which seemed a safe 30 years away,” pointed out Szolovits. “But in 2000, Bill was still alive, and I remember calling him up and teasing him about the fact that basically, none of the things that he had predicted had actually come to pass.”
The evolution of AI approaches to medicine
He may have been too optimistic regarding the timing, but Schwartz was certainly on to something, and in the following part of the talk we learned more about how AI approaches to medicine evolved throughout the decades. Szolovits talked us through the simple probabilistic methods used in the 1950s and 1960s, such as Naïve Bayes, then “a very common way of doing diagnostic reasoning.” Other methods are still in use today. “Linear and logistic regression are still very much in the toolkit of people analysing data in healthcare,” said Szolovits.
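To make the method concrete, here is a minimal sketch (in Python) of Naïve Bayes diagnostic reasoning of the kind Szolovits describes. The diseases, findings and probabilities are invented for illustration and are not from the talk; the point is the method’s central assumption, namely that findings are treated as independent of one another once the disease is fixed.

```python
# Minimal sketch of Naive Bayes diagnostic reasoning.
# All diseases, findings and probabilities are invented for illustration.

priors = {"flu": 0.20, "cold": 0.50, "allergy": 0.30}  # P(disease)

# P(finding present | disease); Naive Bayes assumes the findings are
# conditionally independent of each other given the disease.
likelihoods = {
    "flu":     {"fever": 0.90, "cough": 0.80, "sneezing": 0.30},
    "cold":    {"fever": 0.20, "cough": 0.70, "sneezing": 0.60},
    "allergy": {"fever": 0.01, "cough": 0.30, "sneezing": 0.90},
}

def posterior(findings):
    """Return P(disease | findings) under the independence assumption."""
    scores = {}
    for disease, prior in priors.items():
        score = prior
        for finding, present in findings.items():
            p = likelihoods[disease][finding]
            score *= p if present else (1.0 - p)
        scores[disease] = score
    total = sum(scores.values())
    return {d: s / total for d, s in scores.items()}

print(posterior({"fever": True, "cough": True, "sneezing": False}))
# -> flu comes out most probable for this combination of findings
```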
By the mid-70s, researchers at MIT, Stanford, the University of Pittsburgh and Rutgers began using systems that employed symbolic methods, such as knowledge-based systems. “Instead of unspecified probabilistic models, people used rules and matching with prototypes and logical statements in order to draw inferences from the observational data to arrive at diagnosis and therapies,” said Szolovits. The next big step followed. “By the 1980s, we became more sophisticated.”
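A toy illustration of that symbolic, rule-based style is sketched below. The rules are invented; real systems of the era, such as Stanford’s MYCIN, encoded hundreds of expert-authored rules and handled uncertainty with certainty factors rather than simple set matching.

```python
# Toy sketch of rule-based (symbolic) diagnostic inference.
# The rules are invented for illustration; 1970s systems such as MYCIN
# used hundreds of expert-authored rules with certainty factors.

rules = [
    # (findings that must all be present, conclusion to draw)
    ({"fever", "stiff_neck", "headache"}, "suspect meningitis"),
    ({"fever", "productive_cough"}, "suspect pneumonia"),
    ({"fatigue", "weight_loss", "night_sweats"}, "suspect tuberculosis"),
]

def infer(observations):
    """Fire every rule whose conditions are all present in the observations."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= observations]

print(infer({"fever", "productive_cough", "fatigue"}))
# -> ['suspect pneumonia']
```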
Big data arrives on the scene
This was when the idea of Bayesian networks was introduced. This “essentially lifted the simplicity of the Naïve Bayes model, and said, well, as long as not everything is fully dependent on everything else, we can still build practical models that assume less independence among different variables, or less conditional independence, but that are still somewhat computationally tractable.” The key to the further development of the network models was, of course, data. “Later in that decade, people started to say, oh, well, if we have large data sets, we can learn both the structure and the parameters of these Bayesian network models that represent the truth about what we identify in these large data sets. By the mid-1990s, big data started being possible.”
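As a rough illustration of the idea, the sketch below encodes a three-node chain, Flu -> Fever -> Chills, in which chills depend on flu only through fever; that partial conditional independence is exactly what the graph structure of a Bayesian network expresses, and it keeps inference by enumeration cheap. The structure and numbers are invented, not taken from the talk.

```python
# Minimal Bayesian-network sketch with the chain Flu -> Fever -> Chills.
# Chills depend on Flu only through Fever: the graph encodes partial
# conditional independence. All probabilities are invented for illustration.

p_flu = 0.05
p_fever = {True: 0.90, False: 0.10}    # P(fever | flu)
p_chills = {True: 0.70, False: 0.05}   # P(chills | fever)

def joint(flu, fever, chills):
    """Joint probability factorised along the graph:
    P(flu) * P(fever | flu) * P(chills | fever)."""
    p = p_flu if flu else 1 - p_flu
    p *= p_fever[flu] if fever else 1 - p_fever[flu]
    p *= p_chills[fever] if chills else 1 - p_chills[fever]
    return p

# P(flu | chills observed), summing out the unobserved fever node
num = sum(joint(True, fever, True) for fever in (True, False))
den = sum(joint(flu, fever, True) for flu in (True, False)
                                  for fever in (True, False))
print(f"P(flu | chills) = {num / den:.3f}")
```

Learning both the arrows and the numbers in such a graph from data, rather than eliciting them from experts, is what the arrival of large datasets later made feasible.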
Out of that came some interesting observations. “One said if you have enough data, even very simple methods work quite well. Even methods that you would think might not because they make too many assumptions that are not a good fit to the world.”
New networks and old dreams
At the same time, the growth of computing power allowed deep neural network models to be trained. These algorithms, enabled by enormous collections of data and computational power, have shown their greatest successes in image interpretation, including chest x-rays, retinopathy and dermatology, though they also have growing capabilities in prediction and in making “sense of narratives.”
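For a concrete, if hypothetical, picture of that approach, the sketch below runs one training step of a tiny convolutional classifier on synthetic x-ray-sized images (normal vs. abnormal), using PyTorch. It is not a model from the talk; production systems are vastly larger and trained on enormous labelled datasets.

```python
# Minimal sketch of a convolutional image classifier (PyTorch).
# The architecture and the random stand-in data are illustrative only.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 2),        # two classes: normal / abnormal
)

images = torch.randn(4, 1, 64, 64)     # stand-in for a batch of chest x-rays
labels = torch.randint(0, 2, (4,))     # stand-in labels

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimiser.step()
print(f"one training step done, loss = {loss.item():.3f}")
```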
Interestingly, the development of neural networks meant that Schwartz’s old dreams resurfaced – the idea of the computer as a virtual medical assistant that leaves the humans with time to dedicate to the human aspects of medicine. Time for empathy-based care. But, again, we are not quite there yet. According to Szolovits, actual use of ML-based decision support tools is still rare, though some image interpretation models are now commercially available and approved for use.
Szolovits then presented examples of work from his own research group, including the joint modelling of chest radiographs and radiology reports for pulmonary oedema assessment, and the search for biomarkers for NASH (non-alcoholic steatohepatitis).
A new prediction?
Questions from the audience were wide-ranging and are mainly beyond the scope of this report. In the concluding part, Szolovits was rather put on the spot by being asked for his “updated guess for Schwartz’s prediction” – i.e., the scenario of AI completely transforming healthcare by the year 2000. But, perhaps wisely, he hedged his bets. “I remember in 1974, when I started working in this area, I predicted that we would have comprehensive electronic medical records by 1983. And I was only wrong by 30 years… so I don’t trust my own intuition.” Other experts in the field, he pointed out, are much more optimistic. And citing the significant capabilities of models that analyse mammograms and retinal fundus images, Szolovits did concede there was a lot of worth in these applications, particularly in the field of imaging. “And the Food and Drug Administration in the US has now even approved about a dozen such tools for use in clinical medicine,” he added. “In other areas, not so much. In areas that involve treatment decisions, or that involve diagnoses that are based on factual data, or ideas like summarising clinical records in order to get a good summary… those things are still very much experimental, and probably not good enough that you would want to rely on them clinically. Now, how quickly will those things come to pass?” Unlike Schwartz, Szolovits did not give us a set date on which we could give him a call and tease him about his wild predictions.
Read the original article here.