A new study from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) proposes a machine learning system that examines X-rays to diagnose conditions such as collapsed lung and enlarged heart. That alone is not particularly new – computer vision in healthcare is an established field – but the CSAIL system can also defer to human experts, depending on factors such as a person’s skill and level of experience.
Despite its promise, AI faces ethical and practical challenges in medicine. Google recently released a white paper finding that an eye disease prediction system proved impractical in the real world, partly due to technological and clinical failures. And STAT reports that unproven AI algorithms are being used to predict the decline of COVID-19 patients.
The CSAIL system aims to address this with a “classifier” that predicts a specific subset of tasks and a “rejector” that decides whether a given task should be handled by the classifier or by an expert. The researchers behind the system claim the classifier is quite accurate, performing 8% better on cardiomegaly (enlarged heart) than experts alone. The real advantage, however, is likely its adaptability – the system lets users optimize for whatever objective they choose, whether that is prediction accuracy or the cost of an expert’s time and effort.
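To make the classifier/rejector split concrete, here is a minimal sketch of the deferral idea. This is not the paper’s method – the threshold rule, the `expert_cost` parameter, and the simulated confidences are all illustrative assumptions – but it shows how a rejector could trade off model confidence against the cost of an expert’s time.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejector(confidence, expert_cost, base_threshold=0.9):
    """Illustrative rejector: defer to the expert when the classifier's
    confidence falls below a threshold. A higher expert cost lowers the
    threshold, so fewer cases are sent to a busy or expensive expert.
    (Hypothetical rule, not the one from the CSAIL paper.)"""
    threshold = base_threshold - expert_cost
    return confidence < threshold  # True means: defer this case

# Simulated classifier confidences for a batch of X-ray predictions.
confidences = rng.uniform(0.5, 1.0, size=1000)

# Deferral rate when expert time is cheap vs. when it is costly.
cheap_expert = rejector(confidences, expert_cost=0.0).mean()
busy_expert = rejector(confidences, expert_cost=0.2).mean()
print(f"deferral rate, cheap expert: {cheap_expert:.2f}")
print(f"deferral rate, busy expert:  {busy_expert:.2f}")
```

Raising `expert_cost` shrinks the deferral region, which is the adaptability the article describes: the same trained classifier can be tuned toward accuracy or toward saving expert effort.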
Efficiency is another advantage of the approach. In experiments on medical diagnosis tasks and on text and image classification, the system not only outperformed baselines but did so with less computing effort and far fewer training samples.
The researchers have not yet tested the system with human experts. Instead, they developed a set of “synthetic experts” so they could vary parameters such as experience and availability. The current iteration requires an onboarding period to learn the strengths and weaknesses of particular people, but the team plans to develop systems that learn from biased expert data and that work with (and defer to) several experts at once.
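A synthetic expert can be sketched very simply. The parameterization below (a single tunable accuracy) is a guess at what such a stand-in might look like – the article does not specify how the paper’s synthetic experts are built – but it illustrates how one could simulate experts of varying skill without involving humans.

```python
import random

class SyntheticExpert:
    """A stand-in for a human expert with a tunable accuracy.
    Hypothetical parameterization for illustration only."""

    def __init__(self, accuracy, seed=None):
        self.accuracy = accuracy
        self.rng = random.Random(seed)

    def predict(self, true_label, label_space):
        # With probability `accuracy`, answer correctly;
        # otherwise pick a wrong label uniformly at random.
        if self.rng.random() < self.accuracy:
            return true_label
        wrong = [label for label in label_space if label != true_label]
        return self.rng.choice(wrong)

# Simulate an 85%-accurate expert on repeated presentations of one case.
labels = ["normal", "cardiomegaly", "pneumothorax"]
expert = SyntheticExpert(accuracy=0.85, seed=42)
preds = [expert.predict("cardiomegaly", labels) for _ in range(1000)]
hit_rate = preds.count("cardiomegaly") / len(preds)
print(f"observed accuracy: {hit_rate:.2f}")
```

Varying `accuracy` (or adding an availability parameter) lets a rejector be trained and evaluated against experts of different skill levels, which is the role the synthetic experts play here.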
“There are many obstacles that understandably prohibit full automation in clinical environments, including trust and accountability issues,” said David Sontag, lead author and Von Helmholtz Associate Professor of Medical Engineering in MIT’s Department of Electrical Engineering and Computer Science, in a statement. “We hope that our methodology will inspire machine learning practitioners to become more creative by integrating human expertise into their algorithms in real time.”