
Google describes an AI that classifies chest X-rays with human accuracy



Analyzing chest X-ray images with machine learning algorithms is easier said than done. The clinical labels needed to train those algorithms are typically obtained through natural language processing or human annotation, both of which introduce inconsistencies and errors. Beyond that, building datasets that represent a sufficiently diverse range of cases, and producing clinically meaningful, consistent labels for the images, is a challenge in itself.

Aiming to move the goalposts in X-ray image classification, researchers at Google AI set out to identify four findings on human chest radiographs: pneumothorax (collapsed lung), nodules and masses, fractures, and airspace opacities (the filling of parts of the lung with material). In a paper published in the journal Nature, the team claims that its family of models, evaluated on thousands of images from high-quality labeled datasets, demonstrated radiologist-level performance in an independent review by human experts.

The study comes months after scientists from Google AI and Northwestern Medicine created a model capable of detecting lung cancer from screening tests better than human radiologists with an average of eight years of experience, and about a year after New York University used Google's Inception v3 machine learning model to detect lung cancer. It also builds on the tech giant's progress in diagnosing diabetic retinopathy from eye scans, as well as on work by Alphabet subsidiary DeepMind, whose AI can recommend the correct line of treatment for 50 eye diseases with 94% accuracy.

This latest work drew on more than 600,000 images from two de-identified datasets. The first, developed in partnership with Apollo Hospitals, consists of X-rays collected over several years at multiple locations. The second is the publicly available ChestX-ray14 image set released by the National Institutes of Health, which has served as a resource for AI efforts in the past but suffers from accuracy shortcomings.

The researchers developed a text-based system that extracted labels from the radiology reports associated with each X-ray, providing labels for over 560,000 images from the Apollo Hospitals dataset. To reduce the errors introduced by text-based label extraction and to provide the relevant labels for a number of ChestX-ray14 images, they recruited radiologists to review approximately 37,000 images across the two corpora.
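Google has not published the extraction pipeline itself, but a minimal sketch of what rule-based label extraction from report text can look like might resemble the following; the keyword patterns, negation check, and function names here are illustrative assumptions, not Google's code.

```python
import re

# Hypothetical keyword rules for the four findings; a production system
# would use a far richer NLP pipeline than simple pattern matching.
FINDING_PATTERNS = {
    "pneumothorax": re.compile(r"\bpneumothorax\b", re.I),
    "nodule_or_mass": re.compile(r"\b(nodule|mass)\b", re.I),
    "fracture": re.compile(r"\bfracture\b", re.I),
    "airspace_opacity": re.compile(r"\b(opacity|consolidation)\b", re.I),
}
NEGATION = re.compile(r"\bno\b|\bwithout\b", re.I)

def extract_labels(report_text: str) -> dict:
    """Assign a weak positive/negative label per finding from report text."""
    labels = {}
    for finding, pattern in FINDING_PATTERNS.items():
        match = pattern.search(report_text)
        if not match:
            labels[finding] = 0
            continue
        # Crude negation check: look at the words just before the mention.
        window = report_text[max(0, match.start() - 25):match.start()]
        labels[finding] = 0 if NEGATION.search(window) else 1
    return labels

print(extract_labels("No evidence of pneumothorax. Small nodule in right upper lobe."))
# -> {'pneumothorax': 0, 'nodule_or_mass': 1, 'fracture': 0, 'airspace_opacity': 0}
```

The fragility of exactly this kind of keyword-and-negation heuristic is what motivates the radiologist review described above.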

The next step was to create high-quality reference labels for model evaluation. Here they used a panel-based approach, in which three radiologists reviewed all final tuning- and test-set images and resolved disagreements through online discussion. According to the study's co-authors, this allowed difficult findings that had initially been spotted by only a single radiologist to be identified and documented.
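The paper describes the full protocol; as a minimal sketch, assuming any disagreement among the three readers is escalated to discussion, the adjudication logic could look like this (the function and label names are illustrative):

```python
def adjudicate(votes: list[str]) -> str:
    """Accept a label only when all three panel radiologists agree;
    any disagreement is escalated to online discussion, per the study.
    'votes' holds one read per radiologist, e.g. ['present', 'present', 'absent'].
    """
    if len(set(votes)) == 1:
        return votes[0]        # unanimous: label becomes the reference standard
    return "needs discussion"  # disagreement: resolve by panel discussion

print(adjudicate(["present", "present", "present"]))  # -> 'present'
print(adjudicate(["present", "absent", "present"]))   # -> 'needs discussion'
```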

Google notes that although the models achieved expert-level accuracy overall, performance varied across the two corpora. For example, the radiologists' sensitivity for detecting pneumothorax was roughly 79% on the ChestX-ray14 images, but only 52% for the same radiologists on the other dataset.
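For reference, sensitivity is the fraction of truly positive cases a reader catches, i.e. TP / (TP + FN). A quick sketch with illustrative counts (not the study's raw numbers):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall) = TP / (TP + FN)."""
    return true_positives / (true_positives + false_negatives)

# Illustrative counts only: catching 79 of 100 pneumothorax cases on one
# dataset versus 52 of 100 on the other reproduces the reported gap.
print(sensitivity(79, 21))  # 0.79
print(sensitivity(52, 48))  # 0.52
```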

"The differences in performance between datasets … underscore the need for standardized scoring sets of images with accurate reference standards to enable comparison across studies," wrote Google researcher Dr. David Steiner and Google Health's technical director, Shravya Shetty, in a blog post that both contributed to the post. "[Models] frequently identified findings that were often overlooked by radiologists and vice versa. Strategies that use the unique & # 39; skills & # 39; Combining both the [AI] system and human experts are probably the most promising methods to exploit the potential of AI applications for the interpretation of medical images.

In hopes of laying the groundwork for better methods, the research team has open-sourced its corpus of adjudicated labels for the ChestX-ray14 dataset. It contains 2,412 training and validation images and 1,962 test images, for a total of 4,374 images.
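A minimal sketch of how one might load such a released label file and check the split sizes follows; the file name and column names are assumptions for illustration, not the actual release format.

```python
import pandas as pd

# Hypothetical file and column names; consult the actual release for its schema.
labels = pd.read_csv("chestxray14_adjudicated_labels.csv")

# Expect 2,412 train/validation images and 1,962 test images (4,374 total).
print(labels["split"].value_counts())
print(len(labels))  # 4374
```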

"We hope these labels facilitate future machine learning efforts and allow a better comparison of apples to apples between machine-learning models for the breast, enabling X-ray interpretation," Steiner and Shetty wrote.

