[3IA Young Women Researchers] AI with Hind Dadoun

Published on March 8, 2022. Updated on March 8, 2022.



What is your research topic?

My thesis is part of a partnership with NHance, the AP-HP health data warehouse (EDS), and the national health data platform, also called the Health Data Hub (HDH). The thesis has several objectives:

  • Build a large database of abdominal ultrasound images that can be easily exploited in a machine learning approach.
  • Evaluate the relevance of different learning methods (supervised and/or unsupervised) for computer-aided diagnosis in abdominal ultrasound imaging.


Could you briefly explain it?

Ultrasound imaging is a method of choice for the medical profession: it can be used in real time, does not irradiate the patient, and is used not only to see a baby for the first time but also at the bedside in over 20 different medical and surgical specialties. Ultrasound devices are becoming smaller and cheaper, sometimes connecting directly to a smartphone: a kind of new stethoscope! However, interpreting an ultrasound image remains complex and error-prone. Our project aims to help caregivers make the best possible interpretation. To do so, we are developing artificial intelligence algorithms to identify the abdominal organs visible in an ultrasound image and, where relevant, the main diseases usually detectable in current ultrasound practice.

 

Can you illustrate with an example?  

In this example, the caregiver holds a portable ultrasound probe to see inside the patient's body. The goal is to perform medical interpretation of the abdominal ultrasound images in real time and to provide the user with clear explanations of the AI-based diagnoses.

Can you tell us about an important result?

We developed software, made freely available, that standardizes ultrasound image content across different databases. This work was published at the IEEE International Symposium on Biomedical Imaging. We also worked with the EDS and HDH teams to integrate this software into the ultrasound image processing workflow within the EDS and thus accelerate the development of the project.
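The published method combines Bayesian and deep learning techniques; as a much simpler illustration of what "standardizing image content" can mean in practice, the sketch below crops a B-mode frame to its ultrasound fan using a naive connected-component heuristic. The function name, threshold, and approach are hypothetical and are not the published algorithm.

```python
import numpy as np
from scipy import ndimage

def crop_to_fan(image: np.ndarray, threshold: int = 10) -> np.ndarray:
    """Crop a grayscale B-mode frame to the bounding box of the ultrasound fan.

    Naive heuristic (illustrative only): the fan is assumed to be the largest
    bright connected component, while black borders and small vendor overlays
    are discarded.
    """
    mask = image > threshold                      # keep non-background pixels
    labeled, n = ndimage.label(mask)              # connected components
    if n == 0:
        return image
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    fan = labeled == (np.argmax(sizes) + 1)       # largest component ~ the fan
    rows, cols = np.any(fan, axis=1), np.any(fan, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]
```

Cropping every frame to its fan in this way gives images from different machines and vendors a comparable content layout before they enter a learning pipeline.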

We then developed a framework for the detection, localization, and characterization of focal liver lesions (FLLs) in B-mode ultrasound images. This work was accepted for publication in Radiology: Artificial Intelligence. Accurate detection and assessment of FLLs is a critical public health issue due to the increasing incidence of primary liver malignancies. Non-contrast ultrasound is one of the most commonly used modalities to screen for FLLs in high-risk patients, and a computer-assisted tool could help detect more malignant lesions at an early stage, improve differential diagnosis, and enable efficient and cost-effective treatment [3].

The performance of our model was analyzed on a test set of 155 images from 48 livers and compared with two expert caregivers. Our model had a positive predictive value of 0.94 (95% CI: 0.90-0.99) and a sensitivity of 0.99 (95% CI: 0.97-1.0) for the detection of FLLs. It correctly localized 82% of lesions and, among lesions localized by all raters, had a positive predictive value of 0.89 (95% CI: 0.81-0.96) and a sensitivity of 0.84 (95% CI: 0.75-0.92) for characterizing FLLs as benign or malignant. In conclusion, its performance met or exceeded that of the experts.
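For readers less familiar with these metrics, positive predictive value and sensitivity are simple ratios of true positives, false positives, and false negatives. The short Python sketch below shows the arithmetic; the counts in the example are illustrative and are not the study's actual confusion matrix.

```python
def ppv_and_sensitivity(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Positive predictive value (precision) and sensitivity (recall)."""
    ppv = tp / (tp + fp)          # fraction of predicted lesions that are real
    sensitivity = tp / (tp + fn)  # fraction of real lesions that are detected
    return ppv, sensitivity

# Illustrative counts only: 99 detected lesions, 6 false alarms, 1 missed lesion
print(ppv_and_sensitivity(tp=99, fp=6, fn=1))  # -> (0.943, 0.99)
```

The confidence intervals reported above quantify the uncertainty in these ratios given the finite test set; they are not computed by this sketch.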

 

What are the challenges related to this topic?

Ultrasound imaging is challenging for several reasons:

First, it is a complex modality due to its poor resolution, and it becomes even more difficult to interpret when patient-related factors come into play, such as fatty tissue that interferes with the reading. 
Second, the associated data are unstructured and heterogeneous: they are acquired with different machines, by different people, and without a standardized examination protocol. 
Third, like any other medical imaging modality, the data are subject to data protection rules, with access restricted to secure servers, which makes building a large database cumbersome. 
Finally, annotating ultrasound images is difficult because an expert is needed, and even for an expert an ultrasound image alone, without context, is hard to interpret. 

In summary, the main challenge of this topic is developing machine learning tools for the automatic interpretation of abdominal ultrasound images in the absence of curated, annotated, and openly available abdominal US databases. We have developed a framework for standardizing image content across different databases and analyzed the performance of machine learning algorithms on a specific task when annotations are available for a small dataset. Our future work will focus on weakly supervised and unsupervised algorithms (i.e., with few to no annotations) trained on a very large dataset of images paired with the corresponding exam reports, with the hope that the report text can be processed automatically to serve as ground truth for the image analysis algorithm.
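As a rough illustration of the report-as-ground-truth idea (and not the project's actual pipeline), one simple weak-labelling strategy is to mine the free-text exam report for keywords and attach the resulting noisy label to the paired images. The keyword lists, field names, and labels below are hypothetical; a real system would rely on clinical NLP and the appropriate radiology vocabulary.

```python
import re

# Hypothetical keyword lists for illustration only.
MALIGNANT_TERMS = ["hepatocellular carcinoma", "hcc", "metastasis", "malignant"]
BENIGN_TERMS = ["hemangioma", "cyst", "focal nodular hyperplasia", "benign"]

def weak_label_from_report(report_text: str) -> str:
    """Derive a noisy exam-level label from the free-text report."""
    text = report_text.lower()
    if any(re.search(rf"\b{re.escape(t)}\b", text) for t in MALIGNANT_TERMS):
        return "malignant"
    if any(re.search(rf"\b{re.escape(t)}\b", text) for t in BENIGN_TERMS):
        return "benign"
    return "no_focal_lesion"  # noisy default when nothing is mentioned

# Each exam pairs a report with its ultrasound images; the weak label is
# propagated to every image and used as noisy supervision.
exam = {
    "report": "Simple hepatic cyst of segment IV, no other focal lesion.",
    "images": ["us_001.png", "us_002.png"],
}
label = weak_label_from_report(exam["report"])
training_pairs = [(img, label) for img in exam["images"]]
print(training_pairs)
```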

 

What are the real-world impacts, issues?

According to the World Health Organization, two-thirds of the world's population do not have access to medical imaging services. Ultrasound, combined with X-rays, could cover 90% of these needs. Ultrasound is a method of choice, whether in an emergency, in consultation for patient follow-up, or during a public health screening examination. It is the only imaging modality that is non-invasive, has no side effects (such as radiation-induced cancers), and allows for a cost-effective diagnosis in real time. Moreover, as computers have evolved, ultrasound machines have become considerably smaller and cheaper. Currently, hospitals are equipped with bulky machines costing several hundred thousand euros, which will gradually be replaced by probes connected to smartphones costing only a hundred euros. In short, this imaging modality has the potential to become the new stethoscope for doctors around the world.

Despite the appearance of these ultra-portable devices, their use remains very limited because few operators are trained in acquiring and interpreting ultrasound images. In one study [4], health care providers in developing countries identified lack of training as the main barrier to the use of ultrasound. In parallel with the deployment of ultrasound across all medical fields, artificial intelligence has developed considerably in recent years, especially in visual recognition. A visual recognition tool for ultrasound images would therefore be highly relevant, provided that the developed algorithms are interpretable, fair, and robust.

 


References

1. Evaluation and Management of Liver Masses. 2020. doi:10.1007/978-3-030-46699-2
2. Alizadeh A, Mansour-Ghanaei F, Bagheri FB, Froutan H, Froutan Y, Joukar F, et al. Imaging Accuracy in Diagnosis of Different Focal Liver Lesions: A Retrospective Study in North of Iran. J Gastrointest Cancer. 2020. doi:10.1007/s12029-020-00510-z
3. Cadier B, Bulsei J, Nahon P, Seror O, Laurent A, Rosa I, et al. Early detection and curative treatment of hepatocellular carcinoma: A cost-effectiveness analysis in France and in the United States. Hepatology. 2017;65: 1237–1248.
4. Shah S, Bellows BA, Adedipe AA, Totten JE, Backlund BH, Sajed D. Perceived barriers in the use of ultrasound in developing countries. Crit Ultrasound J. 2015;7: 28.

Publications

  • Dadoun, Hind, et al. "Combining Bayesian and Deep Learning Methods for the Delineation of the Fan in Ultrasound Images." 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 2021.
  • Dadoun, Hind, et al. "Detection, Localization, and Characterization of Focal Liver Lesions in Abdominal Ultrasound with Deep Learning.” In press, Radiology: Artificial Intelligence.