Improving the Generalizability of Deep Learning Segmentation of Brain Metastases on MRI

Deep learning networks for brain metastases segmentation often generalize poorly because they are trained and tested on narrow, single-center data. By using an input-level dropout layer and multi-center training data, we aimed to improve the generalizability of our segmentation network.

The need for automated segmentation models

Attributable in large part to advances in the treatment of primary tumors, the number of patients with metastatic cancer has grown explosively over the last decades. In a survey of more than 26,000 patients, 12% of all patients with metastatic disease were found to have brain metastases at diagnosis (1). The majority of patients present with three or fewer metastases to the brain; however, studies have reported frequencies of multiple (>3) brain metastases above 40% (2, 3). Magnetic resonance imaging (MRI) is the key imaging modality in the diagnosis of brain metastases, as well as in longitudinal follow-up to assess treatment response. The diagnostic methods for assessing treatment response follow the criteria formulated by the Response Assessment in Neuro-Oncology (RANO) working group and are based on measuring the size of the enhancing metastases on contrast-enhanced T1-weighted MRI. Hence, delineating initial metastatic lesion size and changes related to disease progression or response is a key neuroradiology task (4). Traditionally, the metrics used for assessing treatment response in brain metastases are based on unidimensional measurements, and although the value of volumetric measurements has been increasingly discussed, RANO continues to rely on linear measurements. One concern raised against switching to volume assessment is that volumetric analysis, as performed manually by radiologists, is a tedious and time-consuming task that adds cost and complexity, and is not available at all centers. If an automated, accurate pipeline capable of detecting and segmenting brain metastases could be developed, it might enable physicians to incorporate volumetric measurements into routine practice.
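To illustrate what the volumetric alternative involves computationally: once a segmentation mask exists, lesion volume follows directly from the voxel count and the voxel spacing. A minimal sketch (the mask and spacing values below are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical binary segmentation mask (1 = metastasis voxel) on a
# 3D grid, with voxel spacing in millimetres (row, column, slice).
mask = np.zeros((64, 64, 32), dtype=np.uint8)
mask[30:34, 30:34, 14:18] = 1          # a small 4x4x4-voxel lesion
voxel_spacing_mm = (1.0, 1.0, 2.0)     # e.g. 1x1 mm in-plane, 2 mm slices

# Lesion volume = number of lesion voxels x volume of a single voxel.
voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
volume_mm3 = mask.sum() * voxel_volume_mm3
print(volume_mm3)  # prints 128.0 (64 voxels x 2 mm^3 each)
```

The computation itself is trivial; the bottleneck RANO points to is producing the mask, which is exactly what an automated segmentation model would supply.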

Advances in artificial intelligence (AI) are suggesting the possibility of new paradigms in healthcare and are particularly well suited for adoption by radiologists (5–7). One of the key advantages of AI-based radiology is the automation and standardization of tedious and time-consuming tasks. In recent years, several deep learning approaches have been developed successfully for automatic segmentation of brain metastases (8, 9).

On the generalizability of deep learning algorithms

In an initial study, our research group trained a fully convolutional neural network (CNN) for automatic detection and segmentation of brain metastases using multisequence MRI (10). While our approach showed high accuracy and performance, we concluded that its robustness and clinical utility needed to be challenged to fully understand the strengths and limitations of our method. Indeed, many AI-based segmentation studies are limited in terms of generalizability because the algorithms are trained and tested on single-center patient cohorts. In some studies, the training and test sets are even limited to a single magnetic field strength, a single vendor, and/or a single scanner. However, a key step towards understanding the generalizability and clinical value of any deep neural network is training and testing on real-world multi-center data.

Moreover, as we learned from our experiments, a limitation of these AI-based segmentation networks is that they are trained on a fixed set of MRI contrasts, which restricts their use to sites acquiring the same sequences. Deep neural networks intended for future clinical use should be able to handle missing model inputs, as variations in MRI protocols across the world are common.

Our solution: Input-level dropout

In our paper entitled ‘Handling Missing MRI Sequences in Deep Learning Segmentation of Brain Metastases: A Multi-Center Study’, published in npj Digital Medicine, we aimed to improve the robustness and generalizability of our segmentation network by utilizing an input-level dropout (ILD) model. A diagram of the ILD pipeline, along with four example cases and their resulting AI-based segmentations, is shown in Figure 1. In this neural network, a dropout layer at the input level is trained on the full set of four distinct MRI sequences, as well as every possible subset of those sequences. As a result, the ILD model can produce segmentations even when individual MRI sequences are missing, which allowed us to generalize our deep learning segmentation model for use at multiple imaging sites. Our model showed high performance and accuracy, equivalent to that of expert neuroradiologists, in a separate cohort of patients scanned with different scanners and imaging protocols.

Figure 1: Diagram showing the neural network pipeline used in this study. The left column shows MRI data from four representative example cases. The MRI data are fed into the input-level dropout (ILD) model shown in the middle column. The right column shows the predicted segmentations as likelihood maps overlaid on the input MRI data. The voxel-wise likelihood of lying within a metastatic lesion is indicated by the color bar.
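The input-level dropout idea can be sketched in a few lines: during training, each example's stack of MRI sequences is randomly reduced to a nonempty subset, with dropped sequences replaced by zeros, so the network learns to segment from any combination of inputs. Below is a minimal, framework-agnostic NumPy sketch of that channel-dropping step, not the actual training code from the paper; the drop probability and the four-channel shape are illustrative assumptions:

```python
import numpy as np

def input_level_dropout(x, rng, p_drop=0.5):
    """Randomly zero whole input channels (MRI sequences).

    x: array of shape (channels, H, W) -- one channel per MRI sequence.
    At least one channel is always kept, so over training the model
    sees every possible nonempty subset of sequences.
    """
    n_channels = x.shape[0]
    keep = rng.random(n_channels) >= p_drop   # per-sequence keep decision
    if not keep.any():                        # never drop all sequences
        keep[rng.integers(n_channels)] = True
    out = x.copy()
    out[~keep] = 0.0                          # dropped sequences become zeros
    return out

# Example: a four-sequence input, matching the four MRI sequences
# used by the model described above.
rng = np.random.default_rng(0)
x = np.ones((4, 8, 8), dtype=np.float32)
y = input_level_dropout(x, rng)
```

At inference time, a site missing a sequence simply supplies zeros for that channel, matching what the network saw during training.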

Potential impact of the proposed method

We hope that our AI-based segmentation model can have a direct impact on the diagnosis and treatment of patients with brain metastases. The ILD model enables objective and quantitative analysis in real time, even when MRI data are missing. Automation will benefit patients in that physicians can deliver higher-quality, more reproducible assessments with lower interobserver variability. In current clinical settings, only semi-quantitative measures, such as maximal linear dimension, can be assessed, because more informative volumetric assessments are too time-consuming to perform properly.

Segmenting metastatic lesions is highly time-consuming, so we hope that the ILD model can lighten the workload physicians face in everyday clinical practice. Many hospitals are seeing large increases in imaging studies without an equivalent increase in the radiological workforce. Our AI tool can therefore be used to improve clinical efficiency, allowing physicians to keep up with the increased workload and provide better patient care. Greater efficiency in identifying and segmenting brain metastases may also benefit other patients, as radiologists could devote more time to clinically challenging cases and to discussions with other caregivers instead of outlining lesions.

A key step going forward is to bring our ILD pipeline into everyday hospital practice. Our results so far show high segmentation performance, indicating that the proposed AI model could soon be implemented in the clinical workflow. However, the results are limited by the small sample size and the homogeneity of the test cohort, and prospective studies are needed to further verify the generalizability of our method to other scanners, sequences, and patient cohorts.

To learn more about our ILD model, please read our open access paper entitled “Handling Missing MRI Sequences in Deep Learning Segmentation of Brain Metastases: A Multi-Center Study”.


  1. Cagney DN, Martin AM, Catalano PJ, et al.: Incidence and prognosis of patients with brain metastases at diagnosis of systemic malignancy: A population-based study. Neuro Oncol 2017; 19(11):1511–1521.
  2. Fabi A, Felici A, Metro G, et al.: Brain metastases from solid tumors: Disease outcome according to type of treatment and therapeutic resources of the treating center. J Exp Clin Cancer Res 2011; 30:10.
  3. Nussbaum ES, Djalilian HR, Cho KH, Hall WA: Brain metastases: Histology, multiplicity, surgery, and survival. Cancer 1996; 78(8):1781–1788.
  4. Lin NU, Lee EQ, Aoyama H, et al.: Response assessment criteria for brain metastases: proposal from the RANO group. Lancet Oncol 2015; 16:e270–e278.
  5. Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP: Deep learning in neuroradiology. Am J Neuroradiol 2018; 39(10):1776–1784.
  6. Ting DSW, Liu Y, Burlina P, Xu X, Bressler NM, Wong TY: AI for medical imaging goes deep. Nat Med 2018; 24:539–540.
  7. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL: Artificial intelligence in radiology. Nat Rev Cancer 2018; 18:500–510.
  8. Liu Y, Stojadinovic S, Hrycushko B, et al.: A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery. PLoS One 2017; 12:e0185844.
  9. Charron O, Lallement A, Jarnet D, Noblet V, Clavier J-B, Meyer P: Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput Biol Med 2018; 95:43–54.
  10. Grøvik E, Yi D, Iv M, Tong E, Rubin D, Zaharchuk G: Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. J Magn Reson Imaging 2020; 51:175–182.
