Recent developments such as large language models (e.g., ChatGPT) and computer vision (e.g., self-driving cars) have brought much attention to artificial intelligence's potential for completing complex tasks with near human-level accuracy. Numerous other applications for artificial intelligence have already been developed to assist with research across many fields of study, including oncology.
Developing these tools is a complex task, requiring expertise in both the machine-learning principles that underlie the models and the domain in which the model is to operate. Because many machine learning models that drive artificial intelligence require training on extensive data, most applications rely on deep learning from pre-existing data sets. This approach typically uses extensive collections of low-specificity data to allow the model to learn the features contained in the data. Some artificial intelligence applications have been developed based on the digital slide images and associated patient data in The Cancer Genome Atlas to recognize cancer-associated features, such as microsatellite instability [1].
While this approach is efficient for clinically focused research, most basic and translational research relies on model systems, which can be controlled experimentally. In cancer research, this usually involves cell culture or animal models. Unfortunately, there is no analog to The Cancer Genome Atlas for animal models. This presents a significant barrier to creating artificial intelligence applications to analyze animal models of cancer, although some assays (e.g., immunohistochemistry) can be used to generate labeled training data more easily [2]. However, an initial investment in artificial intelligence-driven tools could significantly improve the speed and reproducibility of preclinical analyses for research groups worldwide.
In our efforts to understand the development and progression of lung adenocarcinomas, the most common form of lung cancer, we have developed many mouse models with various engineered mutations. In contrast to human lung cancers, the engineered mutations lead to the development of multiple primary tumors throughout the lungs of the mice. This reduces the number of animals required for our studies as each primary tumor develops and evolves largely independently; however, analyzing each tumor throughout the lungs can be quite daunting and tedious due to the large number of tumors that arise in some of our models. While some existing commercial and open-source software can distinguish between tumor and normal lung tissue and summarize the physical characteristics of tumors (e.g., area) with minimal effort, histological analysis still requires manual examination of each tumor. Furthermore, methods like tumor grading often require substantial practice to perform accurately and are plagued by intra- and inter-rater variability.
Therefore, we set out to build a tool for Grading of Lung Adenocarcinomas with Simultaneous Segmentation by Artificial Intelligence (GLASS-AI). We aimed to produce an easy-to-use program that could automatically identify and grade lung adenocarcinoma tumors across an entire digital scan of a microscopy slide. This would not only free up the valuable time spent manually analyzing these specimens but could also be used across multiple institutions to achieve a more standardized, reproducible analysis.
We included multiple mouse models in our training data set to improve the generalizability of the GLASS-AI machine-learning model. To minimize the effects of inter-rater variability, we recruited three expert human raters to segment and grade individual tumors using the scale developed by the Jacks laboratory [3]. We then tested GLASS-AI on a set of 10 slides that were not used for training, annotated by a fourth human rater who was not involved in generating the training annotations. GLASS-AI found 99.8% of the 1,958 manually annotated individual tumors. GLASS-AI’s grade prediction agreed with the manual annotation in 86% of the tumors, which was almost identical to the accuracy we observed in the final testing analysis of the training data. In addition, GLASS-AI completed the analysis in an average of 7.5 minutes per slide, while the human rater averaged approximately 4.5 hours per slide.
Employing artificial intelligence for these kinds of analyses not only saves researchers a considerable amount of time, but the design of the machine learning models can also provide a higher resolution analysis than could ever feasibly be performed by a human. With GLASS-AI, each pixel in an image is classified as ‘normal airway’, ‘normal alveoli’, or one of the grades of lung adenocarcinoma. Contiguous tumor regions are then segmented and used for overall tumor grade assignment. Human raters perform this process in roughly the reverse order: first segmenting tumors and then assigning the overall tumor grade based on the highest grade they observe that comprises at least 10–20% of the tumor’s area. This manual approach masks the heterogeneity observed in tumors, a significant contributor to cancer progression and therapeutic resistance. Lung adenocarcinomas are especially heterogeneous tumors, and GLASS-AI can fully reveal this underlying heterogeneity through its bottom-up analysis approach.
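The bottom-up pipeline described above can be sketched in a few lines: given a per-pixel class map, segment contiguous tumor regions and assign each region the highest grade covering at least 10% of its area. This is only an illustrative sketch, assuming a simple integer class encoding and 4-connectivity; the function names, class labels, and thresholds here are hypothetical and not taken from GLASS-AI's actual implementation.

```python
import numpy as np
from scipy import ndimage

# Illustrative class encoding (GLASS-AI's actual classes may differ)
NORMAL_AIRWAY, NORMAL_ALVEOLI, GRADE1, GRADE2, GRADE3, GRADE4 = range(6)
TUMOR_GRADES = (GRADE1, GRADE2, GRADE3, GRADE4)

def grade_tumors(class_map: np.ndarray, min_fraction: float = 0.1):
    """Segment contiguous tumor regions from a per-pixel class map and
    assign each tumor the highest grade that covers at least
    `min_fraction` of its area (mirroring the manual 10-20% rule)."""
    # Binary mask of all tumor-classified pixels
    tumor_mask = np.isin(class_map, TUMOR_GRADES)
    # Label contiguous tumor regions (4-connectivity by default)
    labels, n_tumors = ndimage.label(tumor_mask)
    results = []
    for tumor_id in range(1, n_tumors + 1):
        pixels = class_map[labels == tumor_id]
        area = pixels.size
        # Highest grade meeting the area-fraction threshold
        grade = max((g for g in TUMOR_GRADES
                     if np.count_nonzero(pixels == g) / area >= min_fraction),
                    default=pixels.max())
        results.append({"tumor_id": tumor_id, "area_px": area, "grade": grade})
    return results
```

Because grading happens per pixel before segmentation, the per-tumor grade composition (not just the final label) is available, which is what exposes the intratumoral heterogeneity discussed above.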
GLASS-AI's unprecedented resolution of grading also enables a more precise analysis of the changes that occur during tumor progression. For example, in tumors driven by oncogenic activation of the Ras signaling pathway, it has been noted that high-grade tumors often display further dysregulation in the Mapk signaling [4]. By pairing GLASS-AI’s analysis with immunohistochemical staining, we showed that this dysregulation occurs specifically in high-grade regions of tumors, even if that region is only a small fraction of a large, lower-grade tumor. These results highlight the dynamic evolution of tumors where a small population of cancer cells undergoes an advantageous alteration that leads to expansion and subsequent disease progression.
We continue to use GLASS-AI to characterize new mouse models of lung adenocarcinoma. We are also using GLASS-AI in conjunction with spatial multi-omics techniques to explore the molecular drivers of tumor progression in lung adenocarcinoma. GLASS-AI is a valuable tool for other researchers working on lung adenocarcinomas, and we are committed to collaborative, open science. Therefore, we have provided GLASS-AI as open-source software on GitHub, with links to pre-compiled versions for Windows and Mac computers, and made the training data set publicly available on Zenodo.
References
[1] J. N. Kather et al., “Deep learning can predict microsatellite instability directly from histology in gastrointestinal cancer,” Nat. Med., vol. 25, no. 7, pp. 1054–1056, Jul. 2019, doi: 10.1038/s41591-019-0462-y.
[2] K. Lee et al., “Deep Learning of Histopathology Images at the Single Cell Level,” Front. Artif. Intell., vol. 4, pp. 1–14, Sep. 2021, doi: 10.3389/frai.2021.754641.
[3] E. L. Jackson et al., “The differential effects of mutant p53 alleles on advanced murine lung cancer,” Cancer Res., vol. 65, no. 22, pp. 10280–10288, Nov. 2005, doi: 10.1158/0008-5472.CAN-05-2193.
[4] F. A. Karreth, K. K. Frese, G. M. DeNicola, M. Baccarini, and D. A. Tuveson, “C-Raf is required for the initiation of lung cancer by K-Ras G12D,” Cancer Discov., vol. 1, no. 2, pp. 128–136, 2011, doi: 10.1158/2159-8290.CD-10-0044.