Histology, which is essential for disease diagnosis and treatment planning, traditionally requires staining, which adds cost and labor.
To address this, label-free photoacoustic histology (PAH) techniques have been developed to produce images without staining.
However, PAH images are unfamiliar to pathologists, making interpretation difficult and diagnosis less accurate.
In this study, we integrated PAH with cutting-edge deep learning models capable of virtual staining, segmentation, and classification of human tissue images.
Limitations of the conventional histology process
Histology is essential for diagnosing diseases and developing appropriate treatment plans.
Typically, examining excised tissue under a microscope requires staining, which adds labor and cost because of the chemicals involved.
Label-free histology approach
Photoacoustic histology (PAH) technology has been developed to mitigate these issues.
PAH generates images by detecting sound (ultrasound) signals produced by biomolecules when illuminated with light (laser), thus eliminating the need for staining and labeling.
However, PAH images were initially unfamiliar to pathologists, complicating interpretation and diagnosis and resulting in relatively low accuracy.
Deep learning-based framework for label-free photoacoustic histology
We have developed an artificial intelligence (AI) system to analyze the label-free PAH of human liver cancer tissues.
This interconnected deep learning (DL)-based framework performs automated virtual staining, segmentation, and classification on label-free PAH images of human specimens.
Initially, the "virtual staining step" transforms grayscale, label-free images—containing cell nuclei and cytoplasm—into images that mimic stained samples.
This step is designed to produce images similar to actual stained samples while preserving tissue structures, and uses explainable deep learning methods to increase the reliability of the virtual staining results.
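To give a concrete sense of what "virtual staining" means, here is a deliberately minimal sketch: a fixed pixel-wise mapping from grayscale PAH intensity to hematoxylin-and-eosin-like colors (dark, strongly absorbing nuclei toward purple; brighter cytoplasm toward pink). The actual framework learns this mapping with a deep network; the function name and color values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def virtual_stain(gray: np.ndarray) -> np.ndarray:
    """Map a grayscale PAH image (values in [0, 1]) to an H&E-like RGB image.

    Dark pixels (cell nuclei, strong photoacoustic absorbers) are pushed
    toward a hematoxylin purple; brighter cytoplasm pixels toward an
    eosin pink. A learned model replaces this hand-set interpolation.
    """
    hematoxylin = np.array([0.35, 0.20, 0.55])  # purple-ish nuclei color
    eosin = np.array([0.95, 0.70, 0.80])        # pink-ish cytoplasm color
    w = (1.0 - gray)[..., np.newaxis]           # darker pixel -> more hematoxylin
    return w * hematoxylin + (1.0 - w) * eosin

# Toy 2x2 PAH patch: one dark "nucleus" pixel, three brighter pixels
patch = np.array([[0.1, 0.8], [0.7, 0.9]])
rgb = virtual_stain(patch)
print(rgb.shape)  # (2, 2, 3)
```

In the real framework this mapping is produced by a deep generative model trained against actual stained slides, so it preserves tissue structure rather than just recoloring intensities.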
Next, during the "segmentation" phase, the unlabeled image and the virtual staining data are used to segment features of the sample such as cell area, cell count, and inter-cellular distances.
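The segmentation-derived features mentioned above (cell count, per-cell area, inter-cellular distance) can be sketched as simple computations over a labeled mask. This assumes the segmentation model has already assigned an integer label to each cell; the helper below is a hypothetical illustration, not the paper's pipeline.

```python
import numpy as np

def cell_features(labels: np.ndarray):
    """Derive cell count, per-cell pixel area, and pairwise centroid
    distances from a labeled mask (0 = background, 1..N = cells)."""
    ids = np.unique(labels)
    ids = ids[ids != 0]                                   # drop background
    areas = {int(i): int((labels == i).sum()) for i in ids}
    centroids = {int(i): np.argwhere(labels == i).mean(axis=0) for i in ids}
    dists = {}
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            i, j = int(ids[a]), int(ids[b])
            dists[(i, j)] = float(np.linalg.norm(centroids[i] - centroids[j]))
    return len(ids), areas, dists

# Toy mask with two 2x2 "cells"
mask = np.zeros((6, 6), dtype=int)
mask[1:3, 1:3] = 1
mask[4:6, 4:6] = 2
count, areas, dists = cell_features(mask)
print(count, areas, dists)  # 2 cells, 4 pixels each
```

Features like these are clinically meaningful on their own (e.g., nuclear crowding), which is part of why the segmentation stage feeds the classifier rather than being a purely visual output.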
Finally, in the "classification" phase, the model uses the unlabeled image, virtual staining image, and segmentation data to classify whether the tissues are cancerous or not.
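One simple way to picture this final fusion step is a classifier that concatenates feature vectors from the three preceding stages and produces a cancer/non-cancer score. The logistic scorer below is a toy stand-in with hypothetical names and weights; the actual framework uses a trained deep classifier.

```python
import numpy as np

def classify(gray_feats, stain_feats, seg_feats, weights, bias):
    """Toy fusion classifier: concatenate features from the label-free
    image, the virtual staining, and the segmentation stages, then apply
    a logistic score; scores >= 0.5 are called cancerous."""
    x = np.concatenate([gray_feats, stain_feats, seg_feats])
    score = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))
    return score, bool(score >= 0.5)

# Illustrative one-feature-per-stage example with hand-set weights
score, is_cancer = classify(
    gray_feats=np.array([0.5]),
    stain_feats=np.array([0.2]),
    seg_feats=np.array([0.8]),
    weights=np.array([1.0, -1.0, 2.0]),
    bias=-0.5,
)
print(score, is_cancer)
```

The key design point is that the classifier sees all three representations at once, so evidence missed in the raw image can still be recovered from the stained or segmented views.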
Results
We applied the interconnected DL-based framework to PAH images of human liver cancer tissues.
This interconnected DL-based framework, which integrates "virtual staining", "segmentation", and "classification", not only demonstrates outstanding realism in virtual staining but also yields additional results that improve the explainability of the model.
It also achieved a high accuracy of 98% in distinguishing between cancerous and non-cancerous liver cells.
Notably, the model demonstrated a 100% sensitivity when evaluated by three pathologists, underscoring its potential for clinical application.
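For readers less familiar with these metrics: accuracy is the fraction of all samples classified correctly, while sensitivity is the fraction of actual cancer cases the model catches (so 100% sensitivity means no cancers were missed). A minimal sketch with made-up toy labels, not the study's data:

```python
def accuracy_and_sensitivity(y_true, y_pred):
    """Accuracy = correct / total; sensitivity (recall on the cancer
    class, label 1) = true positives / actual positives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true), tp / (tp + fn)

# Toy set of 10 samples: one false positive, no missed cancers
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
acc, sens = accuracy_and_sensitivity(y_true, y_pred)
print(acc, sens)  # 0.9 1.0
```

High sensitivity is the clinically critical property in a screening setting, since a missed cancer (false negative) is far costlier than a false alarm.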
Conclusion
We expect that integrating PAH with AI will reduce tissue-biopsy turnaround time and increase the reliability of results.
Additionally, we anticipate that using this framework will lead to more accurate diagnoses and more effective treatment planning for patients.
For more information, read our publication "Deep learning-based virtual staining, segmentation, and classification in label-free photoacoustic histology of human specimens" in Light: Science & Applications.