Deep learning-enabled realistic virtual histology with ultraviolet photoacoustic remote sensing microscopy

A new laser-based scanning technology images fresh tissues with histological realism and with the speed and resolution needed for intra-operative analysis of resected specimens.
Knowing whether a cancer surgery was successful requires lengthy processing of the surgically resected tissue. Many research groups have sought a technology that can produce histology-like images from these tissues within minutes of surgery. However, no such technology has yet achieved the resolution, speed, and realism needed for intra-operative analysis. Our recent Nature Communications paper "Deep learning-enabled realistic virtual histology with ultraviolet photoacoustic remote sensing microscopy" presents a new label-free technology that can address this unmet need.

One thing lacking in other label-free virtual histology imaging technologies has been strong cell nuclei contrast. Our approach, photoacoustic remote sensing (PARS) microscopy, uses pulsed ultraviolet light and a co-focused interrogation beam. Modulations in the interrogation beam are proportional to the optical absorption of the pulsed ultraviolet light. Because DNA has strong optical absorption in the ultraviolet spectrum, our UV-PARS images show strong positive nuclei contrast that mimics a hematoxylin stain. This is complemented by UV scattering data, which mimics an eosin stain.
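To give a sense of how two label-free channels can be mapped to H&E-like colors, here is a minimal sketch using a generic Beer-Lambert-style colorization. This is an illustration only, not the pipeline from our paper; the color vectors and gain are assumed values.

```python
import numpy as np

def pseudo_he(uv_absorption: np.ndarray, uv_scattering: np.ndarray) -> np.ndarray:
    """Map co-registered absorption/scattering channels (normalized to [0, 1])
    to an H&E-like RGB image using illustrative stain color vectors."""
    hematoxylin_rgb = np.array([0.65, 0.70, 0.29])  # nuclear (hematoxylin-like) absorbance
    eosin_rgb       = np.array([0.07, 0.99, 0.11])  # stromal (eosin-like) absorbance

    # Optical density contributed by each channel at every pixel.
    od = (uv_absorption[..., None] * hematoxylin_rgb
          + uv_scattering[..., None] * eosin_rgb)

    rgb = np.exp(-2.5 * od)  # Beer-Lambert transmission; 2.5 is an assumed gain
    return np.clip(rgb, 0.0, 1.0)
```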

We found these data to provide a promising view similar to true histology; however, the images we created lacked the realism pathologists expect. To address this, we turned to deep learning. We fed the absorption and scattering data into a cycle-consistent generative adversarial network (cycleGAN) to render images that are effectively indistinguishable from gold-standard H&E-stained tissues. Unlike previous conditional generative adversarial network methods, our cycleGAN approach did not require paired true and virtual histology data. This was important to us because we need to generate images from thick fresh tissue for which a paired true H&E comparison may be impossible to obtain.
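For readers less familiar with cycleGANs, the sketch below shows the unpaired training objective in PyTorch-style pseudocode. The networks, loss form, and weighting are placeholders, not the specific models or settings from our paper.

```python
import torch
import torch.nn as nn

# Placeholder networks: G maps PARS -> H&E, F maps H&E -> PARS,
# D_he and D_pars are the corresponding discriminators.
adv_loss = nn.MSELoss()   # least-squares GAN loss
cycle_loss = nn.L1Loss()
lambda_cycle = 10.0       # assumed cycle-consistency weight

def generator_step(G, F, D_he, D_pars, pars_batch, he_batch):
    """One unpaired cycleGAN generator update (sketch)."""
    fake_he = G(pars_batch)    # virtual H&E rendered from PARS data
    fake_pars = F(he_batch)    # virtual PARS rendered from real H&E

    # Adversarial terms: each generator tries to fool its discriminator.
    loss_adv = (adv_loss(D_he(fake_he), torch.ones_like(D_he(fake_he)))
                + adv_loss(D_pars(fake_pars), torch.ones_like(D_pars(fake_pars))))

    # Cycle-consistency: translating there and back should recover the input.
    # This is what removes the need for pixel-paired training data.
    loss_cyc = (cycle_loss(F(fake_he), pars_batch)
                + cycle_loss(G(fake_pars), he_batch))

    return loss_adv + lambda_cycle * loss_cyc
```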

Below are example virtual and true H&E images sampled from our paper. 

The image quality looked good to us, but we needed to test the diagnostic utility of these images against existing methods such as frozen sections and gold-standard H&E analysis. To do this, we conducted a blinded study with five pathologists.

The results of this study showed that pathologists preferred our virtual images over frozen sections, and when compared with true H&E images, our approach achieved 96% sensitivity and 91% specificity, with a 97% negative predictive value.
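For reference, these metrics are computed from diagnoses scored against the gold standard as follows; the counts in the example are made up, not numbers from our study.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity and negative predictive value from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # fraction of true positives correctly detected
        "specificity": tn / (tn + fp),  # fraction of true negatives correctly detected
        "npv":         tn / (tn + fn),  # probability a negative call is truly negative
    }

# Example with illustrative counts:
print(diagnostic_metrics(tp=48, fp=5, tn=50, fn=2))
```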

Maloney et al. (J. Biomed. Opt. 2018, 23:1-19) indicated that "A device to detect positive margins should have a high sensitivity, indicating the ability to accurately detect any tumor found in the margins, ideally above 95%. While specificity is less important, excess false positive margin detection would lead to additional unnecessary tissue removal. A new device should have a specificity at least matching current standard best practices, estimated at 85%." Our system satisfies these criteria, and does so with the needed speed and resolution.

On the topic of speed and resolution, we worked hard to achieve a system with sub-0.5 µm resolution and imaging speeds of a few minutes per cm². This combination of resolution and scan speed, together with histological realism and label-free operation, had not previously been achieved. Our system delivers 0.4 µm resolution at scan speeds of 7 min/cm². Previous technologies have reported fast scan speeds, but not at our resolution, as detailed in the comparison table in our paper.

Some of these previous systems used camera-based imaging techniques that require mosaicking of many fields of view, and the stop-start nature of such scanning takes substantial time. Our approach instead uses point scanning with a fast voice-coil stage, which achieves resolution and speed combinations that are hard to match even with camera-based readout.
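A back-of-envelope calculation illustrates the scale involved. The Nyquist sampling assumption, field-of-view size, and per-tile overhead below are illustrative assumptions, not measured values from our system or from any particular camera-based instrument.

```python
# Pixel throughput needed for 0.4 um resolution over 1 cm^2 in 7 minutes.
resolution_um = 0.4
pixel_um = resolution_um / 2           # assume Nyquist sampling (~0.2 um pixels)
pixels_per_cm = 1e4 / pixel_um         # 1 cm = 10,000 um
total_pixels = pixels_per_cm ** 2      # ~2.5e9 pixels per cm^2
scan_time_s = 7 * 60

print(f"pixels per cm^2: {total_pixels:.2e}")
print(f"required pixel rate: {total_pixels / scan_time_s:.2e} pixels/s")

# For a camera-based mosaic, assume 1 mm x 1 mm fields of view and a fixed
# per-tile overhead for stage stop/settle and refocus (values are illustrative).
tiles = (10 / 1) ** 2                  # 100 tiles to cover 1 cm^2
overhead_per_tile_s = 2.0
print(f"mosaic stop/start overhead alone: {tiles * overhead_per_tile_s / 60:.1f} min")
```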

In summary, a suitable intra-operative technology that provides the combination of speed, resolution, realism, and diagnostic utility appears to be within reach.

Click on this link to see a video about our technology: video
