Intelligent microscopy: work smarter, not harder

Within a dynamic biological sample, it is unsurprising that the optimal imaging parameters change during a microscope acquisition. To overcome rigid imaging frameworks, we present event-driven acquisitions that integrate sample changes into the imaging parameter control of a self-driving microscope.
Published in Protocols & Methods

Biological processes take place across a wide range of spatial and temporal scales. As its name suggests, the microscope has been an invaluable tool for observing biological processes at roughly the micron scale, and a popular vehicle for biologists exploring the vast and multi-faceted terrain of the life sciences. Optimizing the imaging conditions of a microscope is a bit like learning to drive through this unpredictable and complex landscape of biological discovery. Driving through a city is very different from cruising along open highways; similarly, different biological processes call for different imaging parameters for their observation. However, unlike commercial cars, which have moved towards increased automation, microscope operation remains a heavily manual task requiring a lot of user input. Acquisitions are limited to rigid frameworks in which the imaging speed and duration are fixed at the beginning of the experiment and, without specific user intervention, are held constant throughout, regardless of the activity observed in the sample.

The perfect microscope would be able to navigate biological events across all timescales, from millisecond fluctuations to the lifetime of the system in question, with no perturbation to the sample. No such perfect microscope exists; instead, the available imperfect microscopes subsample the biological process at their limited spatial resolution and imaging speed, and do so only within the imaging field of view and for a limited duration. Photobleaching and phototoxicity impose further limits on time-lapse fluorescence microscopy of living samples. Choosing the “right” imaging parameters is therefore challenging: increasing the temporal resolution depletes the photon budget and degrades sample health faster, reducing the available imaging duration. Given these limitations, high temporal resolution and long-term imaging are mutually incompatible, and microscopy users are obliged to compromise between imaging speed and duration to capture events of interest under conditions that do not significantly deteriorate sample health.
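As a rough back-of-the-envelope illustration (the numbers below are assumptions chosen for the example, not measurements from our work): if photobleaching limits a labelled structure to roughly $N$ useful exposures before the signal fades, then imaging at a frame rate $f$ exhausts that budget after approximately

$$
T_{\max} \approx \frac{N}{f},
$$

so a sample that tolerates $N = 600$ exposures lasts about ten minutes at one frame per second, but only about one minute at ten frames per second. Imaging ten times faster means observing for ten times less.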

While imperfect microscopes have still been sufficient for developing a better understanding of many biological processes, other questions remain unanswered because of the range of timescales involved and the challenge of accessing them. Anyone working on mitochondrial division is quickly faced with the realities and limitations of live-cell time-lapse fluorescence imaging. Mitochondrial division is a dynamic process during which the organelle undergoes a dramatic shape change at the constriction site before dividing into two separate entities. The process is also fast, taking only a few seconds, yet individual events are relatively rare, often separated by minutes. Capturing such events is therefore challenging, since it calls for different imaging speeds at different times: slow, to allow longer imaging and to wait for events of interest; and fast, to capture each event with good temporal resolution. Compromising in between risks missing events of interest because of either insufficient imaging duration or insufficient temporal resolution.

Ideally, the microscope user would be able to capture these different timescales independently and hence increase the imaging speed only when mitochondrial division events are present in the sample. Given the rigidity of current imaging frameworks, this would require the user to actively observe the sample at a slow imaging speed, and manually launch a fast acquisition only in the presence of mitochondrial divisions – the same way that drivers are required to accelerate and brake in response to the road conditions ahead. Besides being time-consuming and exhausting, such an approach is also made more difficult by human bias and error. Spotting mitochondrial divisions is challenging as they can occur almost anywhere and at any time within the mitochondrial network, equivalent to changing road conditions and unexpected obstacles requiring driver intervention. In fact, the easiest time to spot a mitochondrial division is after the fact – once the organelle has already divided. Therefore, even for an experienced microscopist, by the time they detect an event of interest, the mitochondrion may have already divided and it is too late to collect data on the early stages of this process.

As the world adapts to the arrival of self-driving cars, which reduce human input to a minimum by sensing the driving conditions and the car’s surroundings to navigate the road safely, microscopy is experiencing a similar shift towards self-driving, or intelligent, microscopy. Machine vision and artificial intelligence have made it possible to detect complex visual features and, in some cases, to do so better than human observers. Therefore, to move beyond the inflexible imaging frameworks of imperfect microscopes and the bias and slow reactions of human users, we integrated a neural network that drives the microscope acquisition by detecting subtle changes in the sample, in this case in mitochondrial morphology and the main components of its division machinery.

This framework, termed event-driven acquisitions (EDA), allows the microscope to prioritize imaging speed or imaging duration as needed. In the same way that a self-driving car controls its cruising direction and speed, EDA adapts the temporal resolution, that is, the interval between individual exposures, in response to detected events of interest. Events of interest are thus sampled at high temporal resolution while they unfold, and the acquisition slows down in their absence, extending the imaging duration and preserving sample health and the fluorescence signal. The resulting self-driving microscope tailors the acquisition to the sample and captures more mitochondrial division events at high temporal resolution. Furthermore, because data are no longer acquired indiscriminately, regardless of whether events of interest are present, the resulting datasets are enriched in, and hence denser in, the events of interest, which makes them more valuable.
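For readers who think in code, the core control loop behind this idea can be sketched in a few lines of Python. The snippet below is a minimal illustration only, assuming a hypothetical camera interface (snap_image) and a placeholder event-scoring function (event_score); it is not the code of the EDA plugin, which additionally handles hardware control, network inference and data management.

```python
import time
import numpy as np

# All names below (snap_image, event_score, save_frame) are hypothetical
# placeholders for illustration; they are not the published EDA plugin API.

SLOW_INTERVAL_S = 5.0   # sparse sampling while the sample is quiet
FAST_INTERVAL_S = 0.2   # dense sampling while events are detected
EVENT_THRESHOLD = 0.8   # detector output above this triggers fast imaging


def snap_image():
    """Stand-in for a camera exposure; returns a 2D frame."""
    return np.random.rand(512, 512).astype(np.float32)


def event_score(frame):
    """Stand-in for the neural-network detector.

    In EDA this would be a network trained to highlight, for example,
    mitochondrial constriction sites; here we simply summarize the frame
    with a single scalar between 0 and 1.
    """
    return float(frame.mean())


def save_frame(frame, score, timestamp):
    """Stand-in for storage and downstream analysis."""
    pass


def acquire(total_time_s=600.0):
    """Event-driven loop: image, score the frame, adapt the interval, repeat."""
    start = time.time()
    while time.time() - start < total_time_s:
        frame = snap_image()
        score = event_score(frame)
        # The event-driven decision: image fast only while events are
        # detected, otherwise slow down to spare the photon budget.
        interval = FAST_INTERVAL_S if score > EVENT_THRESHOLD else SLOW_INTERVAL_S
        save_frame(frame, score, time.time() - start)
        time.sleep(interval)


if __name__ == "__main__":
    acquire(total_time_s=60.0)
```

In practice, the switching logic can be made less abrupt, for example by requiring the score to remain above or below a threshold for several consecutive frames before changing speed, so that noisy detections do not cause rapid toggling.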

In the same way that self-driving cars can still be involved in crashes, EDA is not perfect and can still acquire false positives or miss events of interest. As we learn to improve these systems, the question is rather whether self-driving cars and microscopes already do a better job than their human counterparts; our work showed that EDA could capture more mitochondrial divisions at an improved temporal resolution. Another consideration is reproducibility, which relies on tightly controlling the experimental conditions across replicates. Because each EDA dataset is acquired in a unique manner, as a function of the activity observed within the sample, comparing within and across conditions will require careful randomization and sufficiently large datasets to average out experimental variability and uncover underlying population differences.

Overall, EDA harnesses the power of computer vision to enable more complex imaging procedures, optimized to the biology in question. The ability to incorporate any user-trained neural network into the EDA framework (through the open-access EDA plugin provided) allows other microscopists to take EDA for a test drive for a range of biological problems. While there is still a lot of work needed for fully self-driving and intelligent microscopes, we believe EDA provides a starting framework for microscopy users to integrate artificial intelligence on their road to biological discovery.
