
Deep learning architecture enables higher efficiency in light microscope image enhancement

The next step in light microscopy image optimization: new deep learning architecture delivers higher efficiency

Image processing with the m-rBCR model. Credit: CASUS/A. Yakimovich

It is the computational processing of images that reveals the finest details of a sample placed under any of the many kinds of light microscope. Although this processing has come a long way, there is still room for improvement, for example in image contrast and resolution.

Based on a novel deep learning architecture, a new computational model developed by researchers from the Center for Advanced Systems Understanding (CASUS) at HZDR and the Max Delbrück Center for Molecular Medicine is faster than traditional models while matching or even exceeding their image quality. The model, called the Multi-Stage Residual BCR Network (m-rBCR), was developed specifically for microscopy images.

The corresponding conference paper, first presented at the biennial European Conference on Computer Vision (ECCV), is now available in the proceedings volume Computer Vision – ECCV 2024.

The new model breathes new life into the image processing technique called deconvolution. This computationally intensive method increases the contrast and resolution of digital images captured with light microscopes such as wide-field, confocal, or transmission microscopes. Deconvolution aims to reduce blur, a specific type of image distortion introduced by the microscope system used. The two main strategies are explicit deconvolution and deep-learning-based deconvolution.

Explicit deconvolution approaches are based on the concept of the point spread function (PSF). A PSF describes how an infinitesimally small point light source originating in the sample is spread by the optical system into a three-dimensional diffraction pattern.

This means that in a recorded (two-dimensional) image there is always some light from out-of-focus structures, which causes blur. If the PSF of a microscope system is known, the blur can be computed away to obtain an image much closer to reality than the raw recording.
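To make the idea concrete, here is a minimal, hypothetical sketch of explicit deconvolution (not code from the study): a synthetic two-point "sample", a toy Gaussian PSF standing in for the microscope's real one, and the classic Richardson-Lucy algorithm from scikit-image to undo the blur.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

# Forward model: the recorded image is the "true" image convolved with the PSF.
true_img = np.zeros((64, 64))
true_img[20, 20] = true_img[40, 45] = 1.0      # two point-like fluorescent sources

# Toy 2D Gaussian PSF standing in for the microscope's real (often unknown) PSF.
x = np.arange(-7, 8)
g = np.exp(-x**2 / (2 * 2.0**2))
psf = np.outer(g, g)
psf /= psf.sum()

blurred = fftconvolve(true_img, psf, mode="same")  # what the camera would record

# Explicit deconvolution: Richardson-Lucy iteratively inverts the blur
# using the known PSF.
restored = richardson_lucy(blurred, psf, num_iter=30)
```

A useful side note: convolution with the PSF in the spatial domain becomes a simple multiplication in the frequency domain, which is one reason frequency-domain thinking is attractive for deconvolution.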

"The biggest problem with PSF-based deconvolution techniques is that the PSF of a given microscope system is often not available, or is imprecise," says Dr. Artur Yakimovich, Young Investigator Group Leader at CASUS and corresponding author of the ECCV paper. "For decades, people have been working on what is called blind deconvolution, where the PSF is estimated from the image or a set of images. But blind deconvolution is still a very challenging problem, and the progress achieved has been modest."

As Yakimovich's team has shown in the past, the "inverse problem solving" toolbox works well in microscopy. Inverse problems concern recovering the causal factors that led to a particular set of observations. Solving them successfully usually requires a lot of data and deep learning algorithms. As with explicit deconvolution methods, the result is higher-resolution or better-quality images. For the approach presented at ECCV, the scientists used a physics-informed neural network called the Multi-Stage Residual BCR Network (m-rBCR).

Deep learning implemented differently

In general, image processing can start from one of two basic representations: the classical spatial representation of an image, or a frequency representation, which requires a conversion step from the spatial one. In the latter, each image is expressed as a collection of waves. Both representations are valuable.

Some processing operations are easier to perform in one representation and some in the other. The vast majority of deep learning architectures operate in the spatial domain, which is well suited to photographs. Microscopy images, however, are different: they are mostly monochrome, and techniques such as fluorescence microscopy deal with specific light sources on a black background. m-rBCR therefore uses the frequency representation as its starting point.
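As a toy illustration of the two representations (again, not code from the study), the conversion step is simply a two-dimensional Fourier transform; a sparse, fluorescence-style image survives the round trip between the spatial and frequency domains without loss.

```python
import numpy as np

# A monochrome, mostly dark "fluorescence-like" image: bright spots on a black background.
img = np.zeros((128, 128))
img[32, 32] = img[64, 96] = img[100, 50] = 1.0

# Frequency representation: every image can equivalently be written as a sum of 2D waves.
spectrum = np.fft.fft2(img)

# The spectrum is complex; magnitude and phase together carry all of the information,
# and the inverse transform recovers the spatial image (up to numerical error).
recovered = np.fft.ifft2(spectrum).real
assert np.allclose(recovered, img)
```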

"Using the frequency domain in such cases can help create optically meaningful data representations, a concept that allows m-rBCR to solve the deconvolution task with surprisingly few parameters compared with other modern deep learning architectures," explains first author Rui Li, who presented the work at ECCV.

Li proposed building on the neural network architecture of a model called BCR-Net, which was itself inspired by a frequency-representation-based signal compression scheme introduced by Gregory Beylkin, Ronald Coifman, and Vladimir Rokhlin in the 1990s (hence the name BCR transform).

The team validated the m-rBCR model on four datasets: two simulated microscopy image datasets and two real microscopy datasets. It showed high performance with significantly fewer training parameters and shorter runtimes than state-of-the-art deep-learning-based models, and it of course outperforms explicit deconvolution methods.
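The article does not specify which evaluation metrics were used in these comparisons, but restorations of this kind are commonly scored against a ground-truth image with measures such as PSNR and SSIM. A minimal, hypothetical sketch (assuming images normalized to [0, 1] and scikit-image's metric implementations):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(restored: np.ndarray, ground_truth: np.ndarray) -> dict:
    """Report two standard image-quality metrics for a deconvolved image."""
    return {
        "psnr": peak_signal_noise_ratio(ground_truth, restored, data_range=1.0),
        "ssim": structural_similarity(ground_truth, restored, data_range=1.0),
    }
```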


A model adapted to microscopy images

"This new architecture exploits a neglected pathway to learn representations beyond classical convolutional neural network approaches," summarizes co-author Prof. Misha Kudryashev, Structural Biology Group Leader at the Max Delbrück Center for Molecular Medicine in Berlin.

"Our model significantly reduces the number of potentially unnecessary parameters. As the results show, this does not come at the cost of performance. The model is clearly suitable for microscopy images and, thanks to its lightweight architecture, it defies the trend toward ever-larger models requiring ever more computing power."
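The parameter counts these quotes refer to are easy to inspect for any deep learning model. The following generic PyTorch snippet (not the authors' code, and with a stand-in toy model rather than the real m-rBCR architecture) shows how such a comparison is typically made:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Example with a small stand-in model; the real m-rBCR architecture is not reproduced here.
toy_model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
print(count_parameters(toy_model))
```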

The Yakimovich group recently published an image-quality enhancement model based on generative artificial intelligence. This Conditional Variational Diffusion Model produces state-of-the-art results that exceed those of the m-rBCR model presented here.

"But you need training data and computational resources, including adequate graphics processing units, which are highly sought after these days," Yakimovich notes. "The lightweight m-rBCR model does not have these limitations and still produces very good results. I am therefore confident that we will see strong interest from the imaging community. To facilitate this, we have begun working on user-friendliness."

Yakimovich's group, Machine Learning for Infection and Disease, aims to understand the complex network of molecular interactions that come into play after the body is infected with a pathogen. Harnessing the new possibilities of machine learning is essential here. His research interests include improving image resolution, 3D image reconstruction, automatic disease diagnosis, and assessing image reconstruction quality.

More information:
Rui Li et al., Solving the Microscopy Deconvolution Problem with a Residual Beylkin-Coifman-Rokhlin Neural Network, Computer Vision – ECCV 2024 (2024). DOI: 10.1007/978-3-031-73226-3_22

Citation: Deep learning architecture enables higher efficiency in light microscope image enhancement (2024, November 19), retrieved November 19, 2024 from:

This document is subject to copyright. No part may be reproduced without written permission except in any fair dealing for private study or research purposes. The content is provided for informational purposes only.