CN111429562A - Wide-field color light slice microscopic imaging method based on deep learning - Google Patents


Info

Publication number
CN111429562A
CN111429562A (application CN202010117274.4A)
Authority
CN
China
Prior art keywords
wide
field
imaging
color
wfm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010117274.4A
Other languages
Chinese (zh)
Other versions
CN111429562B (en)
Inventor
柏晨
姚保利
但旦
千佳
党诗沛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MGI Tech Co Ltd
Original Assignee
XiAn Institute of Optics and Precision Mechanics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS filed Critical XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202010117274.4A priority Critical patent/CN111429562B/en
Publication of CN111429562A publication Critical patent/CN111429562A/en
Application granted granted Critical
Publication of CN111429562B publication Critical patent/CN111429562B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T7/596 Depth or shape recovery from multiple images from stereo images from three or more stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/90 Determination of colour characteristics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image
    • G06T2207/10061 Microscopic image from scanning electron microscope

Abstract

The invention belongs to the technical field of optical microscopic imaging and provides a wide-field color light-slice microscopic imaging method based on deep learning, which solves the problems that defocus information is mixed and resolution is poor in wide-field microscopic imaging, and that SIM-OS imaging requires large-scale data acquisition. The uniquely high resolution and the panchromatic property of full-color structured-light illumination are fully utilized: a single wide-field microscopic imaging result can be used for training, and the method generalizes to wide-field microscopic imaging experiments. The method can directly obtain, from a wide-field image, a high-quality full-color image with wide-field depth-of-field optically sectioned components, while matching full-color structured-light illumination imaging in spatial resolution and in three-dimensional reconstruction and data-analysis capability; the required data volume is sharply reduced compared with full-color structured-light illumination. By extracting the wide-field depth-of-field light-slicing result and reducing data acquisition, the method remarkably improves the imaging throughput of the imaging system and reduces its phototoxicity risk, without loss of detail.

Description

Wide-field color light slice microscopic imaging method based on deep learning
Technical Field
The invention belongs to the technical field of optical microscopic imaging, particularly relates to deep learning technology combined with full-color wide-field microscopic imaging, and provides a low-data-acquisition, high-spatiotemporal-resolution wide-field color light-slice microscopic imaging method based on deep learning.
Background
Wide-field microscopy is a basic sample imaging method in which the entire sample of interest is exposed to the light source and the observed image is obtained through an eyepiece, camera, or computer monitor. As a low-cost, fast, low-photobleaching imaging modality compared with confocal or electron microscopy, wide-field microscopy (WFM) is generally the imaging tool of choice for biologists. However, WFM not only collects light from targets in the focal plane but also superimposes all light from illuminated sample layers above and below it, so analysis and visualization of wide-field data are often challenging: the images obtained are typically accompanied by mixed defocused information, low signal-to-noise ratio, and poor axial resolution. Currently, WFM often employs deconvolution techniques to suppress out-of-focus blur and recover microscopic images with improved contrast and resolution, but this is a complex and time-consuming computational process, often requiring many iterations to produce corrected images and relying on detailed estimated parameter inputs.
Although confocal microscopy has the advantage of producing a sharply focused image by blocking background and autofluorescence, some in-focus information may be lost at the pinhole, and confocal microscopy is generally time-consuming and structurally complex. Another method currently used to extract in-focus information is structured illumination microscopy (SIM), which illuminates the sample with a structured light field of sinusoidal intensity instead of the traditional uniform field and can extract clear sample information from the modulated images.
However, as with many other SIM-based optical sectioning (SIM-OS) methods, the common way to image an entire specimen with full-color SIM (FC-SIM) is to take multiple images corresponding to different object planes; in other words, an "optical slice" of the sample against the out-of-focus background is achieved by moving along the optical axis. One of the main disadvantages of this approach is that, because each slice carries only limited in-focus information, large-scale three-dimensional imaging of a thick sample (e.g. an insect) requires a large amount of acquired data. In fact, almost all SIM-OS-based methods readily obtain two-dimensional images of thin samples with satisfactory resolution, but when imaging thick samples the reconstructed image is severely disturbed by out-of-focus signals, and an image with sufficient in-focus components often cannot be obtained. Extensive data acquisition and processing is therefore inevitable when imaging the entire sample.
In recent years, deep learning has gradually developed into an effective method for removing background information and enhancing resolution. Convolutional neural networks (CNNs) and related models have already been used successfully to improve imaging depth and reconstruction efficiency in microscopic imaging, and CNNs have also been shown to perform both autofocusing and phase recovery. These applications show that a neural network can achieve high-performance OS reconstruction without prior information, improving imaging quality while effectively reducing the data acquisition required for WF imaging.
Disclosure of Invention
The invention aims to provide a wide-field color light-slice microscopic imaging method based on deep learning, named FC-WFM-Deep, which combines deep learning, wide-field microscopic imaging, and data fusion, so as to solve the problems that defocus information is mixed and resolution is poor in wide-field microscopic imaging, and that SIM-OS imaging requires large-volume data acquisition.
According to the method, the wide-field depth-of-field OS result is reconstructed directly from the wide-field data, and a high-quality 3D image is then constructed, so that the data acquisition volume is remarkably reduced without loss of detail and the imaging throughput of the system is improved.
In order to achieve the purpose, the invention adopts the following technical scheme:
a wide-field color light slice microscopic imaging method based on depth learning comprises the following steps:
step one, constructing and training an FC-WFM-Deep model;
step 1.1, obtaining I_OS and I_wide, where I_OS and I_wide are respectively a full-color structured-light-slice microscopic imaging result and the corresponding full-color wide-field microscopic imaging result;
step 1.2, determining the depth-of-field range of the FC-WFM, selecting a plurality of full-color structured-light-slice microscopic imaging results I_OS, and determining I_cos and I_wide,K+1, where I_cos is the wide-field depth-of-field OS imaging result and I_wide,K+1 is the wide-field data located at the central position of a stack of 2K+1 frames scanned in the z direction, K being a parameter chosen so that the stack contains an odd number of frames;
step 1.3, training the FC-WFM-Deep model with the I_cos and I_wide,K+1 data obtained in step 1.2;
step 1.31, establishing an FC-WFM-Deep model by using CNN;
step 1.32, training the FC-WFM-Deep model with the residual data set formed from I_cos and I_wide,K+1 as the training data set;
step 1.321, because using the whole image as input to the CNN model would greatly increase the size and computational cost of the training data set, the training data set is divided into center-aligned sub-data pairs {I_wide,K+1^p} and {I_cos^p}. Since the information at the central point is affected by the surrounding area, the side length of the sub-data pairs can be determined from the point spread function. For better accuracy, experimental data from several different fields of view are combined, so that thousands of sub-image pairs can be obtained to meet the scale requirement of a neural-network training data set;
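The sub-data-pair division of step 1.321 can be sketched as follows. This is a minimal illustration: the function name `extract_subdata_pairs` is not from the patent, and the 180-pixel side with a 60-pixel step follows the embodiment described later rather than a PSF-derived value.

```python
import numpy as np

def extract_subdata_pairs(i_wide, i_cos, side=180, stride=60):
    """Divide a wide-field image and its wide-DOF OS counterpart into
    center-aligned sub-data pairs (step 1.321). side/stride are taken
    from the embodiment; in practice the side length would be chosen
    from the extent of the point spread function."""
    pairs = []
    h, w = i_wide.shape[:2]
    for y in range(0, h - side + 1, stride):
        for x in range(0, w - side + 1, stride):
            pairs.append((i_wide[y:y + side, x:x + side],
                          i_cos[y:y + side, x:x + side]))
    return pairs

# A single 2048 x 2048 field of view already yields about a thousand pairs;
# combining several fields of view gives the thousands the text requires.
pairs = extract_subdata_pairs(np.zeros((2048, 2048, 3)),
                              np.zeros((2048, 2048, 3)))
```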
step 1.322, determining the target of the FC-WFM-Deep model: learning a model f that predicts Î_cos = f(I_wide,K+1), where Î_cos denotes an estimate of the wide-field depth-of-field OS imaging result I_cos. In addition, unlike directly predicting the output image, a residual network has the advantages of easy training, high training precision, fast convergence, and improved imaging performance; therefore, in FC-WFM-Deep, the network is actually trained to predict the residual image R = I_cos - I_wide,K+1. The FC-WFM-Deep network with a depth of L has three types of layers: convolutional layers (Conv), rectified linear unit layers (ReLU), and batch normalization layers (BN).
Step 1.323, learning trainable parameters l (Θ) of the FC-WFM-Deep model by taking the predicted value estimated in step 1.322 f and the average mean square error of the corresponding residual image as a loss function, and the process is expressed as:
Figure BDA0002391878470000048
where T is the total number of sub-data pairs used to train the network,
Figure BDA0002391878470000049
is the p-th
Figure BDA00023918784700000410
Data, RpIs the corresponding residual image;
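The loss of step 1.323 can be written out directly. Below is a minimal numpy sketch with illustrative names (`residual_loss` is not from the patent), using plain arrays to stand in for the CNN's per-pair residual predictions:

```python
import numpy as np

def residual_loss(predictions, residuals):
    """l(Theta) from step 1.323: squared Frobenius error between the
    network's predicted residuals f(I_wide,K+1^p) and the true residual
    images R_p = I_cos^p - I_wide,K+1^p, averaged over T sub-data pairs
    with the 1/(2T) factor of the patent's formula."""
    T = len(predictions)
    return sum(np.sum((f_p - r_p) ** 2)
               for f_p, r_p in zip(predictions, residuals)) / (2.0 * T)

# Toy check with two 2x2 sub-data pairs: first pair off by 1 everywhere.
preds = [np.ones((2, 2)), np.zeros((2, 2))]
trues = [np.zeros((2, 2)), np.zeros((2, 2))]
loss = residual_loss(preds, trues)  # (4 + 0) / (2 * 2) = 1.0
```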
step 1.324, solving parameters of the FC-WFM-Deep model by using an optimization solving algorithm to obtain a trained FC-WFM-Deep model;
step two, obtaining high-quality wide-field depth-of-field OS images by using the trained FC-WFM-Deep model, and performing three-dimensional imaging;
inputting a series of arbitrary wide-field microscopic imaging data into the FC-WFM-Deep model trained in step one, predicting the corresponding series of high-quality wide-field depth-of-field OS images, and then performing three-dimensional imaging.
Further, step 1.1 specifically includes the following steps:
step 1.11, for each axis plane in the z-axis scanning sequence, obtaining an original color image by utilizing the full-color structured light illumination imaging of a multi-step phase shift method, and storing each image; wherein the z-axis direction is the imaging scanning depth direction;
step 1.12, traversing all z-axis positions of the focal plane as in step 1.11 with a set z-axis scanning step, obtaining volume data of different axial planes;
step 1.13, converting each image from RGB color space to HSV color space according to the image data of each axial plane, and respectively obtaining three H, S and V space components;
step 1.14, applying the root-mean-square decoding algorithm to each of the three HSV space components to obtain the OS image I_{i,z}(x,y) of each component:

I_{i,z}(x,y) = (sqrt(2)/3) * sqrt[ (I_{i,1} - I_{i,2})^2 + (I_{i,2} - I_{i,3})^2 + (I_{i,3} - I_{i,1})^2 ]

where i = H, S, V; I_{i,1}, I_{i,2}, I_{i,3} are the three phase-shifted images of component i; and z, x, y are the three spatial coordinates;
step 1.15, the WF image data I_{i,wide}(x,y) of the three spatial components can then also be obtained:

I_{i,wide}(x,y) = ( I_{i,1} + I_{i,2} + I_{i,3} ) / 3
Step 1.16, recombining the OS image and the WF image of the three HSV space components and converting the images back to the RGB space to form a full-color structured light slice microscopic imaging result IOSAnd corresponding full-color wide-field microscopic imaging result IwideFor subsequent processing in the computer.
Further, the multi-step phase shift method in step 1.11 specifically includes: three fringe illumination original color images with adjacent phase shifts of 2 pi/3 are acquired simultaneously by a color camera.
Further, step 1.2 specifically includes the following steps:
step 1.21, according to the set z-axis scanning step and the average OS intensity of the imaging system, selecting within the effective-information scanning depth an odd number of consecutive full-color structured-light-slice microscopic imaging results I_OS, and carrying out in-focus information fusion with a multi-focus method for color microscopic images based on multi-scale decomposition;
preferably, using a multi-focus imaging-information fusion technique based on the discrete cosine transform, selecting at each pixel the discrete cosine coefficient of the light slice with the maximum variance value to form the image D of the wide field depth of field, and selecting the corresponding central wide-field data I_wide,K+1:

D(m,n) = C_{j*}(m,n), with j* = arg max_j σ_j^2( O(m,n) )

where j = 1, 2, ..., 2K+1 denotes the j-th slice in the stack, C_j denotes the discrete cosine coefficients of the j-th slice, O(m,n) denotes the calculation-region block of I_OS with (m,n) as central pixel, and σ_j^2 is the corresponding discrete-cosine-coefficient variance;
step 1.22, after a consistency check, carrying out the inverse discrete cosine transform on D to obtain the wide-field depth-of-field OS result I_cos.
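A minimal block-wise sketch of the preferred DCT fusion of step 1.21 and the inversion of step 1.22. The 8x8 block size, the winner-take-all block layout, and the function name are illustrative assumptions, and the consistency check is omitted:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(stack, block=8):
    """For each block, keep the slice whose DCT coefficients have the
    largest variance (a proxy for in-focus high-frequency content),
    then apply the inverse DCT to form the wide-DOF image."""
    n, h, w = stack.shape
    fused = np.empty((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            coeffs = np.stack([dctn(stack[j, y:y + block, x:x + block],
                                    norm='ortho') for j in range(n)])
            j_best = int(np.argmax(coeffs.reshape(n, -1).var(axis=1)))
            fused[y:y + block, x:x + block] = idctn(coeffs[j_best],
                                                    norm='ortho')
    return fused

# The high-variance checkerboard slice should win over a flat (defocused) one.
stack = np.zeros((2, 8, 8))
stack[1] = np.indices((8, 8)).sum(axis=0) % 2
fused = dct_fuse(stack)
```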
Further, in step 1.21, the effective information scanning depth is the depth of field distance of the wide-field microscopic imaging.
Further, the FC-WFM-Deep model has a depth of 17 or 20 layers.
Further, the optimization solution algorithm in step 1.324 adopts an adaptive moment estimation ADAM algorithm.
Further, K = 3.
The invention has the beneficial effects that:
1. The invention combines deep learning, multi-focus fusion, and wide-field microscopic imaging, overcomes the problems of mixed out-of-focus information and poor resolution in wide-field microscopic imaging and the large data-acquisition requirement of SIM-OS, and realizes a low-data-acquisition, high-spatiotemporal-resolution full-color light-slice wide-field microscopic imaging method based on deep learning.
2. The invention can directly predict a high-resolution imaging result with extended depth of field from wide-field microscopic data, avoiding the phototoxicity risk introduced by the multiple imaging scans of a multi-step phase-shift method.
3. The invention adopts deep learning to directly predict the imaging result; the imaging frame rate of the method can reach 100 Hz, giving extremely high temporal resolution.
Drawings
FIG. 1 is a schematic diagram of a full-color microscopic imaging system.
In the figure, 1-color camera, 2-tube lens, 3-beam splitter, 4-objective lens, 5-collimating lens, 6-LED light source, 7-prism, 8-DMD, and 9-sample.
Fig. 2 is a schematic diagram of data acquisition in different optical imaging modes and their corresponding depth of focus and scanning modes. Wherein the FC-WFM image and the FC-SIM image are obtained from the same raw data to facilitate theoretical analysis and fair comparison.
FIG. 3 shows intensity distributions illustrating the defocus range, from which the average OS intensity in FC-SIM and the depth of field of FC-WFM can be measured.
FIG. 4 is a schematic diagram of the FC-WFM-Deep method processing of the present invention. Wherein: (a) a model training process, (b) imaging using a well-trained network.
FIG. 5 is a comparison of OS imaging results of an insect compound eye by the FC-WFM-Deep method of the present invention and conventional full-color microscopy, wherein (a) full-color wide-field microscopy, (b) full-color structured-light illumination, and (c) FC-WFM-Deep.
Fig. 6 shows the results of compound eye three-dimensional imaging using conventional full-color wide-field microscopy, in which (a) maximum projection imaging, (b) three-dimensional height distribution map, and (c) height envelope distribution of selected cross-sections.
Fig. 7 shows the result of compound eye three-dimensional imaging with conventional full-color structured light illumination, in which (a) maximum projection imaging, (b) three-dimensional height distribution map, and (c) height envelope distribution of selected cross-sections.
FIG. 8 shows the results of compound eye three-dimensional imaging using the FC-WFM-Deep method of the present invention, wherein (a) maximum projection imaging, (b) three-dimensional height distribution map, and (c) height envelope distribution of selected cross-sections.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
Referring to fig. 1, the present invention illustrates a full-color microscopic imaging system that performs raw WFM and FC-SIM imaging. A collimated high-power white-light LED (SILIS-3C, Thorlabs) is used as the illumination source. The LED light enters a total-internal-reflection prism (TIR prism) and is then reflected to a DMD (V7000, ViALUX GmbH, Germany), after which the modulated light passes through an optical projection system comprising an achromatic collimating lens, a beam splitter, and an objective lens (20x, NA 0.45, Nikon Inc., Japan) and projects a sinusoidal fringe pattern onto a sample mounted on an x-y-z motorized translation stage (Adox). A color camera (80 FPS, 2048 × 2048 pixels, IDS GmbH, Germany) captures the scanned 2D wide-field images.
Referring to FIG. 2, both FC-WFM and FC-SIM imaging can be performed on this system. A common way to image the entire sample is to take multiple images corresponding to different focal planes along the z-scan direction. In FC-WFM, each imaged slice in the z-scan is a mixture of in-focus and out-of-focus components, though its intensity curve gives it a relatively large depth of focus. In contrast, FC-SIM can efficiently select focus information, suppress the out-of-focus background, and reconstruct images with high sharpness, although its depth of focus is much smaller than that of FC-WFM, meaning a large number of z-scan slices is required for 3D reconstruction. In the FC-WFM-Deep of the present invention, once the neural network is trained, an OS image with a depth-extended focal plane, containing the focus information of a series of FC-SIM images, can be reconstructed directly from the WFM data. That is, the reconstructed slice has defocus suppression and data-analysis capability comparable to FC-SIM imaging, with a depth of focus comparable to FC-WFM, so the data volume required for imaging is greatly reduced in terms of both phase-shift scans and data acquisition.
To achieve the above object, I_OS and I_wide are first obtained on this system, where I_OS and I_wide are respectively the full-color structured-light-slice microscopic imaging result and the corresponding full-color wide-field microscopic imaging result, with the following specific steps:
1) for each axis plane in the z-axis scanning sequence, three fringe illumination original color images with adjacent phase shift of 2 pi/3 are synchronously acquired by a color camera, and all the images are stored in a computer;
2) traversing all z-axis positions of the focal plane as in step 1) with a set scanning step of 0.5 μm, thereby obtaining volume data of different axial layers, each axial layer imaging 2048 × 2048 pixels;
3) for three image data of each axial plane, converting each image from an RGB color space into an HSV (hue, saturation and value) color space to obtain three H, S and V components respectively.
4) Applying the root-mean-square decoding algorithm to each of the three HSV channels to obtain the OS image I_{i,z}(x,y) of each channel:

I_{i,z}(x,y) = (sqrt(2)/3) * sqrt[ (I_{i,1} - I_{i,2})^2 + (I_{i,2} - I_{i,3})^2 + (I_{i,3} - I_{i,1})^2 ]

where i = H, S, V;
5) The WF data I_{i,wide}(x,y) of the three spatial components can then be obtained:

I_{i,wide}(x,y) = ( I_{i,1} + I_{i,2} + I_{i,3} ) / 3
6) The OS images and WF images of the three HSV channels are recombined and converted back to RGB space to form the full-color structured-light-slice microscopic imaging result I_OS and the corresponding full-color wide-field microscopic imaging result I_wide for subsequent processing in the computer.
Then, referring to FIG. 3, the average OS intensity of the imaging system and the depth of field of FC-WFM are calculated, and the wide-field depth-of-field OS imaging result I_cos is obtained, with the following specific steps:
1) According to the z-axis scanning step and the WFM illumination intensity of the imaging system, the useful imaging information is concentrated within 3.5 μm, i.e. the depth-of-field range of FC-WFM; therefore 7 consecutive (i.e. K = 3) full-color structured-light-slice micro-imaging results I_OS are selected, and in-focus information fusion is performed with a multi-focus method for color microscopic images based on multi-scale decomposition. Preferably, using a multi-focus imaging-information fusion technique based on the discrete cosine transform, the discrete cosine coefficient of the light slice with the maximum variance value is selected at each pixel, forming the imaging data D of the wide field depth of field:

D(m,n) = C_{j*}(m,n), with j* = arg max_j σ_j^2( O(m,n) )

where j = 1, 2, ..., 7 denotes the j-th slice in the stack, C_j denotes the discrete cosine coefficients of the j-th slice, O(m,n) denotes the calculation-region block of the full-color structured-light-slice microscopic imaging result I_OS with (m,n) as central pixel, and σ_j^2 is the corresponding discrete-cosine-coefficient variance. At the same time, the corresponding central wide-field data I_wide,4 is selected.
2) Then, after a consistency check, the inverse discrete cosine transform is applied to D to obtain the wide-field depth-of-field OS result I_cos.
Finally, referring to FIG. 4, the FC-WFM-Deep model is trained with the above I_cos and I_wide,4 data and used for imaging, with the following specific steps:
1) As shown in FIG. 4(a), the FC-WFM-Deep model is built using a CNN and trained with the data set formed from I_wide,4 and I_cos. Since using the whole image as input to the CNN model would greatly increase the size and computational cost of the training data set, the training data set is divided into center-aligned sub-data pairs {I_wide,4^p} and {I_cos^p}. The information at the central point is influenced by the surrounding area, so the side length of the sub-data pairs can be determined from the point spread function; specifically, the sub-data side length is 180 × 180 pixels with a scanning step of 60 pixels. For better accuracy, experimental data from five different fields of view are combined, so that thousands of sub-image pairs can be obtained to meet the scale requirement of the neural-network training data set;
2) Thereafter, the goal of FC-WFM-Deep is to learn a model f predicting Î_cos = f(I_wide,4), where Î_cos denotes an estimate of the wide-field depth-of-field OS result I_cos. Unlike directly predicting the output image, a residual network has the advantages of easy training, high training precision, fast convergence, and improved imaging performance, so in FC-WFM-Deep the network is actually trained to predict the residual image R = I_cos - I_wide,4. The network has 20 layers of three types: convolutional layers, rectified linear unit layers, and batch normalization layers. Finally, the trainable parameters Θ of FC-WFM-Deep are learned with the mean squared error of the corresponding residual image as the loss function; the process can be expressed as:

l(Θ) = (1/2T) Σ_{p=1}^{T} || f(I_wide,4^p; Θ) - R_p ||_F^2

where T is the total number of sub-data pairs used to train the network, I_wide,4^p is the p-th wide-field sub-data, and R_p is the corresponding residual image;
3) On this basis, the parameters of the FC-WFM-Deep model are solved with the ADAM optimization algorithm, with the momentum and weight-decay parameters set to 0.9 and 0.0001 respectively. To train the network faster, the learning rate is initially set to 0.1, then decreased by a factor of 10 every 30 stages, and training stops after 90 stages. The training process is implemented on an NVIDIA Titan V GPU.
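The schedule described above amounts to a simple step decay, sketched below (the function name is illustrative and a "stage" is read as a training epoch):

```python
def learning_rate(stage, base=0.1, factor=10.0, every=30, stop=90):
    """Step-decay schedule from the embodiment: start at 0.1, divide by 10
    every 30 stages, and stop training after 90 stages (returns 0.0)."""
    if stage >= stop:
        return 0.0
    return base / (factor ** (stage // every))

rates = [learning_rate(s) for s in (0, 29, 30, 60, 90)]
```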
4) After the corresponding FC-WFM-Deep model is obtained, a series of arbitrary wide-field microscopic imaging data is input into the well-trained model, and a corresponding series of high-quality wide-field depth-of-field OS images can be predicted, and then three-dimensional imaging can be performed, as shown in FIG. 4 (b).
Referring to fig. 5, imaging results are illustrated for the tiger beetle compound eye, taken at an imaging depth of z = 14 μm under white-light LED illumination. Compared with FC-WFM, the FC-SIM-based OS reconstruction suppresses the background effectively, while the remaining in-focus component occupies a relatively small area relative to the WF data.
Referring to figs. 6-8, the precise measurement capability of the different imaging methods in terms of ultrastructure is illustrated under the same imaging conditions. Figs. 6(a), 7(a), and 8(a) first compare the maximum-projection color images of the compound eye for the different methods. FC-WFM visually exhibits relatively poor imaging quality due to background deterioration, while the details of the FC-SIM compound-eye imaging result are significantly easier to observe than FC-WFM, mainly because of the extraction of the in-focus components. Finally, for FC-WFM-Deep with wide-field depth-of-field OS imaging capability, the maximum-projection result is substantially consistent with the FC-SIM result. However, the FC-SIM method involves 161 three-step phase-shift slices, i.e. 483 processed images, whereas reconstructing the FC-WFM-Deep result involves only 23 WF images, i.e. 23 computed images; in other words, the data size is reduced by a factor of 21. This shows that the proposed FC-WFM-Deep can obtain imaging quality comparable to FC-SIM directly from WF data, but with far less data acquisition.
Furthermore, to quantify the spatial resolution, a height map is also estimated in the region of interest outlined by a dashed line. Since the reconstruction result is severely affected by defocus information, the structure of the compound eye can only be roughly resolved in the three-dimensional height map of WFM. In contrast, details become clearer in the results of FC-SIM and FC-WFM-Deep. This corresponds to the contours along the white dashed line in the different results, where the WFM contour can only roughly confirm the number of facets. By contrast, the FC-WFM-Deep reconstruction of the present invention has a profile consistent with the FC-SIM profile; a single-facet radius of curvature of about 13 μm can be measured, while the overall compound-eye radius of curvature is about 1.75 mm.
Therefore, with deep learning as the core, the wide-field depth-of-field OS imaging result, which has extended depth of field and suppresses the out-of-focus background, is reconstructed from full-color wide-field imaging, and biological samples such as insects and cells are imaged. The unique high resolution of full-color structured light and the low data volume of full-color residual deep learning are exploited jointly, overcoming the insufficient resolution caused by the full-color wide-field out-of-focus background and the large-volume acquisition otherwise required to suppress it. In particular, under high-frame-rate and high-sampling-rate requirements, the algorithm also effectively improves imaging speed. The method is of great significance for ultra-fast two-dimensional and three-dimensional imaging.

Claims (8)

1. A wide-field color light slice microscopic imaging method based on deep learning is characterized by comprising the following steps:
step one, constructing and training an FC-WFM-Deep model;
step 1.1, obtaining I_OS and I_wide, where I_OS and I_wide are respectively a full-color structured light slice microscopic imaging result and the corresponding full-color wide-field microscopic imaging result;
step 1.2, determining the depth-of-field range of FC-WFM, selecting a plurality of full-color structured light slice microscopic imaging results I_OS, and determining I_cos and I_wide,K+1, where I_cos is the wide depth-of-field OS imaging result and I_wide,K+1 is the wide-field data located at the center of a 2K+1-frame z-direction scanning stack, K being a parameter selected according to the odd number of frames in the stack;
step 1.3, training the FC-WFM-Deep model with the I_cos and I_wide,K+1 data obtained in step 1.2;
step 1.31, establishing the FC-WFM-Deep model using a CNN;
step 1.32, training the FC-WFM-Deep model with the residual data set formed by I_cos and I_wide,K+1 as the training data set;
step 1.321, dividing the training data set into corresponding sub-data pairs, namely [Figure FDA0002391878460000011] and [Figure FDA0002391878460000012];
step 1.322, determining the target of the FC-WFM-Deep model as follows: learning a model f that predicts the value [Figure FDA0002391878460000013], where [Figure FDA0002391878460000014] represents an estimate of the wide depth-of-field OS imaging result [Figure FDA0002391878460000015];
step 1.323, taking as the loss function l(Θ), with trainable parameters Θ, the mean square error between the value predicted by f in step 1.322 and the corresponding residual image R [Figure FDA0002391878460000016], so as to learn the FC-WFM-Deep model; the process is expressed as: [Figure FDA0002391878460000017], where T is the total number of sub-data pairs used to train the network, [Figure FDA0002391878460000018] is the p-th [Figure FDA0002391878460000019] datum, and R_p is the corresponding residual image;
step 1.324, solving parameters of the FC-WFM-Deep model by using an optimization solving algorithm to obtain a trained FC-WFM-Deep model;
step two, obtaining a wide depth-of-field OS image by using the trained FC-WFM-Deep model and carrying out three-dimensional imaging:
inputting a series of arbitrary wide-field microscopic imaging data into the FC-WFM-Deep model trained in step one, predicting the corresponding series of wide depth-of-field OS images, and then performing three-dimensional imaging.
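The residual-learning objective of steps 1.32-1.324 can be illustrated numerically. In the sketch below the CNN f is replaced by a hypothetical single-parameter linear map, and plain gradient descent stands in for the ADAM optimizer of claim 7, purely to show the loss l(Θ) on the residual and the final OS estimate; the data are random stand-ins, not the patent's images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy residual data set: T sub-data pairs of wide-field patches (I_wide,K+1)
# and wide depth-of-field OS targets (I_cos); values are synthetic stand-ins.
T = 8
i_wide = rng.random((T, 16, 16))
i_cos = 0.7 * i_wide                  # hypothetical "clean" targets
residuals = i_wide - i_cos            # R_p = I_wide,K+1^p - I_cos^p

def f(x, theta):
    """Hypothetical one-parameter stand-in for the residual CNN."""
    return theta * x

def loss(theta):
    # l(Theta) = 1/(2T) * sum_p || f(x_p) - R_p ||_F^2  (MSE on the residual)
    diff = f(i_wide, theta) - residuals
    return np.sum(diff ** 2) / (2 * T)

# One gradient-descent step (gradient of the quadratic loss in theta).
theta = 0.0
grad = np.sum((f(i_wide, theta) - residuals) * i_wide) / T
theta -= 0.01 * grad   # learning rate 0.01 -- illustrative choice

# The OS estimate subtracts the predicted residual from the wide-field input.
i_cos_hat = i_wide - f(i_wide, theta)
print(loss(0.0) > loss(theta))  # True: the update reduced the loss
```

In the claimed method the same structure holds, except that f is a 17- or 20-layer CNN (claim 6) trained with ADAM (claim 7) over all T sub-data pairs.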
2. The method for wide-field color light slice microscopic imaging based on deep learning of claim 1, wherein step 1.1 specifically comprises the steps of:
step 1.11, for each axial plane in the z-axis scanning sequence, acquiring the original color images under full-color structured-light illumination using a multi-step phase-shift method, and storing each image, wherein the z-axis direction is the imaging scanning depth direction;
step 1.12, traversing all z-axis positions of the focal plane according to step 1.11 with a set z-axis scanning step length, obtaining volume data of the different axial planes;
step 1.13, for the image data of each axial plane, converting each image from the RGB color space to the HSV color space, obtaining the three spatial components H, S and V respectively;
step 1.14, applying the root-mean-square decoding algorithm to each of the three HSV spatial components to obtain the OS image I_i,z(x,y) of each component:
[Figure FDA0002391878460000021]
where i = H, S, V, and z, x, y are the three spatial coordinates;
step 1.15, obtaining the WF image data I_i,wide(x,y) of the three spatial components:
[Figure FDA0002391878460000022]
step 1.16, recombining the OS images and WF images of the three HSV spatial components and converting them back to the RGB space, forming the full-color structured light slice microscopic imaging result I_OS and the corresponding full-color wide-field microscopic imaging result I_wide.
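The decoding of steps 1.14-1.15 can be sketched as follows. The actual formulas are hidden behind the figure placeholders above, so this uses the standard three-step root-mean-square optical-sectioning formula for 2π/3 phase shifts (consistent with claim 3); it is shown on a single component, whereas the method applies it to each of H, S and V:

```python
import numpy as np

def os_wf_decode(i1, i2, i3):
    """Three-step root-mean-square decoding (phase shifts of 2*pi/3).

    In the claimed method this is applied separately to the H, S and V
    components of each axial plane; a single component is shown here.
    Returns the optically-sectioned (OS) and wide-field (WF) images.
    """
    i_os = np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    i_wf = (i1 + i2 + i3) / 3.0
    return i_os, i_wf

# Synthetic check: sinusoidal fringes over a uniform in-focus sample.
h, w = 32, 32
x = np.arange(w) / w
a, b = 0.5, 0.2                        # DC level and modulation depth
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
frames = [a + b * np.cos(2 * np.pi * 4 * x + p) * np.ones((h, 1))
          for p in phases]

i_os, i_wf = os_wf_decode(*frames)
# WF recovers the DC level a exactly (the three shifted cosines sum to zero);
# OS is a constant proportional to b (factor 3/sqrt(2) for this scheme).
```

This also shows why the WF image of step 1.15 comes for free from the same three raw frames: averaging cancels the fringe modulation, while the RMS combination isolates it.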
3. The deep learning-based wide-field color light slice microscopic imaging method according to claim 2, wherein the multi-step phase shift method in step 1.11 is specifically: three fringe illumination original color images with adjacent phase shifts of 2 pi/3 are acquired simultaneously by a color camera.
4. The method for wide-field color light slice microscopic imaging based on deep learning of claim 2, wherein the step 1.2 comprises the following steps:
step 1.21, according to the set z-axis scanning step length and the average OS intensity of the imaging system, selecting, within the effective-information scanning depth, a continuous odd number of full-color structured light slice microscopic imaging results I_OS, and performing in-focus information fusion using a multi-scale-decomposition-based multi-focus method for color microscopic images:
selecting, at each pixel, the discrete cosine coefficient of the light slice with the maximum variance value, forming the wide depth-of-field image D, and selecting the corresponding central wide-field data I_wide,K+1:
[Figure FDA0002391878460000031]
where j = 1, 2, …, 2K+1 denotes the j-th slice in the stack, O denotes the calculation-region block of the central pixel (m, n) of I_OS, and σ² is the variance of the corresponding discrete cosine coefficients;
step 1.22, after a consistency check, performing the inverse discrete cosine transform on D to obtain the wide depth-of-field OS result I_cos.
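The fusion rule of steps 1.21-1.22 — per region, keep the discrete cosine coefficients of the slice with the largest coefficient variance, then invert — can be sketched in numpy. The 8×8 block size, the variance-of-all-coefficients focus measure, and the omission of the consistency check are illustrative assumptions, since the claim's exact formula sits behind a figure placeholder:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix: C @ block @ C.T is the 2-D DCT of a block,
    # and C.T @ coeffs @ C is the exact inverse transform.
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def fuse_stack(stack, bs=8):
    """Per-block focus fusion of a z-stack of OS slices.

    For each bs x bs block, keep the DCT coefficients of the slice whose
    coefficients have the largest variance (a common focus measure),
    assembling the wide depth-of-field coefficient image D, then apply
    the inverse DCT block by block.
    """
    c = dct_matrix(bs)
    n_z, h, w = stack.shape
    fused = np.empty((h, w))
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            coeffs = [c @ stack[z, i:i+bs, j:j+bs] @ c.T for z in range(n_z)]
            best = int(np.argmax([np.var(d) for d in coeffs]))
            fused[i:i+bs, j:j+bs] = c.T @ coeffs[best] @ c   # inverse DCT
    return fused

# Two synthetic slices: one sharp textured, one defocused (flat, same mean).
rng = np.random.default_rng(1)
sharp = rng.random((16, 16))
flat = np.full((16, 16), sharp.mean())
fused = fuse_stack(np.stack([flat, sharp]))
# The textured slice has higher coefficient variance in every block, so the
# fusion reproduces it (exactly, since the DCT pair is orthonormal).
```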
5. The method of deep learning based wide-field color light slice microscopy imaging as claimed in claim 3, wherein: in step 1.21, the effective-information scanning depth is chosen as the depth-of-field distance of the wide-field microscopic imaging.
6. The method of deep learning based wide-field color light slice microscopy imaging as claimed in claim 5, wherein: the FC-WFM-Deep model has a depth of 17 or 20 layers.
7. The method of deep learning based wide-field color light slice microscopy imaging as claimed in claim 6, wherein: the optimization solution algorithm in step 1.324 adopts an adaptive moment estimation ADAM algorithm.
8. The method of deep learning based wide-field color light slice microscopy imaging as claimed in claim 7, wherein: k is 3.
CN202010117274.4A 2020-02-25 2020-02-25 Wide-field color light slice microscopic imaging method based on deep learning Active CN111429562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010117274.4A CN111429562B (en) 2020-02-25 2020-02-25 Wide-field color light slice microscopic imaging method based on deep learning

Publications (2)

Publication Number Publication Date
CN111429562A true CN111429562A (en) 2020-07-17
CN111429562B CN111429562B (en) 2023-04-07

Family

ID=71547933

Country Status (1)

Country Link
CN (1) CN111429562B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130279752A1 (en) * 2010-12-15 2013-10-24 Carl Zeiss Microscopy Gmbh Automated imaging of predetermined regions in series of slices
US20170052356A1 (en) * 2014-12-30 2017-02-23 Xi'an Institute of Optics and Precision Mechanics of CAS Full-Color Three-Dimensionnal Optical Sectioning Microscopic Imaging System and Method Based on Structured Illumination
CN110443882A (en) * 2019-07-05 2019-11-12 清华大学 Light field microscopic three-dimensional method for reconstructing and device based on deep learning algorithm

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG GANGPING et al.: "A novel fast automatic wide-field microscopic optical sectioning control system", Chinese Journal of Medical Physics *
LIN HAOMING et al.: "Research on wide-field fluorescence tomographic microscopy with speckle illumination", Acta Physica Sinica *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069735A (en) * 2020-09-08 2020-12-11 哈尔滨工业大学 Full-slice digital imaging high-precision automatic focusing method based on asymmetric aberration
CN112069735B (en) * 2020-09-08 2022-08-12 哈尔滨工业大学 Full-slice digital imaging high-precision automatic focusing method based on asymmetric aberration
WO2022148132A1 (en) * 2021-01-05 2022-07-14 清华大学深圳国际研究生院 Deep learning-based all-in-focus microscopic image acquiring method
CN113947565A (en) * 2021-09-03 2022-01-18 中国科学院西安光学精密机械研究所 Structured light illumination super-resolution imaging gene detection method based on deep learning
CN113947565B (en) * 2021-09-03 2023-04-18 中国科学院西安光学精密机械研究所 Structured light illumination super-resolution imaging gene detection method based on deep learning
CN114018962A (en) * 2021-11-01 2022-02-08 北京航空航天大学宁波创新研究院 Synchronous multi-spiral computed tomography method based on deep learning
CN114018962B (en) * 2021-11-01 2024-03-08 北京航空航天大学宁波创新研究院 Synchronous multi-spiral computed tomography imaging method based on deep learning
CN115841423A (en) * 2022-12-12 2023-03-24 之江实验室 Wide-field illumination fluorescence super-resolution microscopic imaging method based on deep learning

Also Published As

Publication number Publication date
CN111429562B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111429562B (en) Wide-field color light slice microscopic imaging method based on deep learning
US10656403B2 (en) Illumination microscopy systems and methods
EP3420393B1 (en) System for generating a synthetic 2d image with an enhanced depth of field of a biological sample
CN113568156B (en) Spectral microscopic imaging device and implementation method
CN110836877A (en) Light section microscopic imaging method and device based on liquid crystal zoom lens
TW201024792A (en) Wide-field super-resolution optical sectioning microscopy using a spatial light modulator
Bai et al. Full-color optically-sectioned imaging by wide-field microscopy via deep-learning
CN116183568A (en) High-fidelity reconstruction method and device for three-dimensional structured light illumination super-resolution microscopic imaging
JP2015156011A (en) Image acquisition device and method for controlling the same
US11356593B2 (en) Methods and systems for single frame autofocusing based on color- multiplexed illumination
CN113850902A (en) Light field three-dimensional reconstruction method based on light field microscope system
KR102253320B1 (en) Method for displaying 3 dimension image in integral imaging microscope system, and integral imaging microscope system implementing the same
Li et al. Rapid whole slide imaging via learning-based two-shot virtual autofocusing
Ye et al. Compressive confocal microscopy: 3D reconstruction algorithms
US11422349B2 (en) Dual processor image processing
CN112053304A (en) Rapid focusing restoration method for single shooting of full-slice digital imaging
Pan et al. In situ correction of liquid meniscus in cell culture imaging system based on parallel Fourier ptychographic microscopy (96 Eyes)
CN112070887A (en) Depth learning-based full-slice digital imaging depth of field extension method
Xu et al. Fourier Ptychographic Microscopy 10 Years on: A Review
CN117369106B (en) Multi-point confocal image scanning microscope and imaging method
Alonso 3D visualization of multicellular tumor spheroids in fluorescence microscopy
CN220289941U (en) Large-view-field high-flux high-resolution confocal imaging system based on microlens array
CN115060367B (en) Whole-slide data cube acquisition method based on microscopic hyperspectral imaging platform
WO2023061068A1 (en) Translational rapid ultraviolet-excited sectioning tomography assisted with deep learning
CN116540394A (en) Light sheet microscope single-frame self-focusing method based on structured light illumination and deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210520

Address after: 518083 the comprehensive building of Beishan industrial zone and 11 2 buildings in Yantian District, Shenzhen, Guangdong.

Applicant after: Shenzhen Huada Zhizao Technology Co.,Ltd.

Address before: 710119, No. 17, information Avenue, new industrial park, hi tech Zone, Shaanxi, Xi'an

Applicant before: XI'AN INSTITUTE OF OPTICS AND PRECISION MECHANICS, CHINESE ACADEMY OF SCIENCES

GR01 Patent grant
GR01 Patent grant