CN113963046A - A method and device for extending the depth of field of a microscope image - Google Patents


Publication number
CN113963046A
CN113963046A (application CN202111232216.7A)
Authority
CN
China
Prior art keywords
image
depth
information
depth estimation
map
Prior art date
Legal status
Granted
Application number
CN202111232216.7A
Other languages
Chinese (zh)
Other versions
CN113963046B (en)
Inventor
匡婷娜
周海洋
余飞鸿
Current Assignee
Hangzhou Touptek Photoelectric Technology Co ltd
Original Assignee
Hangzhou Touptek Photoelectric Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Touptek Photoelectric Technology Co., Ltd.
Priority to CN202111232216.7A
Publication of CN113963046A
Application granted
Publication of CN113963046B
Legal status: Active

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis; G06T 7/50 Depth or shape recovery; G06T 7/55 Depth or shape recovery from multiple images
    • G06T 5/00 Image enhancement or restoration; G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement; G06T 2207/10 Image acquisition modality; G06T 2207/10056 Microscopic image
    • G06T 2207/20 Special algorithmic details; G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20 Special algorithmic details; G06T 2207/20212 Image combination; G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract


The invention discloses a method for extending the depth of field of a microscope image: a Laplacian pyramid is built from the microscope image to obtain its high-frequency information at different scales; depth estimation is performed on the topmost Gaussian-blurred layer of the pyramid to obtain a depth estimation information map; the high-frequency information and the depth estimation information map are combined into a depth estimation pyramid, which is fused to obtain a large-resolution relative depth map; a decision map is derived from the large-resolution relative depth map by the maximum-value principle; the decision map guides extraction of the sharp regions of the microscope images, and the image sequence is then fused to obtain a depth-of-field-extended microscope image. The invention also discloses a depth-of-field extension device for microscope images, comprising a high-frequency information extraction module, a depth estimation module and a fusion module. The method and device effectively reduce the influence of the blur spots in images acquired directly by the microscope on the contrast of the final large-depth-of-field image.


Description

Depth of field extension method and device for microscope image
Technical Field
The invention relates to the field of image processing, in particular to the two technical fields of image fusion and depth-of-field extension, and specifically to a method and a device for extending the depth of field of a microscope image.
Background
Depth-of-field extension is a technique for fusing images of the same observed object, captured at different focal planes, into a single large-depth-of-field image; it is of great significance in microscopic digital imaging.
The main current method of acquiring a large-depth-of-field image is to drive the Z-axis adjustment knob of a microscope with a stepping motor, image the different layers onto the image sensor, store the images one by one after positioning, extract the sharp region of every image, and finally fuse them into one large-depth-of-field image. A microscope and method for microscopic observation of a sample to present an image with an extended depth of field or a three-dimensional image is disclosed in Chinese patent publication No. CN 111989608A.
Another method is to precisely control a Z-axis focusing mechanism with a stepping motor and scan the Z axis of an ordinary microscope in equal steps, fusing while scanning. This method also requires modifying a conventional microscope; it has value for professional applications, but the added cost makes it of little benefit to ordinary users. A microscope and a method for producing a microscopic image with an extended depth of field is disclosed in Chinese patent publication No. CN 112241065A.
In addition, conventional depth-of-field extension algorithms face many difficulties in real-time processing, such as fusion quality: local fusion can cause image-tearing artifacts, because the blur spots produced by a small-depth-of-field lens are easily misjudged as high-frequency information and fused into the final large-depth-of-field image, which lowers the contrast of the resulting image.
Disclosure of Invention
The invention aims to provide a depth-of-field extension method and device for microscope images that effectively reduce the influence of blur spots, in images acquired directly by the microscope, on the contrast of the final large-depth-of-field image.
The invention provides the following technical scheme:
a method of depth of field extension of a microscope image, the method comprising the steps of:
(1) establishing a Laplacian pyramid for the microscope image, and acquiring high-frequency information of the image on different scales;
(2) performing depth estimation on the topmost Gaussian fuzzy layer of the Laplacian pyramid to obtain a depth estimation information map;
(3) forming a depth estimation pyramid from the high-frequency information in the step (1) and the depth estimation information map in the step (2), and fusing the depth estimation pyramid to obtain a large-resolution relative depth map;
(4) acquiring a decision map from the large-resolution relative depth map in the step (3) by using a maximum value principle;
(5) guiding the extraction of the sharp regions of the microscope images with the decision map obtained in the step (4), and then performing image-sequence fusion to obtain the depth-of-field-extended microscope image.
Preferably, a microscope image sequence with large focal-plane variation is screened out for the depth-of-field extension method, balancing two premises: as many images at different focal depths as possible should participate in the extension, while computational resources are conserved as far as possible.
In the step (1), the method for acquiring the high-frequency information comprises the following steps:
establishing a Laplacian pyramid for a single input image; let the i-th layer image be G_i, let Down() denote a down-sampling operation and Up() an up-sampling operation; the i-th Laplacian pyramid layer L_i is obtained as:
Li=Gi-Up(Down(Gi))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer can be positive or negative, but they belong to the high-frequency component regardless of sign, so the absolute value of each Laplacian residual layer is taken, completing the extraction of multi-scale high-frequency information. The final expression for a residual layer of the Laplacian pyramid is:
Li=abs(Gi-Up(Down(Gi)))。
In step (1), the high-frequency information is used for the subsequent reconstruction of the large-resolution depth estimation image.
Building the Laplacian pyramid of an image consists of down-sampling the image to obtain a small-size image, up-sampling that small image, and taking the difference between the up-sampled image and the original; this yields the Laplacian difference image, i.e. the high-frequency component of the original image.
Preferably, image clarity comprises image resolution and image sharpness. Studies have shown that the sharpness of a color image depends mainly on its luminance component; the details and edges of an image carry most of its features and information and are the main regions affecting perceived sharpness. Based on this, the method converts the input image from RGB to HSV color space and builds the Laplacian difference pyramid on the V component.
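As a sketch of step (1) under stated assumptions (a simple 2×2 box average stands in for the Gaussian pyrDown/pyrUp pair, and the input is assumed to already be the single-channel V component; all function names are illustrative), the multi-scale high-frequency extraction Li = abs(Gi − Up(Down(Gi))) might look like:

```python
import numpy as np

def down(img):
    """Simple 2x down-sampling by 2x2 block averaging (stand-in for a Gaussian pyrDown)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    """Nearest-neighbour 2x up-sampling to a target shape (stand-in for pyrUp)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_high_freq(v, levels=5):
    """Return the |L_i| residual layers and the top-most blurred layer,
    following L_i = abs(G_i - Up(Down(G_i)))."""
    g = v.astype(np.float64)
    residuals = []
    for _ in range(levels - 1):
        g_small = down(g)
        residuals.append(np.abs(g - up(g_small, g.shape)))
        g = g_small
    return residuals, g  # g is the small top layer later used for depth estimation
```

In practice the V channel would come from an RGB-to-HSV conversion of the source image before calling `laplacian_high_freq`.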
In the step (2), depth estimation is performed on the topmost Gaussian-blurred layer of the Laplacian pyramid; the depth estimation information map is obtained by using the residual layers of the Laplacian pyramid as prior information, combined with natural image matting theory, specifically:
The depth information of each pixel of the image is obtained by applying a Matting Laplacian matrix to the prior information, minimizing the following loss function:

$$E(D) = D^{T} L D + \lambda\,(D - \hat{D})^{T}\,\Lambda\,(D - \hat{D})$$

where $\hat{D}$ and $D$ are the image prior information and the image depth information in vector form, and $\Lambda$ is a diagonal matrix with

$$\Lambda_{ii} = \begin{cases} 1, & \text{pixel } i \text{ lies at an object edge in the image} \\ 0, & \text{otherwise} \end{cases}$$

$L$ is the Matting Laplacian matrix, whose elements are:

$$L_{ij} = \sum_{k \mid (i,j) \in \omega_k} \left( \delta_{ij} - \frac{1}{|\omega_k|} \left( 1 + (I_i - \mu_k)^{T} \left( \Sigma_k + \frac{\varepsilon}{|\omega_k|} U_3 \right)^{-1} (I_j - \mu_k) \right) \right)$$

where $\delta_{ij}$ is the Kronecker delta, $U_3$ is the 3 × 3 identity matrix, $\mu_k$ and $\Sigma_k$ are the mean and covariance of window $\omega_k$, $\varepsilon$ is a regularization parameter, $I_i$ and $I_j$ are the pixel values at pixels $i$ and $j$, and $|\omega_k|$ is the size of window $\omega_k$. Minimizing the loss reduces to the sparse linear system

$$(L + \lambda \Lambda)\, D = \lambda \Lambda \hat{D}$$

and solving this equation yields the final depth estimation information map $D$.
In the step (2), depth estimation is performed on the topmost Gaussian-blurred layer of the Laplacian pyramid because the depth estimation algorithm is computationally expensive: estimating depth on a high-resolution image consumes a large amount of memory and solves slowly. Depth estimation is therefore performed on the small-resolution image at the top of the pyramid to obtain the depth estimation information map, which is combined with the high-frequency information at the other scales (the residual layers of the Laplacian pyramid) into a depth estimation pyramid; fusing this pyramid yields the large-resolution relative depth map. Regarding natural image matting theory: the sharpness image is used as prior information, so the extracted foreground corresponds to sharp regions and the background to blurred regions; regions with large pixel values (foreground) are relatively closer to the lens, and the background is relatively farther away. The high-frequency regions of the image are taken as its sharp regions, and since the residual layers of the Laplacian pyramid contain the high-frequency signal, they are input as the prior to obtain the depth estimation map.
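Minimizing the matting-based loss reduces to a sparse linear system of the form (L + λΛ)D = λΛD̂, the standard closed-form-matting solution. The sketch below solves such a system; note that `grid_laplacian` is only a toy 4-neighbour graph Laplacian standing in for the Matting Laplacian (which would be built from local window statistics as in the formula above), and `solve_depth`, `lam`, and `edge_mask` are illustrative names, not the patent's:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_depth(L, d_hat, edge_mask, lam=0.05):
    """Solve (L + lam*Lambda) d = lam*Lambda d_hat for the depth vector d.
    L: sparse (N x N) Laplacian; d_hat: prior depth vector (N,);
    edge_mask: 1 at edge pixels, 0 elsewhere (the diagonal of Lambda)."""
    Lam = sp.diags(edge_mask.astype(np.float64))
    A = (L + lam * Lam).tocsc()
    b = lam * (Lam @ d_hat)
    return spla.spsolve(A, b)

def grid_laplacian(h, w):
    """4-neighbour graph Laplacian of an h x w grid: a toy stand-in for
    the Matting Laplacian (same size and positive semi-definite structure)."""
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols = [], []
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows.extend([a.ravel(), b.ravel()])
        cols.extend([b.ravel(), a.ravel()])
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    W = sp.coo_matrix((np.ones_like(rows, dtype=np.float64), (rows, cols)), shape=(n, n))
    return (sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W).tocsr()
```

Because the Laplacian annihilates constants, a constant prior is returned unchanged, which is a quick sanity check on the solver.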
In the step (4), the decision map is obtained by taking, pixel by pixel, the index of the maximum value across the obtained depth-information image sequence (the pixel value of the decision map is the sequence number of the input image):

$$\mathrm{Decision}(i, j) = \arg\max_{m \in \{1, \dots, n\}} D_m(i, j)$$

where $n$ denotes the number of images in the depth-information image sequence, $i$ denotes the row, and $j$ denotes the column.
In the step (5), median filtering is applied to the decision map to remove salt-and-pepper noise, and the filtered decision map guides the fusion of the source image sequence: each pixel of the fused image takes the pixel value of the source image indicated by the corresponding index. Because the pixel-by-pixel comparison is not constrained by neighbouring pixels, the raw decision map contains noise, so denoising is needed to obtain a better result.
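Steps (4) and (5), i.e. the per-pixel argmax decision map, its median filtering, and index-guided fusion, can be sketched as follows (single-channel stacks assumed; color images would be indexed per channel; the function name is illustrative):

```python
import numpy as np
from scipy.ndimage import median_filter

def fuse_with_decision(depth_stack, image_stack, filt=3):
    """depth_stack: (n, H, W) relative depth maps, one per source image;
    image_stack: (n, H, W) source images (single channel for simplicity).
    The decision map is the per-pixel argmax over the stack, median-filtered
    to suppress salt-and-pepper noise before indexing the source images."""
    decision = np.argmax(depth_stack, axis=0)      # pixel value = source index
    decision = median_filter(decision, size=filt)  # denoise the decision map
    # gather: each fused pixel comes from the source image the decision map selects
    fused = np.take_along_axis(image_stack, decision[None, ...], axis=0)[0]
    return fused, decision
```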
The invention also provides a depth of field extension device for microscope images, comprising:
the high-frequency information extraction module is used for establishing a Laplacian pyramid for the microscope image, acquiring high-frequency information of the image on different scales and inputting the high-frequency information to the depth estimation module;
the depth estimation module is used for performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map, forming a depth estimation pyramid from the high-frequency information and the depth estimation information map, and fusing the depth estimation pyramid to obtain a large-resolution relative depth map;
the fusion module is used for acquiring the decision graph from the relative depth graph with the large resolution according to the maximum value principle; and guiding the extraction of the definition region of the microscope image by the decision graph, and then carrying out image sequence fusion to obtain the microscope image with extended depth of field.
Preferably, in the high-frequency information module: a Laplacian pyramid is established for a single input image; let the i-th layer image be G_i, let Down() denote a down-sampling operation and Up() an up-sampling operation; the i-th Laplacian pyramid layer L_i is obtained as:
Li=Gi-Up(Down(Gi))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer can be positive or negative, but they belong to the high-frequency component regardless of sign, so the absolute value of each Laplacian residual layer is taken, completing the extraction of multi-scale high-frequency information. The final expression for a residual layer of the Laplacian pyramid is:
Li=abs(Gi-Up(Down(Gi)))。
preferably, in the depth estimation module:
the depth estimation information map is obtained by performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid, using the residual layers of the Laplacian pyramid as prior information combined with natural image matting theory, specifically:
The depth information of each pixel of the image is obtained by applying a Matting Laplacian matrix to the prior information, minimizing the following loss function:

$$E(D) = D^{T} L D + \lambda\,(D - \hat{D})^{T}\,\Lambda\,(D - \hat{D})$$

where $\hat{D}$ and $D$ are the image prior information and the image depth information in vector form, and $\Lambda$ is a diagonal matrix with

$$\Lambda_{ii} = \begin{cases} 1, & \text{pixel } i \text{ lies at an object edge in the image} \\ 0, & \text{otherwise} \end{cases}$$

$L$ is the Matting Laplacian matrix, whose elements are:

$$L_{ij} = \sum_{k \mid (i,j) \in \omega_k} \left( \delta_{ij} - \frac{1}{|\omega_k|} \left( 1 + (I_i - \mu_k)^{T} \left( \Sigma_k + \frac{\varepsilon}{|\omega_k|} U_3 \right)^{-1} (I_j - \mu_k) \right) \right)$$

where $\delta_{ij}$ is the Kronecker delta, $U_3$ is the 3 × 3 identity matrix, $\mu_k$ and $\Sigma_k$ are the mean and covariance of window $\omega_k$, $\varepsilon$ is a regularization parameter, $I_i$ and $I_j$ are the pixel values at pixels $i$ and $j$, and $|\omega_k|$ is the size of window $\omega_k$. Minimizing the loss reduces to the sparse linear system

$$(L + \lambda \Lambda)\, D = \lambda \Lambda \hat{D}$$

and solving this equation yields the final depth estimation information map $D$.
Preferably, in the fusion module:
the decision map is obtained by taking, pixel by pixel, the index of the maximum value across the obtained depth-information image sequence:

$$\mathrm{Decision}(i, j) = \arg\max_{m \in \{1, \dots, n\}} D_m(i, j)$$
preferably, in the fusion module:
median filtering is applied to the decision map to remove salt-and-pepper noise, and the filtered decision map guides the fusion of the source image sequence: each pixel of the fused image takes the pixel value of the source image indicated by the corresponding index.
Compared with the prior art, the method and device for extending the depth of field of a microscope image effectively reduce the influence of blur spots, in images acquired directly by the microscope, on the contrast of the final large-depth-of-field image; they are more robust to depth-of-field variation, and the final fusion quality is not easily affected by changes in the degree or direction of that variation.
Drawings
Fig. 1 is an overall flow of a primary fusion process of the depth extension method in the embodiment.
Fig. 2 is an example of multi-scale high-frequency information of an image obtained in the processing process in the embodiment.
Fig. 3 is an example of one-time small-resolution depth estimation in the processing in the embodiment.
Fig. 4 is an example of one-time large-resolution depth estimation in the processing in the embodiment.
Fig. 5 is an example of a decision map and an ultra-depth image obtained in the processing in the embodiment.
Detailed Description
The invention can serve as companion software for a microscope camera: during depth-of-field extension, the small-depth-of-field microscope images are passed to the software's input interface, the object distance between the microscope stage and the objective lens is adjusted manually to acquire images at different focal planes, and clicking the fusion operation generates the depth-of-field-extended image.
Fig. 1 is an overall flow of a depth-of-field extension one-time fusion process, including:
s1, establishing a Laplacian pyramid for the microscope image in the high-frequency information module, and acquiring high-frequency information of the image on different scales;
s2, performing depth estimation on the topmost Gaussian blur layer of the Laplacian pyramid in a depth estimation module to obtain a depth estimation information map; forming a depth estimation pyramid by the high-frequency information and the depth estimation information map in the S1, and fusing the depth estimation pyramid to obtain a relative depth map with high resolution;
s3, obtaining a decision graph from the high-resolution relative depth graph in a fusion module according to the maximum value principle; and guiding the extraction of the definition region of the microscope image by the decision graph, and then carrying out image sequence fusion to obtain the microscope image with extended depth of field.
Specifically, the method comprises the following steps:
in S1: fig. 2 shows a specific example of obtaining multi-scale high-frequency information of an image, and the resolution of the processed image is 3072 × 2048. Converting an image from an RGB space to an HSV space, extracting a V component image, performing depth estimation and obtaining a decision diagram, and comprising the following steps of:
1. a five-layer image pyramid is established for the image; the smallest layer measures 192 × 128;
2. let the i-th layer image be G_i, let Down() denote a down-sampling operation and Up() an up-sampling operation; the i-th Laplacian pyramid layer L_i is obtained as:
Li=Gi-Up(Down(Gi))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer can be positive or negative, but they belong to the high-frequency component regardless of sign, so the absolute value of each Laplacian residual layer is taken, completing the extraction of multi-scale high-frequency information. The mathematical expression of the final Laplacian residual layer is:
Li=abs(Gi-Up(Down(Gi)))
3. the pixel values of the pyramid residual layers are obtained by taking absolute values on the Laplacian pyramid and contain all high-frequency information at the current scale; the smallest image is the one from which the depth information image is obtained by applying natural image matting theory.
In S2: fig. 3 is an example of depth estimation performed on the small-scale image in the depth estimation module (the depth estimation image shown is enlarged for display). The specific steps are:
1. the high-frequency information of the fifth pyramid layer is acquired, and the Matting Laplacian is then applied to the fifth-layer original Gaussian image to obtain the depth information map;
2. the depth information of each pixel is obtained by applying the Matting Laplacian to the high-frequency prior information, minimizing the following loss function:

$$E(D) = D^{T} L D + \lambda\,(D - \hat{D})^{T}\,\Lambda\,(D - \hat{D})$$

where $\hat{D}$ and $D$ are the image prior information and the image depth information in vector form, and $\Lambda$ is a diagonal matrix with

$$\Lambda_{ii} = \begin{cases} 1, & \text{pixel } i \text{ lies at an object edge in the image} \\ 0, & \text{otherwise} \end{cases}$$

$L$ is the Matting Laplacian matrix, whose elements are:

$$L_{ij} = \sum_{k \mid (i,j) \in \omega_k} \left( \delta_{ij} - \frac{1}{|\omega_k|} \left( 1 + (I_i - \mu_k)^{T} \left( \Sigma_k + \frac{\varepsilon}{|\omega_k|} U_3 \right)^{-1} (I_j - \mu_k) \right) \right)$$

where $\delta_{ij}$ is the Kronecker delta, $U_3$ is the 3 × 3 identity matrix, $\mu_k$ and $\Sigma_k$ are the mean and covariance of window $\omega_k$, $\varepsilon$ is a regularization parameter, $I_i$ and $I_j$ are the pixel values at pixels $i$ and $j$, and $|\omega_k|$ is the size of window $\omega_k$. Minimizing the loss reduces to the sparse linear system

$$(L + \lambda \Lambda)\, D = \lambda \Lambda \hat{D}$$

and solving this equation gives the final depth information map $D$;
Fig. 4 is an example of large-scale depth estimation: the Laplacian residual layers and the depth information layer are combined into a fusion pyramid, which is fused to obtain the large-resolution relative depth map.
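The patent does not spell out the exact rule for fusing the depth estimation pyramid into the large-resolution relative depth map. One minimal, assumed reading is to up-sample the small depth map level by level until it reaches the resolution of the largest residual layer; in this sketch the residual layers fix only the per-level target shapes, while a guided scheme could additionally use their values as confidence (all names are illustrative):

```python
import numpy as np

def up2(img, shape):
    """Nearest-neighbour 2x up-sampling to a target shape (stand-in for pyrUp)."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def fuse_depth_pyramid(d_small, residual_shapes):
    """Up-sample the small-resolution depth map through the pyramid levels.
    residual_shapes: shapes of the |L_i| layers, ordered largest -> smallest.
    A sketch of one plausible fusion rule, not the patented one."""
    d = d_small
    for shp in reversed(residual_shapes):  # start from the smallest level
        d = up2(d, shp)
    return d
```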
In S3: FIG. 5 shows the decision map, obtained by taking, pixel by pixel, the index of the maximum value across the obtained depth-information image sequence:

$$\mathrm{Decision}(i, j) = \arg\max_{m \in \{1, \dots, n\}} D_m(i, j)$$

where $n$ denotes the number of images in the depth-information image sequence, $i$ denotes the row, and $j$ denotes the column.
Median filtering is applied to the directly obtained decision map to remove salt-and-pepper noise, and the filtered decision map guides the fusion of the source image sequence: each pixel of the fused image takes the pixel value of the source image indicated by the corresponding index.
Completing the above steps constitutes one full depth-of-field extension process.

Claims (10)

1. A method of depth of field extension of a microscope image, the method comprising the steps of:
(1) establishing a Laplacian pyramid for the microscope image, and acquiring high-frequency information of the image on different scales;
(2) performing depth estimation on the topmost Gaussian fuzzy layer of the Laplacian pyramid to obtain a depth estimation information map;
(3) forming a depth estimation pyramid from the high-frequency information in the step (1) and the depth estimation information map in the step (2), and fusing the depth estimation pyramid to obtain a large-resolution relative depth map;
(4) acquiring a decision map from the large-resolution relative depth map in the step (3) by using a maximum value principle;
(5) guiding the extraction of the sharp regions of the microscope images with the decision map obtained in the step (4), and then performing image-sequence fusion to obtain the depth-of-field-extended microscope image.
2. The method for extending the depth of field of a microscope image according to claim 1, wherein in the step (1), the high frequency information is obtained by:
establishing a Laplacian pyramid for a single input image; let the i-th layer image be G_i, let Down() denote a down-sampling operation and Up() an up-sampling operation; the i-th Laplacian pyramid layer L_i is obtained as:
Li=Gi-Up(Down(Gi))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer can be positive or negative, but they belong to the high-frequency component regardless of sign, so the absolute value of each Laplacian residual layer is taken, completing the extraction of multi-scale high-frequency information. The final expression for a residual layer of the Laplacian pyramid is:
Li=abs(Gi-Up(Down(Gi)))。
3. The method for extending the depth of field of a microscope image according to claim 2, wherein in the step (2), depth estimation is performed on the topmost Gaussian-blurred layer of the Laplacian pyramid, and the depth estimation information map is obtained by using the residual layers of the Laplacian pyramid as prior information combined with natural image matting theory, specifically:
the depth information of each pixel of the image is obtained by applying a Matting Laplacian matrix to the prior information, minimizing the following loss function:

$$E(D) = D^{T} L D + \lambda\,(D - \hat{D})^{T}\,\Lambda\,(D - \hat{D})$$

where $\hat{D}$ and $D$ are the image prior information and the image depth information in vector form, and $\Lambda$ is a diagonal matrix with

$$\Lambda_{ii} = \begin{cases} 1, & \text{pixel } i \text{ lies at an object edge in the image} \\ 0, & \text{otherwise} \end{cases}$$

$L$ is the Matting Laplacian matrix, whose elements are:

$$L_{ij} = \sum_{k \mid (i,j) \in \omega_k} \left( \delta_{ij} - \frac{1}{|\omega_k|} \left( 1 + (I_i - \mu_k)^{T} \left( \Sigma_k + \frac{\varepsilon}{|\omega_k|} U_3 \right)^{-1} (I_j - \mu_k) \right) \right)$$

where $\delta_{ij}$ is the Kronecker delta, $U_3$ is the 3 × 3 identity matrix, $\mu_k$ and $\Sigma_k$ are the mean and covariance of window $\omega_k$, $\varepsilon$ is a regularization parameter, $I_i$ and $I_j$ are the pixel values at pixels $i$ and $j$, and $|\omega_k|$ is the size of window $\omega_k$; minimizing the loss reduces to the sparse linear system

$$(L + \lambda \Lambda)\, D = \lambda \Lambda \hat{D}$$

and solving this equation yields the final depth estimation information map $D$.
4. The method for extending the depth of field of a microscope image according to claim 1, wherein in the step (4), the decision map is obtained by taking, pixel by pixel, the index of the maximum value across the obtained depth-information image sequence:

$$\mathrm{Decision}(i, j) = \arg\max_{m \in \{1, \dots, n\}} D_m(i, j)$$

where $n$ denotes the number of images in the depth-information image sequence, $i$ denotes the row, and $j$ denotes the column.
5. The method according to claim 1, wherein in the step (5), median filtering is applied to the decision map to remove salt-and-pepper noise, and the filtered decision map guides the fusion of the source image sequence, each pixel of the fused image taking the pixel value of the source image indicated by the corresponding index.
6. A depth of field extension apparatus for a microscope image, the apparatus comprising:
the high-frequency information extraction module is used for establishing a Laplacian pyramid for the microscope image, acquiring high-frequency information of the image on different scales and inputting the high-frequency information to the depth estimation module;
the depth estimation module is used for performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map, forming a depth estimation pyramid from the high-frequency information and the depth estimation information map, and fusing the depth estimation pyramid to obtain a large-resolution relative depth map;
the fusion module is used for acquiring the decision graph from the relative depth graph with the large resolution according to the maximum value principle; and guiding the extraction of the definition region of the microscope image by the decision graph, and then carrying out image sequence fusion to obtain the microscope image with extended depth of field.
7. The apparatus according to claim 6, wherein, in the high-frequency information module: a Laplacian pyramid is established for a single input image; let the i-th layer image be G_i, let Down() denote a down-sampling operation and Up() an up-sampling operation; the i-th Laplacian pyramid layer L_i is obtained as:
Li=Gi-Up(Down(Gi))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer can be positive or negative, but they belong to the high-frequency component regardless of sign, so the absolute value of each Laplacian residual layer is taken, completing the extraction of multi-scale high-frequency information. The final expression for a residual layer of the Laplacian pyramid is:
Li=abs(Gi-Up(Down(Gi)))。
8. the apparatus of claim 6, wherein, in the depth estimation module:
the depth estimation information map is obtained by performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid, using the residual layers of the Laplacian pyramid as prior information combined with natural image matting theory, specifically:
obtaining depth information of each pixel of the image by applying a Matting Laplacian matrix to the prior information, wherein the depth information is obtained by minimizing the following loss function:
E(D) = D^T L D + λ (D − D̂)^T Λ (D − D̂)

where D̂ and D are the image prior information and the image depth information in vector form; Λ is a diagonal matrix with Λ_ii = 1 when pixel i lies at the edge position of an object in the image and Λ_ii = 0 otherwise; λ is a regularization weight; L is the Matting Laplacian matrix, whose elements are:

L(i, j) = Σ_{k | (i,j)∈ω_k} [ δ_ij − (1/|ω_k|) (1 + (I_i − μ_k)^T (Σ_k + (ε/|ω_k|) U_3)^{−1} (I_j − μ_k)) ]
where δ_ij is the Kronecker delta function, U_3 is the 3×3 identity matrix, μ_k and Σ_k are the mean and covariance matrix of window ω_k, ε is a regularization parameter, I_i and I_j are the pixel values of pixel points i and j, and |ω_k| is the size of window ω_k;
minimizing the loss function yields the linear system

(L + λΛ) D = λΛ D̂

and solving this equation yields the final depth estimation information map D.
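A toy numerical sketch of this step, under stated simplifying assumptions: the image is grayscale, so the colour covariance Σ_k and the identity U_3 in the claim reduce to scalars; a dense solver is used instead of the sparse solver a practical implementation would need; and the function names `matting_laplacian_gray` and `solve_depth` are illustrative, not from the patent.

```python
import numpy as np

def matting_laplacian_gray(I, eps=1e-5, r=1):
    """Matting Laplacian for a grayscale image with (2r+1)x(2r+1) windows;
    scalar simplification of the 3-channel formula in the claim."""
    h, w = I.shape
    L = np.zeros((h * w, h * w))
    win = (2 * r + 1) ** 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            idx = [yy * w + xx for yy in range(y - r, y + r + 1)
                               for xx in range(x - r, x + r + 1)]
            vals = I[y - r:y + r + 1, x - r:x + r + 1].ravel()
            mu, var = vals.mean(), vals.var()
            for a, i in enumerate(idx):
                for b, j in enumerate(idx):
                    delta = 1.0 if i == j else 0.0
                    # delta_ij - (1/|w|)(1 + (I_i - mu)(I_j - mu) / (var + eps/|w|))
                    L[i, j] += delta - (1.0 + (vals[a] - mu) * (vals[b] - mu)
                                        / (var + eps / win)) / win
    return L

def solve_depth(L, prior, edge_mask, lam=1e4):
    """Solve (L + lam*Lambda) D = lam*Lambda*D_hat from the minimised loss."""
    Lam = np.diag(edge_mask.ravel().astype(float))
    A = L + lam * Lam
    b = lam * (Lam @ prior.ravel())
    return np.linalg.solve(A, b).reshape(prior.shape)
```

By construction each row of L sums to zero, so the Matting Laplacian only propagates relative depth; the λΛ data term anchors the solution to the prior at edge pixels.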
9. The apparatus according to claim 6, wherein in the fusion module:
the method for obtaining the decision map comprises: over the obtained depth-information image sequence, taking pixel by pixel the index of the largest value and returning it as the decision map:
DM(x, y) = argmax_i D_i(x, y)
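The pixel-wise maximum-index rule above is a one-liner; a sketch assuming the per-image depth maps are stacked along the first axis, with a hypothetical function name:

```python
import numpy as np

def decision_map(depth_stack):
    # DM(x, y) = argmax_i D_i(x, y): index of the largest depth value per pixel
    return np.argmax(depth_stack, axis=0)
```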
10. the apparatus according to claim 6, wherein in the fusion module:
median filtering is applied to the decision map to remove salt-and-pepper noise, and the filtered decision map is used to guide the fusion of the source image sequence, wherein each pixel value of the fused image is the pixel value at that pixel point in the source image of the corresponding index.
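The fusion step of claim 10 can be sketched as follows, with hypothetical helper names: a 3×3 median filter stands in for the claim's median filtering, and `np.take_along_axis` performs the index-guided pixel selection.

```python
import numpy as np

def median_filter3(dm):
    """3x3 median filter on the decision map to remove salt-and-pepper indices."""
    padded = np.pad(dm, 1, mode='edge')
    out = np.empty_like(dm)
    for y in range(dm.shape[0]):
        for x in range(dm.shape[1]):
            out[y, x] = np.median(padded[y:y + 3, x:x + 3])
    return out

def fuse(images, dm):
    """Each fused pixel is taken from the source image selected by the decision map."""
    stack = np.stack(images)                 # (N, H, W) or (N, H, W, C)
    idx = dm[None, ..., None] if stack.ndim == 4 else dm[None, ...]
    return np.take_along_axis(stack, idx, axis=0)[0]
```

Filtering the index map rather than the images keeps each output pixel an unblended copy of a source pixel, which preserves microscope detail.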
CN202111232216.7A 2021-10-22 2021-10-22 A method and device for extending the depth of field of microscope images Active CN113963046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111232216.7A CN113963046B (en) 2021-10-22 2021-10-22 A method and device for extending the depth of field of microscope images

Publications (2)

Publication Number Publication Date
CN113963046A true CN113963046A (en) 2022-01-21
CN113963046B CN113963046B (en) 2024-12-13

Family

ID=79466151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111232216.7A Active CN113963046B (en) 2021-10-22 2021-10-22 A method and device for extending the depth of field of microscope images

Country Status (1)

Country Link
CN (1) CN113963046B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090169102A1 (en) * 2007-11-29 2009-07-02 Chao Zhang Multi-scale multi-camera adaptive fusion with contrast normalization
CN103499879A (en) * 2013-10-16 2014-01-08 北京航空航天大学 Method of acquiring microscopic image with super field depth
CN108020509A (en) * 2017-12-12 2018-05-11 佛山科学技术学院 The method and its device of a kind of optical projection tomography
CN110176060A (en) * 2019-04-28 2019-08-27 华中科技大学 Dense three-dimensional rebuilding method and system based on the guidance of multiple dimensioned Geometrical consistency
CN113096174A (en) * 2021-03-24 2021-07-09 苏州中科广视文化科技有限公司 Multi-plane scanning-based multi-view scene reconstruction method for end-to-end network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Peng Yanjun; Wang Jinjin; Wang Yuanhong: "Improved image fusion method based on the Laplacian pyramid", Software Guide (软件导刊), no. 01, 31 January 2016 (2016-01-31) *
Zhao Didi; Ji Yiqun: "Multi-focus image fusion combining regional variance and point sharpness", Chinese Journal of Liquid Crystals and Displays (液晶与显示), no. 03, 15 March 2019 (2019-03-15) *

Also Published As

Publication number Publication date
CN113963046B (en) 2024-12-13

Similar Documents

Publication Publication Date Title
Engin et al. Cycle-dehaze: Enhanced cyclegan for single image dehazing
Jiang et al. Learning to see moving objects in the dark
Prabhakar et al. Towards practical and efficient high-resolution HDR deghosting with CNN
D’Andrès et al. Non-parametric blur map regression for depth of field extension
US9897792B2 (en) Method and system for extended depth of field calculation for microscopic images
CN114372931A (en) Target object blurring method and device, storage medium and electronic equipment
Anwar et al. Deblur and deep depth from single defocus image
CN113052755A (en) High-resolution image intelligent matting method based on deep learning
EP2926558B1 (en) A method and system for extended depth of field calculation for microscopic images
Liu et al. High-speed video generation with an event camera
CN111353955A (en) Image processing method, device, equipment and storage medium
CN112419191A (en) Image motion blur removing method based on convolution neural network
Cao et al. Digital multi-focusing from a single photograph taken with an uncalibrated conventional camera
CN112330613A (en) Method and system for evaluating quality of cytopathology digital image
Liang et al. Scale-invariant structure saliency selection for fast image fusion
CN110852947B (en) A super-resolution method for infrared images based on edge sharpening
Deng et al. Selective kernel and motion-emphasized loss based attention-guided network for HDR imaging of dynamic scenes
Ayub et al. CNN and Gaussian Pyramid-Based Approach For Enhance Multi-Focus Image Fusion
Farhood et al. 3D point cloud reconstruction from a single 4D light field image
CN108364273B (en) A Method of Multi-Focus Image Fusion in Spatial Domain
CN113963046B (en) A method and device for extending the depth of field of microscope images
Čadík et al. Automated outdoor depth-map generation and alignment
Chen et al. Infrared and visible image fusion with deep wavelet-dense network
Jin et al. Boosting single image super-resolution learnt from implicit multi-image prior
Xu et al. Multi-exposure image fusion for dynamic scenes with ghosting removal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant