Disclosure of Invention
The invention aims to provide a depth-of-field extension method and a depth-of-field extension device for microscope images, which can effectively reduce the influence of the blur spots in images directly acquired by a microscope on the contrast of the final large-depth-of-field image.
The invention provides the following technical solution:
a method of depth of field extension of a microscope image, the method comprising the steps of:
(1) establishing a Laplacian pyramid for the microscope image and acquiring high-frequency information of the image at different scales;
(2) performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map;
(3) forming a depth estimation pyramid from the high-frequency information of step (1) and the depth estimation information map of step (2), and fusing the depth estimation pyramid to obtain a high-resolution relative depth map;
(4) acquiring a decision map from the high-resolution relative depth map of step (3) by using the maximum value principle;
(5) using the decision map obtained in step (4) to guide the extraction of the sharp regions of the microscope images, and then fusing the image sequence to obtain a microscope image with extended depth of field.
Preferably, in the invention, a microscope image sequence with large focal-plane variation is selected for the depth-of-field extension method, on the basis of two premises: that as many images at different focal-plane depths as possible participate in the extension, and that computational resources are saved as far as possible.
In step (1), the high-frequency information is acquired as follows:
A Laplacian pyramid is established for a single input image. Let the i-th layer image be G_i, let Down() be a down-sampling operation and Up() be an up-sampling operation; the i-th Laplacian pyramid image L_i is obtained as:
L_i = G_i - Up(Down(G_i))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer may be positive or negative, but regardless of sign they belong to the high-frequency component, so the absolute value of each Laplacian residual layer image is taken, completing the extraction of the multi-scale high-frequency information. The final expression for the Laplacian residual layers is:
L_i = abs(G_i - Up(Down(G_i))).
In step (1), the high-frequency information is used for the subsequent reconstruction of the high-resolution depth estimation image.
To establish the Laplacian pyramid of an image, the image is down-sampled to obtain a small-size image, the small-size image is then up-sampled, and the difference between the up-sampled image and the original image gives the Laplacian difference image, i.e., the high-frequency component of the original image.
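By way of illustration only, the following minimal Python/OpenCV sketch builds the absolute-valued Laplacian residual layers described above. The use of cv2.pyrDown/cv2.pyrUp as the Down()/Up() operators, the five-level default, and the function name are assumptions of this sketch, not part of the claimed method.

```python
import cv2
import numpy as np

def abs_laplacian_pyramid(image, levels=5):
    """Return (gaussian_levels, abs_residual_levels) for a single-channel image.

    Sketch only: L_i = abs(G_i - Up(Down(G_i))), with cv2.pyrDown/cv2.pyrUp
    standing in for the Down()/Up() operators of the text.
    """
    gaussian = [image.astype(np.float32)]
    for _ in range(levels - 1):
        gaussian.append(cv2.pyrDown(gaussian[-1]))

    residuals = []
    for g in gaussian[:-1]:
        up = cv2.pyrUp(cv2.pyrDown(g), dstsize=(g.shape[1], g.shape[0]))
        residuals.append(np.abs(g - up))      # |G_i - Up(Down(G_i))|
    return gaussian, residuals
```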
Preferably, image definition covers both image resolution and image sharpness. Studies have shown that the sharpness of a color image depends mainly on its luminance component. The detail and edge regions of an image contain most of its features and information and are also the main regions that influence image sharpness. On this basis, the method converts the input image from RGB color space to HSV color space and establishes the Laplacian difference pyramid from the V component.
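A short sketch of the color-space conversion described above; it assumes the image is loaded with OpenCV in its default BGR channel order, and the file name is a placeholder.

```python
import cv2

bgr = cv2.imread("microscope_frame.png")      # placeholder file name
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
v_channel = hsv[:, :, 2]                      # V (brightness) component
# v_channel is the single-channel image from which the Laplacian difference
# pyramid of step (1) is built.
```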
In step (2), depth estimation is performed on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map as follows: a residual layer of the Laplacian pyramid is used as prior information and combined with natural image matting theory, specifically:
The depth information of each pixel of the image is obtained by applying a Matting Laplacian matrix to the prior information, i.e., by minimizing the following loss function:
E(D) = D^T L D + (D - D̂)^T Λ (D - D̂)
where D̂ and D are the image prior information and the image depth information in vector form, Λ is a diagonal matrix whose diagonal element for pixel i is nonzero when pixel i lies at the edge of an object in the image (and zero otherwise), and L is the Matting Laplacian matrix, whose elements are:
L(i, j) = ∑_{k: (i, j) ∈ ω_k} [ δ_ij − (1/|ω_k|)(1 + (I_i − μ_k)^T (Σ_k + (ε/|ω_k|) U_3)^(-1) (I_j − μ_k)) ]
where δ_ij is the Kronecker delta function, U_3 is the 3×3 identity matrix, μ_k and Σ_k are the mean and covariance matrix of the pixel values in window ω_k, ε is a regularization parameter, I_i and I_j are the pixel values of pixels i and j, and |ω_k| is the size of window ω_k.
Minimizing this loss function leads to the sparse linear system (L + Λ) D = Λ D̂; solving it yields the final depth estimation information map D.
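To make the minimization concrete, the following sketch builds the Matting Laplacian directly from the element formula above and solves the resulting sparse linear system (L + Λ)D = ΛD̂ with SciPy. The 3×3 window, the values of ε and of the edge weight, and the way the edge mask is derived from the prior are all assumptions of this sketch; the per-window loop is slow but acceptable for the small top-level pyramid image.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def matting_laplacian(img, eps=1e-7, radius=1):
    """Sparse Matting Laplacian of an HxWx3 float image in [0, 1] (direct
    per-window implementation of the element formula; sketch only)."""
    h, w, c = img.shape
    win = (2 * radius + 1) ** 2
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            wi = idx[y - radius:y + radius + 1, x - radius:x + radius + 1].ravel()
            I = img[y - radius:y + radius + 1, x - radius:x + radius + 1].reshape(win, c)
            mu = I.mean(axis=0)
            cov = I.T @ I / win - np.outer(mu, mu)               # Sigma_k
            inv = np.linalg.inv(cov + (eps / win) * np.eye(c))   # (Sigma_k + eps/|w_k| U_3)^-1
            X = I - mu
            G = (1.0 + X @ inv @ X.T) / win
            rows.append(np.repeat(wi, win))
            cols.append(np.tile(wi, win))
            vals.append((np.eye(win) - G).ravel())               # delta_ij - (...), per window
    L = sparse.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w))
    return L.tocsr()                                             # duplicates are summed here

def estimate_depth(top_color, prior, edge_weight=0.1):
    """Solve (L + Lambda) D = Lambda * D_hat for the depth map D (sketch).

    top_color : HxWx3 float image in [0, 1] (top Gaussian pyramid level)
    prior     : HxW prior D_hat, e.g. the normalised |Laplacian| residual
    The edge mask and its weight are illustrative assumptions.
    """
    h, w = prior.shape
    L = matting_laplacian(top_color)
    edge = (prior > prior.mean()).ravel().astype(np.float64)     # assumed edge indicator
    Lam = sparse.diags(edge_weight * edge)
    D = spsolve((L + Lam).tocsc(), Lam @ prior.ravel().astype(np.float64))
    return D.reshape(h, w)
```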
In step (2), depth estimation is performed on the topmost Gaussian-blurred layer of the Laplacian pyramid because the depth estimation algorithm is computationally expensive: performing depth estimation on a high-resolution image consumes a large amount of memory and is slow to solve. Depth estimation is therefore performed on the low-resolution image at the top of the pyramid to obtain a depth estimation information map; this map is combined with the previously obtained high-frequency information at different scales (the residual layers of the Laplacian pyramid) to form a depth estimation pyramid, which is fused to obtain a high-resolution relative depth map. Regarding natural image matting theory: a sharpness map of the image is used as prior information and combined with matting, so that the recovered foreground is the sharp region and the background is the blurred region; regions with large pixel values (the foreground) are relatively closer to the lens, while the background is relatively farther away. The high-frequency region of the image is taken as the sharp region to obtain the sharpness map; since the residual layers of the Laplacian pyramid contain the high-frequency signal, they are used as the prior input to obtain the depth estimation map.
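The text does not spell out the exact fusion rule for the depth estimation pyramid, so the following sketch shows one plausible reading: the small depth map sits at the top of the pyramid and is propagated back to full resolution as in a Laplacian-pyramid reconstruction, adding the |residual| layer at each scale. The additive rule (and any normalisation one might add) is an assumption.

```python
import cv2

def fuse_depth_pyramid(depth_top, abs_residuals):
    """Collapse a depth-estimation pyramid into a high-resolution relative depth map.

    depth_top     : depth map estimated at the smallest (top) level
    abs_residuals : |Laplacian| layers ordered from finest (level 0) to coarsest
    Sketch under the assumption that each residual layer is added after up-sampling.
    """
    d = depth_top
    for lap in reversed(abs_residuals):
        d = cv2.pyrUp(d, dstsize=(lap.shape[1], lap.shape[0]))
        d = d + lap
    return d
```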
In step (4), the decision map is obtained as follows: for the obtained sequence of depth information images, the maximum is taken pixel by pixel and the index of the maximum is returned (so the pixel value of the decision map is the sequence number of the corresponding input image):
M(i, j) = argmax_{k ∈ {1, …, n}} D_k(i, j)
where n denotes the number of images in the depth information sequence, i denotes the row index and j the column index in the depth information images.
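A minimal sketch of the pixel-wise maximum/index rule, assuming the n per-image relative depth maps have already been computed and have the same size.

```python
import numpy as np

def decision_map(depth_maps):
    """depth_maps: list of n HxW relative depth maps, one per input image."""
    stack = np.stack(depth_maps, axis=0)   # shape (n, H, W)
    return np.argmax(stack, axis=0)        # M(i, j) = argmax_k D_k(i, j)
```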
In step (5), because the pixel-by-pixel comparison is not constrained by neighboring pixels, the resulting decision map contains noise, and denoising is needed for a better result. Median filtering is applied to the decision map to remove salt-and-pepper noise, and the filtered decision map is then used to guide the fusion of the source image sequence: each pixel of the fused image takes the pixel value of the corresponding pixel in the image indicated by the index.
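A sketch of the denoising and index-guided fusion of step (5). The 5×5 median kernel is an assumed choice, and the sequence is assumed to contain fewer than 256 aligned 8-bit images so the decision map fits in uint8 for cv2.medianBlur.

```python
import cv2
import numpy as np

def fuse_by_decision(images, decision, ksize=5):
    """images: list of n aligned HxWx3 uint8 images; decision: HxW image indices."""
    dec = cv2.medianBlur(decision.astype(np.uint8), ksize)   # remove salt-and-pepper noise
    stack = np.stack(images, axis=0)                         # (n, H, W, 3)
    rows = np.arange(dec.shape[0])[:, None]
    cols = np.arange(dec.shape[1])[None, :]
    return stack[dec, rows, cols]      # each pixel comes from the image its index points to
```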
The invention also provides a depth of field extension device for microscope images, comprising:
the high-frequency information extraction module is used for establishing a Laplacian pyramid for the microscope image, acquiring high-frequency information of the image at different scales, and inputting the high-frequency information to the depth estimation module;
the depth estimation module is used for performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map, forming a depth estimation pyramid from the high-frequency information and the depth estimation information map, and fusing the depth estimation pyramid to obtain a high-resolution relative depth map;
the fusion module is used for acquiring the decision map from the high-resolution relative depth map according to the maximum value principle, guiding the extraction of the sharp regions of the microscope images with the decision map, and then fusing the image sequence to obtain a microscope image with extended depth of field.
Preferably, in the high-frequency information extraction module: a Laplacian pyramid is established for a single input image. Let the i-th layer image be G_i, let Down() be a down-sampling operation and Up() be an up-sampling operation; the i-th Laplacian pyramid image L_i is obtained as:
L_i = G_i - Up(Down(G_i))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of a residual layer may be positive or negative, but regardless of sign they belong to the high-frequency component, so the absolute value of each Laplacian residual layer image is taken, completing the extraction of the multi-scale high-frequency information. The final expression for the Laplacian residual layers is:
L_i = abs(G_i - Up(Down(G_i))).
Preferably, in the depth estimation module:
Depth estimation is performed on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map as follows: a residual layer of the Laplacian pyramid is used as prior information and combined with natural image matting theory, specifically:
The depth information of each pixel of the image is obtained by applying a Matting Laplacian matrix to the prior information, i.e., by minimizing the following loss function:
E(D) = D^T L D + (D - D̂)^T Λ (D - D̂)
where D̂ and D are the image prior information and the image depth information in vector form, Λ is a diagonal matrix whose diagonal element for pixel i is nonzero when pixel i lies at the edge of an object in the image (and zero otherwise), and L is the Matting Laplacian matrix, whose elements are:
L(i, j) = ∑_{k: (i, j) ∈ ω_k} [ δ_ij − (1/|ω_k|)(1 + (I_i − μ_k)^T (Σ_k + (ε/|ω_k|) U_3)^(-1) (I_j − μ_k)) ]
where δ_ij is the Kronecker delta function, U_3 is the 3×3 identity matrix, μ_k and Σ_k are the mean and covariance matrix of the pixel values in window ω_k, ε is a regularization parameter, I_i and I_j are the pixel values of pixels i and j, and |ω_k| is the size of window ω_k.
Minimizing this loss function leads to the sparse linear system (L + Λ) D = Λ D̂; solving it yields the final depth estimation information map D.
Preferably, in the fusion module:
The decision map is obtained as follows: for the obtained sequence of depth information images, the maximum is taken pixel by pixel and the index of the maximum is returned as the decision map:
M(i, j) = argmax_{k ∈ {1, …, n}} D_k(i, j)
where n is the number of images in the depth information sequence and (i, j) indexes the pixel.
Preferably, in the fusion module:
Median filtering is applied to the decision map to remove salt-and-pepper noise, and the filtered decision map is used to guide the fusion of the source image sequence: each pixel of the fused image takes the pixel value of the corresponding pixel in the image indicated by the index.
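The three modules of the device can be wired together roughly as follows. The class and the lower-case helpers (abs_laplacian_pyramid, estimate_depth, fuse_depth_pyramid, decision_map, fuse_by_decision) refer to the illustrative sketches given earlier in this disclosure and are hypothetical names, not part of the claimed device; the normalisation of the prior is likewise an assumption.

```python
import cv2
import numpy as np

class DepthOfFieldExtender:
    """Illustrative wiring of the three device modules (sketch only)."""

    def __init__(self, levels=5):
        self.levels = levels

    def high_frequency_module(self, bgr):
        """Module 1: V-channel Laplacian pyramid and |residual| layers."""
        v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2]
        return abs_laplacian_pyramid(v, self.levels)             # hypothetical helper

    def depth_estimation_module(self, bgr, residuals):
        """Module 2: matting-based depth on the top level, then pyramid fusion."""
        top = bgr.astype(np.float32) / 255.0
        for _ in range(self.levels - 1):
            top = cv2.pyrDown(top)                               # colour image at top-level size
        prior = cv2.pyrDown(residuals[-1])                       # |L| prior at the same size
        prior = prior / (prior.max() + 1e-6)                     # assumed normalisation to [0, 1]
        depth_top = estimate_depth(top, prior)                   # hypothetical helper (matting solve)
        return fuse_depth_pyramid(depth_top, residuals)          # hypothetical helper

    def fusion_module(self, images, depth_maps):
        """Module 3: decision map, median filtering, index-guided fusion."""
        return fuse_by_decision(images, decision_map(depth_maps))  # hypothetical helpers

    def extend(self, images):
        depth_maps = []
        for bgr in images:
            _, residuals = self.high_frequency_module(bgr)
            depth_maps.append(self.depth_estimation_module(bgr, residuals))
        return self.fusion_module(images, depth_maps)
```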
Compared with the prior art, the method and device for extending the depth of field of microscope images can effectively reduce the influence of the blur spots in images directly acquired by the microscope on the contrast of the final large-depth-of-field image, are more robust to depth-of-field variation, and are less likely to have the final fusion quality affected by changes in the degree or direction of that variation.
Detailed Description
The invention can be used as supporting software for a microscope camera. To extend the depth of field, the small-depth-of-field images from the microscope are transmitted to the input interface of the software, the object distance between the microscope stage and the objective lens is adjusted manually to acquire images at different focal planes, and the extended-depth-of-field image is generated by clicking the fusion operation.
Fig. 1 shows the overall flow of one depth-of-field extension fusion pass, comprising:
S1, in the high-frequency information module, establishing a Laplacian pyramid for the microscope image and acquiring high-frequency information of the image at different scales;
S2, in the depth estimation module, performing depth estimation on the topmost Gaussian-blurred layer of the Laplacian pyramid to obtain a depth estimation information map; forming a depth estimation pyramid from the high-frequency information of S1 and the depth estimation information map, and fusing the depth estimation pyramid to obtain a high-resolution relative depth map;
S3, in the fusion module, obtaining a decision map from the high-resolution relative depth map according to the maximum value principle; guiding the extraction of the sharp regions of the microscope images with the decision map, and then fusing the image sequence to obtain a microscope image with extended depth of field.
Specifically, the method comprises the following steps:
In S1: Fig. 2 shows a specific example of obtaining multi-scale high-frequency information of an image; the resolution of the processed image is 3072 × 2048. The image is converted from RGB space to HSV space and the V-component image is extracted, on which the depth estimation and the decision map are then based. The steps are as follows:
1. A five-layer image pyramid is established for the image; the layer with the smallest dimensions has a size of 192 × 128 (see the size check after step 3);
2. Let the i-th layer image be G_i, let Down() be a down-sampling operation and Up() be an up-sampling operation; the i-th Laplacian pyramid image L_i is obtained as:
L_i = G_i - Up(Down(G_i))
The Laplacian pyramid obtained in this way contains only the high-frequency components of the image. The pixel values of the residual layers may be positive or negative, but regardless of sign they belong to the high-frequency component, so the absolute value of each Laplacian residual layer image is taken, completing the extraction of the multi-scale high-frequency information. The mathematical expression of the final Laplacian residual layer is:
L_i = abs(G_i - Up(Down(G_i)))
3. The pixel values of the pyramid residual layers are obtained by taking absolute values on the basis of the Laplacian pyramid and contain all the high-frequency information at the current scale; the smallest image is the depth information image obtained by applying natural image matting theory.
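As referenced in step 1, the stated pyramid sizes can be checked quickly with the following snippet (a zero array stands in for the 3072 × 2048 V-component image; halving by cv2.pyrDown is assumed):

```python
import cv2
import numpy as np

v = np.zeros((2048, 3072), dtype=np.uint8)   # placeholder with the stated resolution
levels = [v]
for _ in range(4):                           # five layers in total
    levels.append(cv2.pyrDown(levels[-1]))
print([lvl.shape[::-1] for lvl in levels])   # width x height of each layer
# [(3072, 2048), (1536, 1024), (768, 512), (384, 256), (192, 128)]
```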
In S2: Fig. 3 is an example of depth estimation performed on the small-scale image in the depth estimation module (the obtained depth estimation image is enlarged for display purposes). The specific steps are as follows:
1. The high-frequency information of the fifth pyramid layer is acquired, and the Matting Laplacian is then applied to the fifth-layer original Gaussian image to obtain a depth information map;
2. The depth information of each pixel of the image is obtained by applying the Matting Laplacian to the high-frequency prior information, i.e., by minimizing the following loss function:
E(D) = D^T L D + (D - D̂)^T Λ (D - D̂)
where D̂ and D are the image prior information and the image depth information in vector form, Λ is a diagonal matrix whose diagonal element for pixel i is nonzero when pixel i lies at the edge of an object in the image (and zero otherwise), and L is the Matting Laplacian matrix, whose elements are:
L(i, j) = ∑_{k: (i, j) ∈ ω_k} [ δ_ij − (1/|ω_k|)(1 + (I_i − μ_k)^T (Σ_k + (ε/|ω_k|) U_3)^(-1) (I_j − μ_k)) ]
where δ_ij is the Kronecker delta function, U_3 is the 3×3 identity matrix, μ_k and Σ_k are the mean and covariance matrix of the pixel values in window ω_k, ε is a regularization parameter, I_i and I_j are the pixel values of pixels i and j, and |ω_k| is the size of window ω_k.
Minimizing this loss function leads to the sparse linear system (L + Λ) D = Λ D̂; solving it yields the final depth information map D.
Fig. 4 is an example of large-scale depth estimation: the Laplacian residual layers and the depth information layer are combined into a fusion pyramid, which is fused to obtain a high-resolution relative depth map.
In S3: Fig. 5 is the decision map obtained by taking the pixel-wise maximum over the obtained depth information image sequence and returning the index of the maximum:
M(i, j) = argmax_{k ∈ {1, …, n}} D_k(i, j)
where n denotes the number of images in the depth information sequence, i denotes the row index and j the column index in the depth information images.
Median filtering is applied to the decision map obtained in this way to remove salt-and-pepper noise, and the filtered decision map is used to guide the fusion of the source image sequence: each pixel of the fused image takes the pixel value of the corresponding pixel in the image indicated by the index.
Completing the above steps constitutes one complete depth-of-field extension process.
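For a quick end-to-end check of S1–S3, the following self-contained script runs the whole flow on a focal stack. To stay short it replaces the matting-based depth refinement of S2 with a Gaussian-smoothed multi-scale |Laplacian| focus measure, so it is a simplified stand-in for the full method rather than the method itself; file paths, kernel sizes, and the number of pyramid levels are assumptions.

```python
import glob
import cv2
import numpy as np

def focus_measure(bgr, levels=5, blur=9):
    """Simplified sharpness proxy: sum of resized |Laplacian| residual layers of
    the V channel, Gaussian-smoothed (stands in for the matting-based depth)."""
    v = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 2].astype(np.float32)
    g = [v]
    for _ in range(levels - 1):
        g.append(cv2.pyrDown(g[-1]))
    h, w = v.shape
    measure = np.zeros((h, w), np.float32)
    for gi in g[:-1]:
        up = cv2.pyrUp(cv2.pyrDown(gi), dstsize=(gi.shape[1], gi.shape[0]))
        measure += cv2.resize(np.abs(gi - up), (w, h), interpolation=cv2.INTER_LINEAR)
    return cv2.GaussianBlur(measure, (blur, blur), 0)

paths = sorted(glob.glob("stack/*.png"))                    # assumed location of the focal stack
images = [cv2.imread(p) for p in paths]                     # assumed aligned and equally sized
measures = np.stack([focus_measure(im) for im in images], axis=0)

decision = np.argmax(measures, axis=0).astype(np.uint8)     # index of sharpest image per pixel
decision = cv2.medianBlur(decision, 5)                      # remove salt-and-pepper noise

rows = np.arange(decision.shape[0])[:, None]
cols = np.arange(decision.shape[1])[None, :]
fused = np.stack(images, axis=0)[decision, rows, cols]      # index-guided fusion
cv2.imwrite("extended_dof.png", fused)
```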