CN115294003A - Multi-focus image fusion method - Google Patents
Multi-focus image fusion method
- Publication number
- CN115294003A (application number CN202210949597.9A)
- Authority
- CN
- China
- Prior art keywords: image, focus, algorithm, images, fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/70—Denoising; Smoothing
- G06T7/13—Edge detection
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The invention provides a multi-focus image fusion method, belonging to the technical field of image fusion, which comprises: converting any two images I to be fused into two gradient feature images G by using a Sobel operator; obtaining a first initial focus feature map D from the two gradient feature images G; processing D with a morphological image processing function to eliminate the influence of image noise and artifacts, thereby obtaining a second initial focus feature map D1; performing image fusion guided by the focus feature map D1 with an image fusion algorithm to obtain a final fused image If; and fusing If pairwise with the remaining images to be fused by the same method, thereby realizing the fusion of multiple multi-focus images.
Description
Technical Field
The invention belongs to the technical field of image fusion, and particularly relates to a multi-focus image fusion method.
Background
Multi-focus image fusion is a branch of the image fusion field that combines complementary features of different images shot in the same or similar scenes to generate a single image. It has wide applications in digital photography, surveillance, non-diffractive imaging systems, remote sensing, mobile microscope processing software, and so on.
In the prior art, multi-focus image fusion methods are mostly either traditional or based on deep learning. For example, Petrovic et al. proposed a multi-focus image fusion algorithm based on gradient pyramid decomposition, which obtains fused sub-bands at different scales and reconstructs the final fused image by inverse transformation. Chai et al. adopted a multi-focus image fusion method based on the Lifting Stationary Wavelet Transform (LSWT). Li et al. proposed an NSCT-based multi-focus image fusion method using multi-scale curvature. Guo et al. proposed a multi-focus image fusion method based on a fully convolutional network.
However, the prior art still struggles to evaluate the focus region of the input images and to remove visual artifacts and boundary seams. Moreover, whether traditional or deep-learning-based, current methods suffer from visual artifacts and boundary seams in edge regions caused by edge misalignment when fusing unregistered image pairs.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a multi-focus image fusion method.
In order to achieve the above purpose, the invention provides the following technical scheme:
a multi-focus image fusion method, comprising:
converting any two images I1 and I2 to be fused into two gradient characteristic images G1 and G2 by using a Sobel operator;
according to the two gradient feature maps G1 and G2, obtaining respective variance comparison maps M1 and M2 and respective average gradient comparison maps S1 and S2, obtaining respective focusing region comparison maps C1 and C2 from M1, M2, S1 and S2, and obtaining a first initial focusing feature map D according to the focusing region comparison maps C1 and C2;
processing the first initial focusing feature map D by using a morphological image processing function to obtain a second initial focusing feature map D1;
inputting the second initial focusing feature map D1 and the two images I1 and I2 to be fused into an image fusion algorithm for image fusion to obtain a final fusion image If;
fusing the If and the rest images to be fused pairwise by using the image fusion algorithm to realize the fusion of a plurality of multi-focus images;
wherein, the image fusion algorithm is as follows:
If=I1⊙D1+I2⊙(1-D1)
in the above formula, I1 is the first image to be fused, I2 is the second image to be fused, and ⊙ denotes pixel-by-pixel multiplication.
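As a sketch, the pixel-wise fusion rule above can be written in a few lines of NumPy (function and variable names are illustrative, not from the patent):

```python
import numpy as np

def fuse_pair(i1, i2, d):
    """If = I1 ⊙ D + I2 ⊙ (1 - D): keep I1's pixels where the focus map D is 1,
    and I2's pixels where it is 0."""
    d = d.astype(np.float64)
    return i1 * d + i2 * (1.0 - d)

# Toy example: D selects the left half from I1 and the right half from I2.
i1 = np.full((4, 4), 10.0)
i2 = np.full((4, 4), 20.0)
d = np.zeros((4, 4))
d[:, :2] = 1.0
fused = fuse_pair(i1, i2, d)
```

Fusing more than two images then amounts to repeating this rule, each time pairing the running result If with the next image to be fused.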
Further, the calculation method of the Sobel operator is as follows:
G0 = √((HI⊗I)² + (HV⊗I)²)
where ⊗ is the convolution operator, I is the image to be fused, G0 is the initial gradient feature image, and HI and HV are the edge-detection templates for transverse (horizontal) and longitudinal (vertical) edges, respectively.
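A minimal sketch of this gradient-feature step. The patent's exact HI/HV coefficients are not reproduced in the text, so the standard 3×3 Sobel kernels are assumed:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard Sobel kernels; assumed to match the patent's HI (transverse edges)
# and HV (longitudinal edges).
H_I = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
H_V = H_I.T

def gradient_feature(img):
    """G0 = sqrt((HI ⊗ I)^2 + (HV ⊗ I)^2), the gradient magnitude of I."""
    img = img.astype(np.float64)
    gx = convolve(img, H_I, mode="nearest")
    gy = convolve(img, H_V, mode="nearest")
    return np.hypot(gx, gy)
```

Because both kernels sum to zero, a flat (defocused, textureless) region yields a gradient magnitude of zero, which is what makes G usable as a sharpness indicator.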
Further, the method includes: processing the initial gradient feature image G0 further to obtain the gradient feature image G.
further, the morphological image processing function uses a bwaneopen (·) function.
Further, the method also comprises the following steps: the boundary of the second initial focus feature map D1 is smoothed by a gaussian filter.
Further, the algorithm of the variance comparison graphs M1 and M2 is:
M1 = V1/(V2+ε), M2 = V2/(V1+ε)
where V1 and V2 are the variances of G1 and G2, and ε, which prevents M1 and M2 from going to infinity, can be set to 0.001;
the algorithm for comparing the average gradients in the graphs S1 and S2 is as follows:
wherein, the first and the second end of the pipe are connected with each other,andis the local mean gradient of G1 and G2, wherein,the algorithm is as follows:
wherein the content of the first and second substances,is the average gradient value within the local sliding window; r is the size of the local window;
the focus area comparison chart C 1 And C 2 The algorithm is as follows:
C 1 =M 1 +λ×S 1
C 2 =M 2 +λ×S 2
where λ is a coefficient for balancing M and S;
the algorithm of the first initial focusing feature map D is:
where T is an artificially set threshold.
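The focus-map steps can be sketched as follows, assuming ratio forms for M and S (with ε in the denominator, as "prevent M1 and M2 from going to infinity" suggests) and illustrative values for λ, r, and T:

```python
import numpy as np
from scipy.ndimage import uniform_filter

EPS = 0.001  # epsilon from the text
LAM = 1.0    # lambda balancing M and S (assumed value)
R = 7        # local window size r (assumed value)

def initial_focus_map(g1, g2, t=0.0):
    """First initial focus map D: 1 where gradient image G1 looks sharper than G2."""
    v1, v2 = g1.var(), g2.var()              # variance comparison
    m1, m2 = v1 / (v2 + EPS), v2 / (v1 + EPS)
    gbar1 = uniform_filter(g1, size=R)       # local mean gradient
    gbar2 = uniform_filter(g2, size=R)
    s1 = gbar1 / (gbar2 + EPS)               # average-gradient comparison
    s2 = gbar2 / (gbar1 + EPS)
    c1, c2 = m1 + LAM * s1, m2 + LAM * s2    # focus-region comparison
    return (c1 - c2 > t).astype(np.float64)  # threshold T -> binary map D
```

This is a sketch under the stated assumptions, not the patent's exact formulas; in practice λ, r, and T would be tuned on the gradient images at hand.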
Further, the Gaussian filter window is set to [9,9] and the filter variance is set to 2.
The multi-focus image fusion method provided by the invention has the following beneficial effects:
By converting the images to be fused I1 and I2 into gradient feature images, the method eliminates the interference of background noise and solves the difficulty, present in the prior art, of evaluating the focus region of the input images. By fusing focus regions with the variance and gradient comparison maps and processing the first initial focus feature map D with a morphological filtering function, it addresses the difficulty of removing visual artifacts and boundary seams, including those that appear in edge regions due to edge misalignment when fusing unregistered image pairs.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the design thereof, the drawings required for the embodiments will be briefly described below. The drawings in the following description are only some embodiments of the invention and it will be clear to a person skilled in the art that other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic structural diagram of a multi-focus image fusion method according to an embodiment of the present invention;
FIG. 2 shows two gradient feature images G according to an embodiment of the present invention;
FIG. 3 is a first initial focus profile D of an embodiment of the present invention;
FIG. 4 is a second initial focus characteristic diagram D1 according to an embodiment of the present invention;
FIG. 5 is a focusing feature diagram D2 of an embodiment of the present invention;
FIG. 6 is a graph of experimental results of fusing a pair of registered images according to an embodiment of the present invention;
FIG. 7 is a graph of experimental results of fusing unregistered image pairs according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art can better understand the technical solutions of the present invention and can implement the technical solutions, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "axial", "radial", "circumferential", etc. indicate orientations or positional relationships based on those shown in the drawings, merely for convenience of description and simplification of the technical solution of the present invention, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present invention, it should be noted that, unless explicitly stated or limited otherwise, the terms "connected" and "connected" are to be interpreted broadly, e.g., as a fixed connection, a detachable connection, or an integral connection; can be mechanically or electrically connected; may be directly connected or indirectly connected through an intermediate medium. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations. In the description of the present invention, unless otherwise specified, "a plurality" means two or more, and will not be described in detail herein.
Embodiment:
the invention provides a multi-focus image fusion method, which is specifically shown in figures 1-5 and comprises the following steps:
converting any two images I1 and I2 to be fused into two gradient characteristic images G1 and G2 by using a Sobel operator; according to the two gradient feature maps G1 and G2, obtaining respective variance comparison maps M1 and M2 and respective average gradient comparison maps S1 and S2, obtaining respective focusing region comparison maps C1 and C2 from M1, M2, S1 and S2, and obtaining a first initial focusing feature map D according to the focusing region comparison maps C1 and C2; processing the first initial focusing characteristic diagram D by using a morphological image processing function, eliminating the influence of image noise or artifacts on the image, and further obtaining a second initial focusing characteristic diagram D1; smoothing the boundary of the second initial focusing feature map D1 by using a Gaussian filter to further obtain a focusing feature map D2; carrying out image fusion on the focusing feature map D2 by using an image fusion algorithm to obtain a final fusion image If; and fusing the If and the rest images to be fused pairwise by using an image fusion algorithm to realize the fusion of a plurality of multi-focus images.
In this embodiment, the image fusion algorithm is:
If=I1⊙D2+I2⊙(1-D2)
wherein I1 is the first image to be fused, I2 is the second image to be fused, ⊙ denotes pixel-by-pixel multiplication, and D2 is the second initial focus feature map D1 after smoothing by the Gaussian filter.
Specifically, the calculation method of the Sobel operator is as follows:
G0 = √((HI⊗I)² + (HV⊗I)²)
where ⊗ is the convolution operator, I is the image to be fused, G0 is the initial gradient feature image, and HI and HV are the edge-detection templates for transverse (horizontal) and longitudinal (vertical) edges, respectively.
Specifically, the method further includes: processing the initial gradient feature image G0 further to obtain the gradient feature image G.
specifically, the obtaining of the first initial focusing feature map D by using the gradient feature image G specifically includes:
selecting any two gradient feature maps G1 and G2, obtain the respective variance comparison maps M1 and M2 and average gradient comparison maps S1 and S2; obtain the focus-region comparison maps C1 and C2 from M1, M2, S1 and S2; and obtain the first initial focus feature map D from C1 and C2. The algorithm of M1 and M2 is as follows:
M1 = V1/(V2+ε), M2 = V2/(V1+ε)
where V1 and V2 are the variances of G1 and G2; ε is used to prevent M1 and M2 from going to infinity and can be set to 0.001.
The algorithm of S1 and S2 is as follows:
S1 = Ḡ1/(Ḡ2+ε), S2 = Ḡ2/(Ḡ1+ε)
where Ḡ1 and Ḡ2 are the local mean gradients of G1 and G2, and Ḡ1 can be obtained according to the following formula:
Ḡ1(x,y) = (1/r²)·Σ G1(i,j), summed over the r×r sliding window centered at (x,y)
where Ḡ1 is the average gradient value within the local sliding window and r is the size of the local window. Ḡ2 can be obtained in the same manner.
Combining the above formulas yields the focus-region comparison maps C1 and C2:
C1 = M1 + λ×S1
C2 = M2 + λ×S2
where λ is a coefficient for balancing M and S.
The algorithm of D is as follows:
D(x,y) = 1 if C1(x,y) - C2(x,y) > T, and D(x,y) = 0 otherwise
where T is an artificially set threshold.
specifically, the gaussian filter window is set to [9,9], and the filter variance is set to 2, so as to obtain a focusing feature map D2 with smooth boundary.
As shown in fig. 4:
the braware eaopen (-) function is used to remove the small connected region in the focus map, and if the number of pixels in a certain connected region is less than a threshold value, the region is regarded as a small connected region to be deleted. The focus profile c after processing is as follows:
D A =bwareaopen(D,area)
where the connected-region area threshold is set as area = R×H×W; R is a ratio factor that determines the area of the smallest connected region to be filtered out; H and W represent the height and width of the source image, respectively. An 8-connected neighborhood is used here.
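For reference, MATLAB's bwareaopen can be approximated with SciPy's connected-component labelling (a sketch; the all-ones 3×3 structuring element gives the 8-connectivity used here):

```python
import numpy as np
from scipy.ndimage import label

def bwareaopen(mask, area):
    """Remove 8-connected regions with fewer than `area` pixels."""
    eight = np.ones((3, 3), dtype=int)   # 3x3 structure -> 8-connectivity
    labels, n = label(mask.astype(bool), structure=eight)
    out = np.zeros(mask.shape, dtype=bool)
    for i in range(1, n + 1):
        region = labels == i
        if region.sum() >= area:         # keep only large-enough regions
            out |= region
    return out
```

With the patent's threshold, `area` would be R·H·W for a ratio factor R and source-image height H and width W.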
As shown in fig. 6 and 7:
FIGS. 6 and 7 show two sets of experimental results obtained with the present invention: FIG. 6 is the result of fusing a registered image pair, and FIG. 7 the result of fusing an unregistered image pair. It can be seen that, for both registered and unregistered image pairs, the fused image obtained by the method has clear texture and natural transition regions, and retains most of the feature and detail information of the source images, achieving the best fusion effect.
The above-mentioned embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, and any simple modifications or equivalent substitutions of the technical solutions that can be obviously obtained by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.
Claims (7)
1. A multi-focus image fusion method, comprising:
converting any two images I1 and I2 to be fused into two gradient characteristic images G1 and G2 by using a Sobel operator;
according to the two gradient feature maps G1 and G2, obtaining respective variance comparison maps M1 and M2 and respective average gradient comparison maps S1 and S2, obtaining respective focusing area comparison maps C1 and C2 from M1, M2, S1 and S2, and obtaining a first initial focusing feature map D according to the focusing area comparison maps C1 and C2;
processing the first initial focusing feature map D by using a morphological image processing function to obtain a second initial focusing feature map D1;
inputting the second initial focusing feature map D1 and the two images I1 and I2 to be fused into an image fusion algorithm for image fusion to obtain a final fusion image If;
fusing the If and the rest images to be fused pairwise by using the image fusion algorithm to realize the fusion of a plurality of multi-focus images;
wherein, the image fusion algorithm is as follows:
If=I1⊙D1+I2⊙(1-D1)
in the above formula, I1 is the first image to be fused, I2 is the second image to be fused, and ⊙ denotes pixel-by-pixel multiplication.
2. The multi-focus image fusion method according to claim 1, wherein the Sobel operator is calculated by:
G0 = √((HI⊗I)² + (HV⊗I)²)
wherein ⊗ is the convolution operator, I is the image to be fused, G0 is the initial gradient feature image, and HI and HV are the edge-detection templates for transverse (horizontal) and longitudinal (vertical) edges, respectively.
4. The multi-focus image fusion method according to claim 1, wherein the morphological image processing function uses the bwareaopen(·) function.
5. The multi-focus image fusion method according to claim 1, further comprising: and smoothing the boundary of the second initial focusing feature map D1 by using a Gaussian filter to further obtain a focusing feature map D2.
6. The multi-focus image fusion method according to claim 1, wherein the algorithm of the variance comparison maps M1 and M2 is:
M1 = V1/(V2+ε), M2 = V2/(V1+ε)
wherein V1 and V2 are the variances of G1 and G2, and ε, which prevents M1 and M2 from going to infinity, can be set to 0.001;
the algorithm of the average gradient comparison maps S1 and S2 is as follows:
S1 = Ḡ1/(Ḡ2+ε), S2 = Ḡ2/(Ḡ1+ε)
wherein Ḡ1 and Ḡ2 are the local mean gradients of G1 and G2, and Ḡ1 is computed as:
Ḡ1(x,y) = (1/r²)·Σ G1(i,j), summed over the r×r sliding window centered at (x,y)
wherein Ḡ1 is the average gradient value within the local sliding window and r is the size of the local window;
the focus-region comparison maps C1 and C2 are computed as:
C1 = M1 + λ×S1
C2 = M2 + λ×S2
where λ is a coefficient for balancing M and S;
the algorithm of the first initial focus feature map D is:
D(x,y) = 1 if C1(x,y) - C2(x,y) > T, and D(x,y) = 0 otherwise
where T is an artificially set threshold.
7. The method of claim 1, wherein the Gaussian filter window is set to [9,9] and the filter variance is set to 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210949597.9A (CN115294003A) | 2022-08-09 | 2022-08-09 | Multi-focus image fusion method
Publications (1)
Publication Number | Publication Date
---|---
CN115294003A (en) | 2022-11-04
Family
ID=83828365
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210949597.9A (CN115294003A, pending) | Multi-focus image fusion method | 2022-08-09 | 2022-08-09
Country Status (1)
Country | Link
---|---
CN | CN115294003A (en)
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN116883461A | 2023-05-18 | 2023-10-13 | 珠海移科智能科技有限公司 | Method for acquiring clear document image and terminal device thereof
CN116883461B | 2023-05-18 | 2024-03-01 | 珠海移科智能科技有限公司 | Method for acquiring clear document image and terminal device thereof
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination