CN115424102A - Multi-focus image fusion method based on anisotropic guided filtering
- Publication number: CN115424102A
- Application number: CN202210953758.1A
- Authority: CN (China)
- Prior art keywords: image, filtering, anisotropic, result, fusion
- Prior art date: 2022-08-10
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The invention relates to a multi-focus image fusion method based on anisotropic guided filtering. The method first extracts salient features from the source images; to address the noise easily introduced by conventional feature extraction, it adopts a salient feature extraction method based on a difference-of-guided-filters framework. To address the poor continuity of the focus region during fusion-weight construction, a composite focus measure combining gradient features and an intensity-variance operator is applied to the images, yielding a coarse fusion weight map. Morphological filtering and anisotropic guided filtering then optimize the coarse fusion weight map so that the boundary of the focused region aligns with the boundary of the defocused region, producing the final fusion weight map, with which multi-focus image fusion is finally performed.
Description
Technical Field
The invention belongs to the field of visible light image processing technology, and particularly relates to a multi-focus image fusion method based on anisotropic guided filtering.
Background
With advances in technology, sensor technology has developed rapidly. However, owing to the limitations of sensor imaging principles and of the current state of the art, the information collected by a single sensor is always limited, and it can hardly meet the demand for more comprehensive scene information in specific applications. In recent years, how to integrate the images acquired by various sensors into a single image that is rich in information and better suited to human perception has gradually become a research hotspot.
When different sensors jointly image the same scene, one form is to use the same sensor under different parameter settings and integrate the results. For the same scene, different sensor parameter settings produce different imaging results, so imaging with only one set of parameters may lose scene information. For example, in digital photography it is often difficult to capture a scene sharply at full depth because of the depth-of-field limits of the optical lens. The focal setting of the camera therefore has to be adjusted for several exposures, and an everywhere-sharp image is obtained from multiple partially focused images. Multi-focus image fusion, one branch of multi-source image fusion, is the process of extracting the most meaningful information from source images with different focus settings through a specific mathematical model and combining it to generate an image that is rich in information and conducive to scene perception and subsequent processing.
The main steps of multi-focus image fusion are feature extraction and focus-region determination. A common defect of conventional feature extraction methods, however, is the appearance of "halo" and other artifacts around edge regions of the image during fusion; it is also difficult to account for both the contrast of tiny details and the energy of the region when detecting the focus region, and the fusion weight map is prone to defects such as bumps, gaps and holes. How to solve these problems is the key focus of technical research in this field.
Disclosure of Invention
In order to solve the problems in the prior art, a main object of the present invention is to provide a multi-focus image fusion method based on anisotropic guided filtering, so as to improve the image fusion effect.
To achieve this objective, the invention adopts the following technical scheme:
A multi-focus image fusion method based on anisotropic guided filtering comprises the following steps:
step 1, using different images as guide maps, respectively performing weight-optimized anisotropic guided filtering on the two source images to obtain a primary filtering result and a secondary filtering result for each source image;
step 2, performing a difference operation on the primary and secondary filtering results of each source image to obtain a salient feature map of each source image;
step 3, processing the two salient feature maps with a composite focus measure operator to obtain a coarse fusion weight map;
step 4, optimizing the coarse fusion weight map through morphological filtering and anisotropic guided filtering to obtain an optimized weight map;
and step 5, combining the two source images using the optimized weight map to obtain the fused image.
In step 1, exploiting the property that the anisotropic guided-filtering result is strongly correlated with the guide map, the salient features of the image are extracted with a salient feature extraction method based on a difference-of-guided-filters framework, so as to reduce artificially introduced noise. This specifically comprises the following steps:
step 1.1, using each source image as its own guide image, apply anisotropic guided filtering to the two source images to obtain the primary filtering results, expressed as:

F_{1,1} = G_{r,ε,α}(I_1, I_1)
F_{2,1} = G_{r,ε,α}(I_2, I_2)

where I_1 denotes source image 1, F_{1,1} the primary filtering result of source image 1, I_2 source image 2, and F_{2,1} the primary filtering result of source image 2; G_{r,ε,α}(·) denotes the anisotropic guided-filter function, and r, ε, α are the guided-filter tuning parameters;
step 1.2, using the primary filtering results as guide maps, apply anisotropic guided filtering to the two source images again to obtain the secondary filtering results, expressed as:

F_{1,2} = G_{r,ε,α}(I_1, F_{1,1})
F_{2,2} = G_{r,ε,α}(I_2, F_{2,1})

where F_{1,2} denotes the secondary filtering result of source image 1 and F_{2,2} that of source image 2.

In step 2, the salient feature maps are computed as:

D_1 = F_{1,2} - F_{1,1}
D_2 = F_{2,2} - F_{2,1}

where D_1 denotes the salient feature map of source image 1 and D_2 that of source image 2.
In step 3, the intensity variance and a gradient feature filter are combined into a composite focus measure operator, and the focus region is preliminarily detected to obtain the coarse fusion weight map. This specifically comprises the following steps:

step 3.1, a gradient feature filter is introduced to measure the focus of the image, expressed as:

where w is the neighborhood of pixel (x, y), p is the input image, and (p_i, p_j, p_k) denote the gradient features in the horizontal, vertical and diagonal directions, respectively;

step 3.2, the intensity variance of the image is introduced to describe the sharpness of local regions, expressed as:

where μ denotes the mean of the input image p over a local region of size M × N centered at pixel (x, y); i and j denote the horizontal and vertical offsets from (x, y) when computing the intensity variance, and M and N denote the horizontal and vertical dimensions of the filter window used for computing the intensity variance;

step 3.3, the composite focus measure operator is introduced to obtain the coarse fusion weight map, expressed as:

where G_1(x, y) denotes the value at (x, y) of the focus measure of source image 1, G_2(x, y) that of source image 2, IV_1(x, y) the value at (x, y) of the local-region sharpness description of source image 1, and IV_2(x, y) that of source image 2.
In step 4, morphological filtering is used to remove the bumps, gaps and holes of the coarse fusion weight map to achieve optimization. This specifically comprises the following steps:

step 4.1, remove bumps, burrs and the like from the coarse fusion weight map using a morphological opening operation, expressed as:

R_m = (R Θ C) ⊕ C

where R_m is the result of the morphological opening, R denotes the coarse fusion weight map, Θ and ⊕ denote erosion and dilation respectively, and C denotes the structuring element;

step 4.2, fill holes and close gaps using a morphological closing operation, expressed as:

R_n = (R_m ⊕ C) Θ C

where R_n is the result of the morphological closing;
step 4.3, filter R_n with the anisotropic guided filter to smooth the boundary between the focused and defocused regions and filter out residual holes, obtaining the optimized weight map:

O = G_{r,γ,α}(R_n, I_1)

where I_1, i.e. source image 1, serves as the guide map for the guided filtering in this step.
In step 5, the calculation formula is:

I_F(x, y) = O(x, y) I_1(x, y) + [1 - O(x, y)] I_2(x, y)

where I_1(x, y) and I_2(x, y) denote the pixels at (x, y) in the source images, O(x, y) denotes the pixel at (x, y) in the optimized weight map, and I_F(x, y) is the pixel at (x, y) in the final fused image.
Compared with the prior art, the invention has the following beneficial effects:
(1) Compared with conventional guided filtering, the AGF (anisotropic guided filter) adopted by the method better preserves detail features such as edges and textures, greatly reduces artificial noise such as "halo" and other artifacts at high filtering strength, creates the conditions for accurate extraction of salient features, and retains the low computational complexity of the conventional guided-filtering algorithm.
(2) Compared with other conventional edge-preserving filters, the AGF yields different filtering results for different choices of guide map, and these results contain different feature information of the source image, so features can be extracted from the source image using the differences between filtering results.
(3) To address the poor continuity of the focus region during fusion-weight construction in conventional techniques, the method combines gradient features and intensity variance into a composite focus measure, which accounts for both the fine-detail contrast and the regional energy of the image and effectively resolves the problem.
(4) Focus regions extracted by conventional methods exhibit scattered bumps, long narrow gaps and obvious holes; processing them with morphological filtering ensures the continuity of the focus region.
(5) On the final results, the method outperforms other techniques in both subjective evaluation and objective comparison, and effectively avoids the edge blurring, poor light-dark contrast, ghosting artifacts and contrast distortion common in conventional multi-focus image fusion methods.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a vector-space view of guided filtering, where (a) shows that subtracting the respective neighborhood means from the vectors x and g of two neighborhoods yields centered vectors, each orthogonal to the vector 1 = [1, …, 1] in the hyperplane, and (b) shows the regularization process of the guided-filter vector in vector space.
FIG. 3 compares the effects of conventional guided filtering and anisotropic guided filtering; (a) is the input image, the two images of group (b) are the GF filtering results (intermediate and final), and the two images of group (c) are the AGF filtering results (intermediate and final).
FIG. 4 compares a multi-focus image pair and their gradient maps with the corresponding 3D models; (a) is the near-focus image, (b) the far-focus image, (c) the gradient map of the near-focus image, and (d) the gradient map of the far-focus image.
FIG. 5 is a schematic diagram of a fusion weight generation process; wherein (a) and (b) are source images, (c) and (d) are salient feature maps of the source images, (e) is a coarse fusion weight map, (f) is a morphologically filtered fusion weight map, (g) is a final optimized fusion weight map, and (h) is a fusion result.
FIG. 6 shows the fusion result in the example of the present invention; wherein (a) is a source image 1, (b) is a source image 2, and (c) is a fusion result.
FIG. 7 shows difference maps between the image fusion result and the near- and far-focus images in the embodiment of the invention; (a) is the difference image between the fusion result and the near-focus image, and (b) the difference image between the fusion result and the far-focus image.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As described above, in multi-focus image fusion the existing feature extraction methods often cause problems in the edge and focus regions of the image, and the fusion weight map is prone to defects such as bumps, gaps and holes. In this example, to demonstrate the effectiveness of the method of the invention, real multi-focus images, all strictly registered beforehand, are used for image fusion. Referring to FIG. 1, the basic flow of the invention for fusing a multi-focus image data set is as follows:
Step 1, different images are used as guide maps, and weight-optimized anisotropic guided filtering is applied to the two source images to obtain a primary and a secondary filtering result for each.
In this step, exploiting the property that the anisotropic guided-filtering result is strongly correlated with the guide map, different guide maps are selected for the multi-focus source images, and different filtering results are obtained by anisotropic guided filtering; different salient features of the image are then extracted with a salient feature extraction method based on a difference-of-guided-filters framework, preserving detail features such as the edges and texture of the source image while reducing artificial noise such as "halo" and other artifacts. This specifically comprises the following steps:
step 1.1, for each source image I_n (n = 1, 2), the filter window radius r is set to 5 and the tuning parameter γ to 10⁻²; the modulation parameter α is computed by the following formula:

α = max(log₁₀(γ), 0)
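As a worked instance of this formula: with the tuning parameter γ = 10⁻² used here, α = max(log₁₀(10⁻²), 0) = max(-2, 0) = 0.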
After α is obtained, the guided-filtering weights are optimized, and the weight w of the local window during filtering is computed by the following formula.
Using the computed weight coefficients, each source image then serves as its own guide map, and weight-optimized anisotropic guided filtering is applied to the two source images via the following formulas to obtain the primary filtering results F_{n,1}:

F_{1,1} = G_{r,ε,α}(I_1, I_1)
F_{2,1} = G_{r,ε,α}(I_2, I_2)

where I_1 denotes source image 1, F_{1,1} its primary filtering result, I_2 source image 2, and F_{2,1} its primary filtering result; G_{r,ε,α}(·) denotes the anisotropic guided-filter function, and r, ε, α are the guided-filter tuning parameters.
Step 1.2, for the primary filtering results F_{n,1}, the filter window radius r is again set to 5 and the tuning parameter γ to 10⁻², and the modulation parameter α and the window weight coefficient w are computed as above. The primary filtering results F_{n,1} are then set as the guide maps, and anisotropic guided filtering is applied to the two source images once more to obtain the new filtering results F_{n,2}:

F_{1,2} = G_{r,ε,α}(I_1, F_{1,1})
F_{2,2} = G_{r,ε,α}(I_2, F_{2,1})

where F_{1,2} denotes the secondary filtering result of source image 1 and F_{2,2} that of source image 2.
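As a sketch of the two filtering passes in steps 1.1-1.2: OpenCV's stock guided filter (cv2.ximgproc.guidedFilter, from opencv-contrib) is used below as a stand-in for the invention's weight-optimized anisotropic guided filter, which is not a library primitive; the radius follows the r = 5 given above, and eps plays the role of γ = 10⁻² for images scaled to [0, 1].

```python
import cv2
import numpy as np

def two_pass_gf(I: np.ndarray, r: int = 5, eps: float = 1e-2):
    """Return the primary (self-guided) and secondary filtering results."""
    I = I.astype(np.float32)
    # Pass 1: the source image guides itself.
    F1 = cv2.ximgproc.guidedFilter(guide=I, src=I, radius=r, eps=eps)
    # Pass 2: the first-pass result guides the source image.
    F2 = cv2.ximgproc.guidedFilter(guide=F1, src=I, radius=r, eps=eps)
    return F1, F2
```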
The vector space of guided filtering is shown in FIG. 2, and the weak anisotropy of conventional guided filtering is further explained with reference to FIG. 3. The averaging operation in conventional guided filtering is equivalent to a low-pass filter; as a result, the preserved details transition smoothly near edges, which weakens the edge-preserving effect and produces artificial "halo" noise. This "halo" effect is more pronounced when the filter window is larger. FIG. 3 compares the GF and AGF filtering results, where (a) is the original input, group (b) is the GF result and group (c) is the AGF result; in (b) a clear "halo" appears at the top of the lighthouse, while at the same location in (c) the "halo" is markedly suppressed.
Step 2, a difference operation is performed on the primary and secondary filtering results of each source image to extract the edge and texture features of the image and obtain the salient feature map of each source image.
In this step, the difference of the two filtering results is used to extract features from the source image: the edge and texture features of the image are enhanced while blurred regions are suppressed, yielding the salient features of the source image. The salient feature maps are computed as:

D_1 = F_{1,2} - F_{1,1}
D_2 = F_{2,2} - F_{2,1}

where D_1 denotes the salient feature map of source image 1 and D_2 that of source image 2.
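Continuing the sketch above, the salient feature maps of step 2 are plain differences of the two passes (assuming I1 and I2 hold the two registered source images as float32 arrays in [0, 1]):

```python
F1_1, F1_2 = two_pass_gf(I1)
F2_1, F2_2 = two_pass_gf(I2)
D1 = F1_2 - F1_1  # salient feature map of source image 1
D2 = F2_2 - F2_1  # salient feature map of source image 2
```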
Step 3, the two salient feature maps are processed with a composite focus measure operator to obtain a coarse fusion weight map.
In this step, a focus-determination scheme is set up to compute the image fusion weights. Since the gradient features of the source image coincide with its salient regions, a gradient feature filter is constructed from the gradient features to measure the image's focused regions. At the same time, because the gradient features only reflect the contrast of tiny details and fail to reflect the energy of image regions, the intensity variance and the gradient feature filter are combined into a composite focus measure operator, the focus region is preliminarily detected, and a coarse fusion weight map is obtained. This specifically comprises the following steps:
step 3.1, as shown in FIG. 4, a multi-focus image pair and their gradient maps are compared with their 3D models, where (a) is the near-focus image, (b) the far-focus image, (c) the gradient map of the near-focus image, and (d) the gradient map of the far-focus image. The figure shows that the gradient features of an image coincide with the salient regions of the source image, so a gradient feature filter can be constructed from the gradient features to measure the focused regions of the image. The gradient feature filter is introduced to measure the degree of focus, yielding the gradient images G_1, G_2 of the source images I_n; the gradient filter is expressed by the following formula:

where w is the neighborhood of pixel (x, y), whose radius, i.e. the filter window radius, is set to 3; p is the input image, and (p_i, p_j, p_k) denote the gradient features in the horizontal, vertical and diagonal directions, respectively.
Step 3.2, the intensity variance is computed. The invention introduces an intensity-variance operator to reflect the energy of image regions and describe the sharpness of local regions of the image. Considering that the focused (sharp) region of an image generally has higher gray values than other regions, the intensity-variance operator designed by the invention can be expressed by the following formula.

where μ denotes the mean of the input image p over a local region of size M × N centered at pixel (x, y); in the invention, M = 2 and N = 2. i and j denote the horizontal and vertical offsets from (x, y) when computing the intensity variance, and M and N denote the horizontal and vertical dimensions of the filter window used for computing the intensity variance, respectively.
Step 3.3, since the gradient features only reflect the contrast of tiny details and fail to reflect the energy of image regions, the intensity variance and the gradient feature filter are combined into a composite focus measure operator, and the focus region is preliminarily detected. As shown in FIG. 5, the coarse fusion weight map R is obtained by the following criterion; in FIG. 5, (a) and (b) are the source images, (c) and (d) their salient feature maps, (e) the coarse fusion weight map, (f) the fusion weight map after morphological filtering, (g) the final optimized fusion weight map, and (h) the fusion result.

where G_1(x, y) denotes the value at (x, y) of the focus measure of source image 1, G_2(x, y) that of source image 2, IV_1(x, y) the value at (x, y) of the local-region sharpness description of source image 1, and IV_2(x, y) that of source image 2.
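A sketch of the composite focus measure of steps 3.1-3.3. The exact gradient-filter, variance and decision formulas survive only as lost display equations in the original, so the concrete forms below (window-summed absolute directional differences, an M × N local variance, and a product-comparison rule applied to the salient feature maps) are assumptions consistent with the surrounding text, not the claimed formulas.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_measure(D: np.ndarray, radius: int = 3) -> np.ndarray:
    """Assumed gradient feature filter: absolute horizontal, vertical and
    diagonal differences accumulated over a (2*radius+1)^2 neighborhood."""
    gh = np.abs(np.diff(D, axis=1, append=D[:, -1:]))           # horizontal
    gv = np.abs(np.diff(D, axis=0, append=D[-1:, :]))           # vertical
    gd = np.abs(D - np.roll(np.roll(D, 1, axis=0), 1, axis=1))  # diagonal
    return uniform_filter(gh + gv + gd, size=2 * radius + 1)

def intensity_variance(D: np.ndarray, M: int = 2, N: int = 2) -> np.ndarray:
    """Local variance over an M x N window: E[p^2] - (E[p])^2."""
    mu = uniform_filter(D, size=(M, N))
    return np.maximum(uniform_filter(D * D, size=(M, N)) - mu ** 2, 0.0)

def coarse_weights(D1: np.ndarray, D2: np.ndarray) -> np.ndarray:
    """Assumed decision rule: pick source 1 where its composite measure
    (gradient term times variance term) dominates."""
    s1 = gradient_measure(D1) * intensity_variance(D1)
    s2 = gradient_measure(D2) * intensity_variance(D2)
    return (s1 > s2).astype(np.float32)
```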
Step 4, the coarse fusion weight map is optimized through morphological filtering and anisotropic guided filtering to obtain the optimized weight map.
In this step, morphological filtering is used to remove the bumps, gaps and holes of the coarse fusion weight map to achieve optimization. This specifically comprises the following steps:

step 4.1, a morphological opening operation is applied to the coarse fusion weight map R to remove bumps, burrs and the like:

R_m = (R Θ C) ⊕ C

where R_m is the result of the morphological opening, R denotes the coarse fusion weight map, Θ and ⊕ denote erosion and dilation respectively, and C denotes the structuring element; here C is of size 3 × 3.

step 4.2, a morphological closing operation is then applied to the result of the opening by the following formula to fill holes and close gaps:

R_n = (R_m ⊕ C) Θ C

where R_n is the result of the morphological closing; here C is of size 5 × 5.
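A sketch of the morphological clean-up of steps 4.1-4.2, using the structuring-element sizes stated above (3 × 3 for the opening, 5 × 5 for the closing):

```python
import cv2
import numpy as np

def morph_cleanup(R: np.ndarray) -> np.ndarray:
    """Opening (erode then dilate) removes bumps and burrs; closing
    (dilate then erode) fills holes and closes gaps."""
    R8 = (R * 255).astype(np.uint8)
    Rm = cv2.morphologyEx(R8, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    Rn = cv2.morphologyEx(Rm, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    return Rn.astype(np.float32) / 255.0
```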
Step 4.3, in the invention, to prevent the boundary between the focused and defocused regions of the source images from adversely affecting the fusion result, anisotropic guided filtering is applied to R_n to smooth the boundary between the focused and defocused regions and filter out residual holes, yielding the optimized weight map:

O = G_{r,γ,α}(R_n, I_1)

where O is the final optimized fusion weight map, G(·) is the anisotropic guided-filter operator, and I_1, i.e. source image 1, serves as the guide map for the guided filtering in this step. The guided-filter tuning parameters are r = 5, γ = 10⁻² and α = 0.5.
Step 5, the two source images are combined using the optimized weight map to obtain the fused image, computed as:

I_F(x, y) = O(x, y) I_1(x, y) + [1 - O(x, y)] I_2(x, y)

where I_1(x, y) and I_2(x, y) denote the pixels at (x, y) in the source images, O(x, y) denotes the pixel at (x, y) in the optimized weight map, and I_F(x, y) is the pixel at (x, y) in the final fused image.
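A sketch of steps 4.3 and 5, again with OpenCV's stock guided filter standing in for the anisotropic variant (the α = 0.5 modulation has no counterpart in the stock filter) and source image 1 as the guide:

```python
import cv2
import numpy as np

def refine_and_fuse(Rn: np.ndarray, I1: np.ndarray, I2: np.ndarray,
                    r: int = 5, eps: float = 1e-2) -> np.ndarray:
    # Smooth the focused/defocused boundary of the weight map under the
    # guidance of source image 1, then blend the sources pixel-wise.
    O = cv2.ximgproc.guidedFilter(guide=I1.astype(np.float32),
                                  src=Rn.astype(np.float32),
                                  radius=r, eps=eps)
    O = np.clip(O, 0.0, 1.0)  # keep valid blending weights
    return O * I1 + (1.0 - O) * I2
```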
Fig. 6 shows a set of example fusion results; the invention fuses the multi-focus images well, and the fusion result exhibits none of the common edge blurring or ghosting problems.
FIG. 7 shows the difference maps between the fusion result and the near- and far-focus images. For an ideal fusion result, the difference image with the near-focus image should largely filter out the near-focus scene content and mainly retain the detail of the far-focus scene; likewise, the difference image with the far-focus image should largely filter out the far-focus scene content and mainly retain the detail of the near-focus scene. FIG. 7(a) is the difference map between the fusion result and the near-focus image: the near-focus scene content (the clock area) is completely filtered out, with no residual information. FIG. 7(b) is the difference map between the fusion result and the far-focus image: no detail of the far-focus scene (the person and surrounding area) remains. The method therefore effectively retains the sharp detail of the multi-focus images and achieves an excellent fusion effect.
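Putting the sketches above together end to end, the check in FIG. 7 can be reproduced with two difference maps (assuming I1 is the near-focus source and I2 the far-focus source, registered float32 images in [0, 1]):

```python
import cv2

F1_1, F1_2 = two_pass_gf(I1)           # step 1
F2_1, F2_2 = two_pass_gf(I2)
D1, D2 = F1_2 - F1_1, F2_2 - F2_1      # step 2
R = coarse_weights(D1, D2)             # step 3
Rn = morph_cleanup(R)                  # steps 4.1-4.2
I_F = refine_and_fuse(Rn, I1, I2)      # steps 4.3 and 5

# FIG. 7 check: each difference map should be near-flat over the region
# the corresponding source had in focus.
diff_near = cv2.absdiff(I_F, I1)  # near-zero over the near-focus (clock) region
diff_far = cv2.absdiff(I_F, I2)   # near-zero over the far-focus (person) region
```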
Claims (8)
1. A multi-focus image fusion method based on anisotropic guided filtering is characterized by comprising the following steps:
step 1, different images are used as guide maps, weight-optimized anisotropic guided filtering (AGF) is respectively performed on the two source images, and a primary filtering result and a secondary filtering result of each source image are obtained;
step 2, performing a difference operation on the primary and secondary filtering results of each source image to obtain a salient feature map of each source image;
step 3, processing the two salient feature maps with a composite focus measure operator to obtain a coarse fusion weight map;
step 4, optimizing the coarse fusion weight map through morphological filtering and anisotropic guided filtering to obtain an optimized weight map;
and step 5, combining the two source images using the optimized weight map to obtain a fused image.
2. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, characterized in that: in step 1, exploiting the property that the anisotropic guided-filtering result is strongly correlated with the guide map, the salient features of the image are extracted with a salient feature extraction method based on a difference-of-guided-filters framework to reduce artificially introduced noise, specifically comprising the following steps:

step 1.1, using each source image as its own guide image, applying anisotropic guided filtering to the two source images to obtain the primary filtering results, expressed as:

F_{1,1} = G_{r,ε,α}(I_1, I_1)
F_{2,1} = G_{r,ε,α}(I_2, I_2)

where I_1 denotes source image 1, F_{1,1} the primary filtering result of source image 1, I_2 source image 2, and F_{2,1} the primary filtering result of source image 2; G_{r,ε,α}(·) denotes the anisotropic guided-filter function, and r, ε, α are the guided-filter tuning parameters;
step 1.2, using the primary filtering results as guide maps, applying anisotropic guided filtering to the two source images again to obtain the secondary filtering results, expressed as:

F_{1,2} = G_{r,ε,α}(I_1, F_{1,1})
F_{2,2} = G_{r,ε,α}(I_2, F_{2,1})

where F_{1,2} denotes the secondary filtering result of source image 1 and F_{2,2} that of source image 2.
3. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, characterized in that: in step 2, the salient feature maps are computed as:

D_1 = F_{1,2} - F_{1,1}
D_2 = F_{2,2} - F_{2,1}

where D_1 denotes the salient feature map of source image 1 and D_2 that of source image 2.
4. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, characterized in that: in step 3, the intensity variance and the gradient feature filter are combined into a composite focus measure operator, and the focus region is preliminarily detected to obtain a coarse fusion weight map, specifically comprising the following steps:

step 3.1, introducing a gradient feature filter to measure the focus of the image, expressed as:

where w is the neighborhood of pixel (x, y), p is the input image, and (p_i, p_j, p_k) denote the gradient features in the horizontal, vertical and diagonal directions, respectively;

step 3.2, introducing the intensity variance of the image to describe the sharpness of local regions, expressed as:

where μ denotes the mean of the input image p over a local region of size M × N centered at pixel (x, y); i and j denote the horizontal and vertical offsets from (x, y) when computing the intensity variance, and M and N denote the horizontal and vertical dimensions of the filter window used for computing the intensity variance;

step 3.3, introducing the composite focus measure operator to obtain the coarse fusion weight map, expressed as:

where G_1(x, y) denotes the value at (x, y) of the focus measure of source image 1, G_2(x, y) that of source image 2, IV_1(x, y) the value at (x, y) of the local-region sharpness description of source image 1, and IV_2(x, y) that of source image 2.
5. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, wherein: in said step 3.1, the window radius is set to 3.
6. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, characterized in that: in step 4, morphological filtering is used to remove the bumps, gaps and holes of the coarse fusion weight map to achieve optimization, specifically comprising the following steps:

step 4.1, removing bumps, burrs and the like from the coarse fusion weight map using a morphological opening operation, expressed as:

R_m = (R Θ C) ⊕ C

where R_m is the result of the morphological opening, R denotes the coarse fusion weight map, Θ and ⊕ denote erosion and dilation respectively, and C denotes the structuring element;

step 4.2, filling holes and closing gaps using a morphological closing operation, expressed as:

R_n = (R_m ⊕ C) Θ C

where R_n is the result of the morphological closing;

step 4.3, filtering R_n with the anisotropic guided filter to smooth the boundary between the focused and defocused regions and filter out residual holes, obtaining the optimized weight map expressed as:

O = G_{r,γ,α}(R_n, I_1)

where I_1, i.e. source image 1, is the guide map for the guided filtering in this step.
7. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, characterized in that: in step 5, the calculation formula is:

I_F(x, y) = O(x, y) I_1(x, y) + [1 - O(x, y)] I_2(x, y)

where I_1(x, y) and I_2(x, y) denote the pixels at (x, y) in the source images, O(x, y) denotes the pixel at (x, y) in the optimized weight map, and I_F(x, y) is the pixel at (x, y) in the final fused image.
8. The multi-focus image fusion method based on anisotropic guided filtering according to claim 1, wherein: the two source images are respectively a near focus image and a far focus image.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210953758.1A | 2022-08-10 | 2022-08-10 | Multi-focus image fusion method based on anisotropic guided filtering |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202210953758.1A | 2022-08-10 | 2022-08-10 | Multi-focus image fusion method based on anisotropic guided filtering |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN115424102A | 2022-12-02 |

Family: ID=84196811

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202210953758.1A (pending) | Multi-focus image fusion method based on anisotropic guided filtering | 2022-08-10 | 2022-08-10 |

Country Status (1)

| Country | Link |
|---|---|
| CN (1) | CN115424102A (en) |
Cited By (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN116883461A | 2023-05-18 | 2023-10-13 | 珠海移科智能科技有限公司 | Method for acquiring clear document image and terminal device thereof |
| CN116883461B | 2023-05-18 | 2024-03-01 | 珠海移科智能科技有限公司 | Method for acquiring clear document image and terminal device thereof |
| CN117974655A | 2024-03-29 | 2024-05-03 | 大连傲盈科技有限公司 | Asphalt road quality detection method based on computer vision |
Similar Documents

| Publication | Title |
|---|---|
| CN115424102A (en) | Multi-focus image fusion method based on anisotropic guided filtering |
| De et al. | Multi-focus image fusion using a morphology-based focus measure in a quad-tree structure |
| CN108492262B | No-ghost high-dynamic-range imaging method based on gradient structure similarity |
| TWI489418B | Parallax estimation depth generation |
| JP6190768B2 | Electron microscope apparatus and imaging method using the same |
| WO2018157562A1 | Virtual viewpoint synthesis method based on local image segmentation |
| JP6102928B2 | Image processing apparatus, image processing method, and program |
| CN106898048B | Undistorted integral-imaging 3D display method suitable for complex scenes |
| KR20140118031A | Image processing apparatus and method thereof |
| CN111462027B | Multi-focus image fusion method based on multi-scale gradient and matting |
| JP2012249038A | Image signal processing apparatus and image signal processing method |
| CN110322572A | Binocular-vision-based three-dimensional reconstruction method for the inner wall of an underwater culvert tunnel |
| CN104036481A | Multi-focus image fusion method based on depth information extraction |
| CN113487526B | Multi-focus image fusion method for improving focus definition measurement by combining high-low frequency coefficients |
| Pandey et al. | Automatic image processing based dental image analysis using automatic gaussian fitting energy and level sets |
| CN111179333A | Defocus blur kernel estimation method based on binocular stereo vision |
| CN108805841B | Depth map recovery and viewpoint synthesis optimization method based on color map guidance |
| CN113628141A | HDR detail enhancement method based on high and low exposure image fusion |
| Aydin et al. | A new adaptive focus measure for shape from focus |
| Gherardi et al. | Illumination field estimation through background detection in optical microscopy |
| CN111091501A | Parameter estimation method of atmospheric scattering defogging model |
| CN112825189B | Image defogging method and related equipment |
| McCloskey et al. | Removal of partial occlusion from single images |
| Xie et al. | Image defogging method combining light field depth estimation and dark channel |
| Latha et al. | Joint estimation of depth map and focus image in SFF: an optimization framework by sparsity approach |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |