CN109509163B - FGF-based multi-focus image fusion method and system - Google Patents

FGF-based multi-focus image fusion method and system

Info

Publication number
CN109509163B
CN109509163B CN201811194833.0A CN201811194833A
Authority
CN
China
Prior art keywords
image
source image
focus
fusion
source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811194833.0A
Other languages
Chinese (zh)
Other versions
CN109509163A (en)
Inventor
张永新
张传才
赵秀英
伍临莉
徐文博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luoyang Normal University
Original Assignee
Luoyang Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Normal University filed Critical Luoyang Normal University
Priority to CN201811194833.0A priority Critical patent/CN109509163B/en
Publication of CN109509163A publication Critical patent/CN109509163A/en
Application granted granted Critical
Publication of CN109509163B publication Critical patent/CN109509163B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20192Edge enhancement; Edge preservation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention belongs to the technical field of optical image processing, and discloses a multi-focus image fusion method and system based on the Fast Guided Filter (FGF). First, each source image is smoothed by mean filtering to remove small structures and is decomposed into a base layer and a detail layer. The source image is then filtered by Laplacian filtering followed by Gaussian low-pass filtering to obtain its saliency map. A weight map for each source image is obtained by comparing the corresponding pixels of the saliency maps. Taking each source image as the guidance image, the weight map is decomposed and optimized by FGF to obtain an optimized base-layer weight map and an optimized detail-layer weight map. The corresponding pixels of the base layers and of the detail layers are fused according to the fusion rule, and the fused base layer and detail layer are combined to obtain the fused image. The method can effectively improve the detection accuracy of the focused regions in the source images and greatly improve the subjective and objective quality of the fused image.

Description

Multi-focus image fusion method and system based on FGF
Technical Field
The invention belongs to the technical field of optical image processing, and relates to a multi-focus image fusion method, in particular to a multi-focus image fusion method and system based on FGF.
Background
An optical sensor imaging system can form a sharp image only of objects located at its focal plane; objects outside the depth of field are imaged blurred. This limited focal range often prevents an optical imaging system from imaging every object in a scene clearly. To understand the entire scene, one would otherwise have to analyze a considerable number of similar images, which wastes time and effort as well as storage space. Image fusion produces a single image in which every object in the scene is sharp, so that the scene information is reflected more comprehensively and faithfully; this is of great significance for accurate image analysis and understanding, and multi-focus image fusion is one of the effective technical means of achieving it.
Multi-focus image fusion applies a fusion algorithm to several registered images of the same scene, each acquired under the same imaging conditions but with a different focus setting: the sharp (in-focus) region of each image is detected and extracted by an activity measure, and these regions are then combined according to a fusion rule to generate one image in which all target objects in the scene are sharp. Multi-focus image fusion can represent the scene target information clearly and completely, laying a good foundation for feature extraction, target recognition, tracking and similar tasks; it thereby improves the utilization of image information and the reliability of target detection and recognition, extends the spatio-temporal coverage, and reduces uncertainty.
The key to a multi-focus image fusion algorithm is to characterize the focused region accurately and to locate and extract the regions or pixels that lie within the depth of field; this remains one of the unsolved problems of multi-focus image fusion. Image fusion has now been studied for more than thirty years, and with the continued development of computing and imaging technology, researchers at home and abroad have proposed hundreds of high-performing fusion algorithms aimed at the problems of judging and extracting the focused region. These algorithms fall into two main categories: spatial-domain and transform-domain multi-focus image fusion. A spatial-domain algorithm works on the gray values of the source-image pixels, extracts the focused pixels or regions with some focus-evaluation measure, and assembles the fused image according to a fusion rule. Its advantages are simplicity, ease of implementation, low computational complexity, and a fused image that retains the original information of the source images; its disadvantages are sensitivity to noise and a tendency to produce blocking artifacts. A transform-domain algorithm transforms the source images, processes the transform coefficients according to a fusion rule, and obtains the fused image by the inverse transform. Its main drawbacks are a complex and time-consuming decomposition, the large storage required by the high-frequency coefficients, and the risk of information loss during fusion.
Moreover, changing a single transform coefficient of the fused image alters the gray values of the whole image, so enhancing the attributes of some regions can introduce unwanted artificial traces. The commonly used pixel-level multi-focus image fusion algorithms include the following:
(1) The Laplacian Pyramid (LAP) method. The source images are decomposed into Laplacian pyramids, the high- and low-frequency coefficients are fused under a suitable rule, and the fused pyramid coefficients are inverse-transformed to obtain the fused image. The method has good time-frequency locality and works well, but the data between decomposition levels are redundant and the correlation of data across levels is indeterminate. Its ability to extract detail is poor, and the loss of high-frequency information during decomposition is severe, which directly degrades the quality of the fused image.
(2) The Discrete Wavelet Transform (DWT) method. The source images are wavelet-decomposed, the high- and low-frequency coefficients are fused under a suitable rule, and the fused wavelet coefficients are inverse-transformed to obtain the fused image. The method has good time-frequency locality and achieves good results, but the two-dimensional wavelet basis is built from one-dimensional bases by tensor product: it is optimal only for point singularities in an image and cannot sparsely represent line and surface singularities. In addition, the DWT is a downsampling transform and lacks shift invariance, so information is easily lost during fusion and the fused image is prone to distortion.
(3) The Non-Subsampled Contourlet Transform (NSCT) method. The source images are NSCT-decomposed, the high- and low-frequency coefficients are fused under a suitable rule, and the fused coefficients are inverse-transformed to obtain the fused image. The method achieves a good fusion effect, but it runs slowly and its decomposition coefficients require a large amount of storage.
(4) The Principal Component Analysis (PCA) method. Each source image is reshaped into a column vector in row- or column-major order, the covariance matrix is computed and its eigenvectors are found, the eigenvector of the first principal component determines the fusion weights of the source images, and the images are fused by weighted averaging. When the source images share common features the method fuses them well; when the features differ strongly, false information is easily introduced into the fused image and the result is distorted. The method is computationally simple and fast, but the gray value of a single pixel cannot represent the focus characteristics of an image region, so the fused image suffers from blurred contours and low contrast.
(5) The Spatial Frequency (SF) method. The source images are partitioned into blocks, the SF of each block is computed, the SF values of corresponding blocks are compared, and the blocks with the larger SF are assembled into the fused image. The method is simple and easy to implement, but the block size is hard to choose adaptively: blocks that are too large include out-of-focus pixels, lowering fusion quality and contrast and producing blocking artifacts; blocks that are too small limit the block's ability to represent region sharpness, so blocks are easily mis-selected, consistency between neighboring blocks is poor, and visible detail discontinuities appear at their boundaries. Moreover, the focus characteristics of an image block are hard to describe accurately, and how well the local features of a block describe its focus directly affects the accuracy of block selection and the quality of the fused image.
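The SF measure of method (5) has a simple closed form: the row and column frequencies are the root-mean-square of the horizontal and vertical first differences. A minimal sketch in Python (NumPy) follows; the block size and the tie-breaking choice are illustrative assumptions, not values taken from this patent:

```python
import numpy as np

def spatial_frequency(block: np.ndarray) -> float:
    """Spatial frequency of an image block: sqrt(RF^2 + CF^2), where
    RF/CF are the RMS of horizontal/vertical first differences."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(b, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(b, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def fuse_blocks(a: np.ndarray, b: np.ndarray, size: int = 8) -> np.ndarray:
    """Block-wise SF fusion: for each size x size block, keep the block
    with the larger spatial frequency (assumed to be the sharper one)."""
    out = a.astype(np.float64).copy()
    h, w = a.shape
    for i in range(0, h, size):
        for j in range(0, w, size):
            pa = a[i:i + size, j:j + size]
            pb = b[i:i + size, j:j + size]
            if spatial_frequency(pb) > spatial_frequency(pa):
                out[i:i + size, j:j + size] = pb
    return out
```

A fully defocused (constant) block has SF 0, so any block containing structure wins the comparison; the blocking artifacts the text describes arise exactly because whole blocks are swapped in or out.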
(6) The Convolutional Sparse Representation (CSR) method. The source images are decomposed by CSR into base and detail layers, the base layers and detail layers are fused separately, and the fused base and detail layers are superimposed to obtain the fused image. The method does not rely directly on the focus characteristics of the source images but judges the focused regions through the salient features of the base and detail layers, and it is robust to noise.
(7) The Cartoon-Texture Decomposition (CTD) method. Each multi-focus source image is decomposed into a cartoon component and a texture component, the cartoon components and the texture components are fused separately, and the fused components are combined into the fused image. Because the fusion rule is designed around the focus characteristics of the cartoon and texture components rather than those of the source image directly, the method is robust to noise and scratch damage.
(8) The Guided-Filtering Fusion (GFF) method. A guided image filter decomposes each image into a base layer containing large-scale intensity variation and a detail layer containing small-scale detail; fusion weight maps are then built from the saliency and spatial consistency of the layers, the base and detail layers of the source images are fused under these weight maps, and the fused layers are superimposed to obtain the final fused image.
These eight methods are the comparatively common multi-focus image fusion approaches, but each has shortcomings. The wavelet transform (DWT) cannot fully exploit the geometric structure of image data or represent images optimally and sparsely, and it easily causes shift and information-loss artifacts in the fused image. The non-subsampled contourlet transform (NSCT) has a complex decomposition and slow running speed, and its coefficients require a large amount of storage. The principal component analysis (PCA) method tends to lower the contrast of the fused image and degrade its quality. Convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF) are newer methods proposed in recent years that achieve good fusion: they perform edge-preserving, shift-invariant operations based on a local model with high computational efficiency, and their iterative frameworks can remove small details near edges while recovering large-scale edges. The first four common methods, by contrast, suffer from various defects, and their speed and fusion quality are difficult to reconcile, which limits their application and popularization.
In summary, the problems of the prior art are as follows:
(1) The traditional spatial-domain methods rely mainly on region partitioning. If the regions are too large, in-focus and out-of-focus areas fall into the same region and the quality of the fused image degrades; if they are too small, the features of a sub-region cannot fully reflect those of the region, pixels in the focused area are easily misjudged and mis-selected, consistency between neighboring regions is poor, and visible detail discontinuities and blocking artifacts appear at the boundaries. (2) The traditional multi-scale-decomposition fusion methods always treat the whole multi-focus source image as a single entity, so detail extraction is incomplete: detail such as the edge texture of the source image is not well represented in the fused image, which compromises how completely the fused image describes the latent information of the source images and in turn degrades its quality.
Disclosure of Invention
To address the problems of the prior art, the invention provides an FGF-based multi-focus image fusion method and system that effectively eliminate blocking artifacts, extend the depth of field of an optical imaging system, and greatly improve the subjective and objective quality of the fused image. The method overcomes the inaccurate judgment of focused regions, the failure to extract the edge-texture information of the source images effectively, the incomplete representation of detail features in the fused image, the loss of partial detail, blocking artifacts, and reduced contrast that afflict multi-focus image fusion.
(1) Smooth each source image with a mean filter and decompose it into a base layer and a detail layer; (2) filter the source image with a Laplacian filter followed by a Gaussian low-pass filter to obtain its saliency map; (3) obtain a weight map for each source image by comparing the corresponding pixels of the saliency maps; (4) taking each source image as the guidance image, decompose and optimize its weight map with the Fast Guided Filter (FGF) to obtain an optimized base-layer weight map and an optimized detail-layer weight map; (5) fuse the corresponding pixels of the base layers and of the detail layers under the fusion rule, using the optimized weight maps; (6) combine the fused base layer and detail layer to obtain the fused image.
Concretely, the method first decomposes each source image into a base layer and a detail layer by mean filtering; performs saliency detection on the source image with Laplacian filtering and Gaussian low-pass filtering to obtain its saliency map; obtains a weight map for each source image by comparing the corresponding saliency-map pixels; taking each source image as the guidance image, decomposes and optimizes the weight map with FGF to obtain the optimized base-layer and detail-layer weight maps; then fuses the corresponding pixels of the base layers and of the detail layers according to the fusion rule based on the decision matrices; and finally combines the fused base layer and detail layer into the fused image.
Further, the FGF-based multi-focus image fusion method fuses the registered multi-focus images I_1 and I_2, where I_1 and I_2 are both grayscale images and I_1, I_2 ∈ R^(M×N), the space of images of size M×N, with M and N positive integers. The method specifically includes:
(1) Smooth the multi-focus images I_1 and I_2 with the mean filter AF to remove the small structures in the source images, obtaining the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), where (B_1, B_2) = AF(I_1, I_2) and (D_1, D_2) = (I_1, I_2) − (B_1, B_2).
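Step (1) can be sketched in Python with NumPy/SciPy as follows; the window size of the mean filter is an assumed parameter, since the patent does not fix it:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(img: np.ndarray, size: int = 31):
    """Mean-filter (box) decomposition: the base layer B = AF(I) keeps
    large-scale intensity variation; the detail layer D = I - B keeps
    the small structures removed by the smoothing."""
    img = img.astype(np.float64)
    base = uniform_filter(img, size=size, mode="nearest")
    detail = img - base
    return base, detail
```

By construction B + D reproduces I exactly, so no information is lost by the decomposition itself.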
(2) Filter the source images with the Laplacian filter LF to obtain the high-pass images H_1 and H_2, then apply the Gaussian low-pass filter GLF to H_1 and H_2 to obtain the source-image saliency maps S_1 and S_2, where (H_1, H_2) = LF(I_1, I_2) and (S_1, S_2) = GLF(H_1, H_2).
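Step (2) can be sketched as below. Taking the absolute value of the high-pass image before smoothing is an assumption borrowed from the guided-filtering fusion literature (the patent text does not state it), and the Gaussian sigma is likewise an assumed parameter:

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def saliency(img: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Saliency map S = GLF(LF(I)): a Laplacian high-pass picks up
    local contrast, and a Gaussian low-pass spreads it spatially so
    that pixels near sharp structure also score as salient."""
    h = laplace(img.astype(np.float64))       # LF: high-pass image H
    return gaussian_filter(np.abs(h), sigma)  # GLF: low-pass of |H|
```

In-focus regions have strong local contrast and therefore high saliency; defocused regions score low, which is what the pixel comparison in step (3) exploits.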
(3) Compare the corresponding saliency-map pixels S_1(i, j) and S_2(i, j) of the source images I_1 and I_2 to construct the weight matrices P_1 and P_2:

P_1(i, j) = 1 if S_1(i, j) ≥ S_2(i, j), and 0 otherwise;
P_2(i, j) = 1 − P_1(i, j);

where S_k(i, j) is pixel (i, j) of the saliency map of source image I_k, P_k(i, j) is element (i, j) of the weight matrix of source image I_k, and i = 1, 2, 3, …, M; j = 1, 2, 3, …, N.
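The per-pixel comparison of step (3) is a one-liner; breaking ties in favour of the first source image is an arbitrary choice in this sketch:

```python
import numpy as np

def weight_maps(s1: np.ndarray, s2: np.ndarray):
    """Binary weight maps from a per-pixel saliency comparison:
    P1(i,j) = 1 where source 1 is at least as salient, else 0,
    and P2 = 1 - P1, so the maps partition the image plane."""
    p1 = (s1 >= s2).astype(np.float64)
    return p1, 1.0 - p1
```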
(4) Taking the source images I_1 and I_2 as the guidance images, decompose and optimize the weight matrices P_1 and P_2 with FGF to obtain the optimized weight matrices W_1^B, W_2^B, W_1^D and W_2^D, where (W_1^B, W_1^D) = FGF(P_1, I_1) and (W_2^B, W_2^D) = FGF(P_2, I_2).
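Step (4) refines each binary weight map with a guided filter that uses the corresponding source image as guidance. A minimal sketch of a plain guided filter follows; the fast variant (FGF) computes the same linear coefficients a and b on downsampled images and upsamples them before the final step. The window radii and regularization values, and the convention of producing the base-/detail-layer weights by two filterings with different (r, eps), follow the guided-filtering fusion literature and are assumptions, not parameters fixed by the patent:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(p: np.ndarray, I: np.ndarray, r: int = 7,
                  eps: float = 1e-3) -> np.ndarray:
    """Guided filter: locally fit q = a*I + b inside each (2r+1)^2
    window by linear regression, then average the coefficients so the
    output follows the edges of the guidance image I."""
    I = I.astype(np.float64)
    p = p.astype(np.float64)
    win = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=win, mode="nearest")
    m_I, m_p = mean(I), mean(p)
    var_I = mean(I * I) - m_I * m_I
    cov_Ip = mean(I * p) - m_I * m_p
    a = cov_Ip / (var_I + eps)
    b = m_p - a * m_I
    return mean(a) * I + mean(b)

def fgf_weights(p: np.ndarray, I: np.ndarray,
                r_base: int = 15, eps_base: float = 0.3,
                r_detail: int = 7, eps_detail: float = 1e-6):
    """(W^B, W^D) = FGF(P, I): two filterings of the same weight map
    with different (r, eps) give the base-layer and detail-layer
    weights. The parameter pairs here are illustrative."""
    wb = guided_filter(p, I, r_base, eps_base)
    wd = guided_filter(p, I, r_detail, eps_detail)
    return wb, wd
```

A production implementation would subsample I and P (e.g. by a factor of 4) before computing a and b, which is the speed-up that gives the fast guided filter its name.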
(5) From the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), use the optimized weight matrices W_1^B, W_2^B, W_1^D and W_2^D to construct the fused base layer F^B and the fused detail layer F^D, where F^B = W_1^B B_1 + W_2^B B_2 and F^D = W_1^D D_1 + W_2^D D_2.
(6) Construct the fused image F, obtaining the fused grayscale image, where F = F^B + F^D.
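Steps (5) and (6) are element-wise weighted sums followed by recombination, and can be sketched as:

```python
import numpy as np

def fuse_layers(b1, b2, d1, d2, wb1, wb2, wd1, wd2):
    """Steps (5)-(6): F^B = W1^B*B1 + W2^B*B2,
    F^D = W1^D*D1 + W2^D*D2, and F = F^B + F^D."""
    fb = wb1 * b1 + wb2 * b2  # fused base layer
    fd = wd1 * d1 + wd2 * d2  # fused detail layer
    return fb + fd            # fused grayscale image
```

When the weights of one source are 1 everywhere and the other's are 0, the result reduces to that source image, since its base and detail layers sum back to the original.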
Further, the feature matrices constructed in step (4) may be processed by morphological erosion and dilation operations, and the fused image constructed from the processed matrices.
Another objective of the present invention is to provide a multi-focus image fusion system based on FGF.
Another objective of the present invention is to provide a smart city multi-focus image fusion system using the FGF-based multi-focus image fusion method.
Another objective of the present invention is to provide a medical imaging multi-focus image fusion system using the FGF-based multi-focus image fusion method.
Another objective of the present invention is to provide a safety monitoring multi-focus image fusion system using the FGF-based multi-focus image fusion method.
The invention has the advantages and positive effects that:
(1) The method first decomposes each source image into a base layer and a detail layer by mean filtering, then performs saliency detection with Laplacian high-pass and Gaussian low-pass filtering to obtain the saliency map of the source image; a weight map for each source image is obtained by comparing the corresponding saliency-map pixels. Taking the source image as the guidance image, the weight map is decomposed and optimized by FGF into an optimized base-layer weight map and an optimized detail-layer weight map; the source-image base layers and detail layers are fused under these weight maps, and the fused base and detail layers are then combined into the fused image of the source images. This two-stage fusion of the source images improves the accuracy of judging the focused-region characteristics, facilitates the extraction of targets in the sharp regions, transfers detail such as edge texture from the source images better, and effectively improves the subjective and objective quality of the fused image.
(2) The image fusion framework is flexible and easy to implement, and can be used for other types of image fusion tasks.
(3) Because the fusion algorithm smooths the source images with a mean filter, the influence of source-image noise on the quality of the fused image is effectively suppressed.
The image fusion framework is flexible; it judges the focused-region characteristics of the source images more accurately, extracts the target detail of the focused regions more precisely, represents image detail clearly, effectively eliminates blocking artifacts, and effectively improves the subjective and objective quality of the fused image.
Drawings
Fig. 1 is a flowchart of the FGF-based multi-focus image fusion method according to an embodiment of the present invention.
Fig. 2 is an effect diagram of a source image to be fused 'Disk' provided in embodiment 1 of the present invention.
Fig. 3 shows the fusion results of nine methods, namely Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), guided filtering (GFF), and the proposed method, on the multi-focus 'Disk' images of Figs. 3(a) and (b).
FIG. 4 is a 'Book' effect diagram of the image to be fused provided by embodiment 2 of the present invention;
Fig. 5 shows the fusion results of nine methods, namely Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD), guided filtering (GFF), and the proposed method, on the multi-focus 'Book' images of Figs. 4(a) and (b).
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the prior art, fusion algorithms in the multi-focus image fusion field judge the focused region of the source image inaccurately and extract detail information incompletely; detail such as the edge texture of the source image is not well represented in the fused image, and the fusion effect is poor.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the FGF-based multi-focus image fusion method provided by the embodiment of the present invention includes:
s101: firstly, mean filtering is utilized to decompose a source image into a base layer and a detail layer.
S102: Perform saliency detection on each source image with Laplacian high-pass and Gaussian low-pass filtering to obtain its saliency map, and obtain a weight map for each source image by comparing the corresponding pixels of the saliency maps.
S103: Taking each source image as the guidance image, decompose and optimize its weight map with FGF to obtain an optimized base-layer weight map and an optimized detail-layer weight map, and fuse the source-image base layers and detail layers with these weight maps respectively.
S104: and finally, combining the fused base layer and the detail layer to obtain a fused image.
The present invention is further described below with reference to specific schemes.
The FGF-based multi-focus image fusion method provided by the embodiment of the invention comprises the following specific processes:
Smooth the multi-focus images I_1 and I_2 with the mean filter AF to remove the small structures in the source images, obtaining the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), where (B_1, B_2) = AF(I_1, I_2) and (D_1, D_2) = (I_1, I_2) − (B_1, B_2);
Filter the source images with the Laplacian filter LF to obtain the high-pass images H_1 and H_2, and apply the Gaussian low-pass filter GLF to H_1 and H_2 to obtain the source-image saliency maps S_1 and S_2, where (H_1, H_2) = LF(I_1, I_2) and (S_1, S_2) = GLF(H_1, H_2). Then compare the corresponding saliency-map pixels S_1(i, j) and S_2(i, j) of the source images I_1 and I_2 to construct the weight matrices P_1 and P_2:

P_1(i, j) = 1 if S_1(i, j) ≥ S_2(i, j), and 0 otherwise;
P_2(i, j) = 1 − P_1(i, j);

where S_k(i, j) is pixel (i, j) of the saliency map of source image I_k, P_k(i, j) is element (i, j) of the weight matrix of source image I_k, and i = 1, 2, 3, …, M; j = 1, 2, 3, …, N.
Taking the source images I_1 and I_2 respectively as the guidance images, decompose and optimize the weight matrices P_1 and P_2 with FGF to obtain the optimized weight matrices W_1^B, W_2^B, W_1^D and W_2^D, where (W_1^B, W_1^D) = FGF(P_1, I_1) and (W_2^B, W_2^D) = FGF(P_2, I_2).
From the source-image base layers (B_1, B_2) and detail layers (D_1, D_2), use the optimized weight matrices W_1^B, W_2^B, W_1^D and W_2^D to construct the fused base layer F^B and the fused detail layer F^D, where F^B = W_1^B B_1 + W_2^B B_2 and F^D = W_1^D D_1 + W_2^D D_2.
Construct the fused image F, obtaining the fused grayscale image, where F = F^B + F^D.
The invention is further described below with reference to specific embodiments.
Fig. 2 is an effect diagram of a source image to be fused 'Disk' provided in embodiment 1 of the present invention.
Example 1
Following the scheme of the present invention, embodiment 1 fuses the two source images shown in Figs. 2(a) and (b); the result is shown in Fig. 3. For comparison, the same two source images were fused with eight other methods, namely Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF); the quality of the fused images of the different methods was evaluated, yielding the results shown in Table 1.
Table 1 multi focus image 'Disk' fusion image quality evaluation.
Example 2:
following the scheme of the present invention, the embodiment performs a fusion process on the two source images shown in fig. 4 (a) and (b), and the processing result is shown as a processing result in fig. 5.
For comparison, the two source images of Figs. 4(a) and (b) were also fused with the eight other methods, namely Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture decomposition (CTD) and guided filtering (GFF); the quality of the fused images of the different methods shown in Fig. 5 was evaluated, yielding the results shown in Table 2.
Table 2 multi-focus image 'Book' fusion image quality evaluation.
In Tables 1 and 2: Method denotes the fusion method; the eight comparison methods are Laplacian pyramid (LAP), wavelet transform (DWT), non-subsampled contourlet transform (NSCT), principal component analysis (PCA), spatial frequency (SF), convolutional sparse representation (CSR), cartoon-texture image decomposition (CTD) and guided filtering fusion (GFF). Running Time denotes the run time in seconds. MI denotes mutual information, an objective fusion-quality index based on the mutual information between the source images and the fused image. Q^(AB/F) denotes the total amount of edge information transferred from the source images to the fused image.
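By way of illustration only, the MI index described above can be computed from joint gray-level histograms. The following numpy sketch is a common formulation, not the exact computation used to produce the tables; the bin count and function names are our own assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information (in bits) between two gray images,
    estimated from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(src1, src2, fused, bins=64):
    """MI-based fusion index: information the fused image
    carries about each source image, summed."""
    return (mutual_information(src1, fused, bins)
            + mutual_information(src2, fused, bins))
```

A fused image that preserves more source content yields a higher `fusion_mi` value; an image shares maximal mutual information with itself.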
As can be seen from Figs. 3 and 5, the fused images of the frequency-domain methods, namely LAP, DWT and NSCT, all suffer from artifacts, blur and poor contrast. Among the spatial-domain methods, PCA yields the worst contrast in its fused image, the SF method produces blocking artifacts, while CSR, CTD and GFF achieve relatively good fusion quality but still show slight local blurring. The subjective visual quality of the fused multi-focus images 'Disk' (Fig. 3) and 'Book' (Fig. 5) obtained by the method of the present invention is clearly better than that of the other fusion methods.
The fused images show that the proposed method extracts the target edges and textures of the focused regions of the source images markedly better than the other methods: it transfers the target information of the focused regions into the fused image and preserves detail information such as edges and textures from the source images. The method effectively captures the detail of the focused regions, improves fusion quality, and therefore has good subjective quality.
As can be seen from Tables 1 and 2, the objective quality index MI of the fused image obtained by the proposed method is on average 1.5 higher than that of the fused images of the other methods, and its index Q^(AB/F) is on average 0.04 higher than the corresponding index of the other methods. This demonstrates that the fused images obtained by the method have good objective quality.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. An FGF-based multi-focus image fusion method, characterized in that the method comprises the following steps:
(1) Decomposing the source image with an average filter (AF) to obtain the base layer and the detail layer of the source image;
(2) Filtering the source image with a Laplacian filter (LF) and then a Gaussian low-pass filter (GLF) to obtain the saliency map of the source image;
(3) Constructing a weight map for each source image according to the pixel values of its saliency map; then, taking the source image as the guide image, decomposing and optimizing the weight map with the fast guided filter (FGF) to obtain an optimized base-layer weight map and an optimized detail-layer weight map;
(4) Based on the base-layer and detail-layer weight maps, fusing the corresponding pixels of the base layers and detail layers of the source images according to a given fusion rule;
(5) Merging the fused base layer and detail layer to obtain the fused image;
the FGF-based multi-focus image fusion method is used for carrying out multi-focus image I after registration 1 And I 2 Carrying out the fusion of 1 And I 2 Are all gray scale images, and I 1 ,I 2 ∈i M×N ,i M×N Is a space of size mxn, M and N being positive integers, specifically including:
(1) Smoothing the multi-focus images I1 and I2 with the average filter AF to remove small-scale structures from the source images I1 and I2, obtaining the source-image base layers (B1, B2) and the source-image detail layers (D1, D2), wherein: (B1, B2) = AF(I1, I2), (D1, D2) = (I1, I2) − (B1, B2);
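By way of non-limiting illustration, the two-scale decomposition of step (1) can be sketched with numpy; the function names and the filter radius are illustrative assumptions, not values fixed by the claims:

```python
import numpy as np

def average_filter(img, radius=3):
    """Mean (average) filter: normalized box kernel, edge-padded."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):                       # sliding-window sum
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_decompose(img, radius=3):
    """Base layer B = AF(I); detail layer D = I - B, so I = B + D exactly."""
    base = average_filter(img.astype(float), radius)
    return base, img.astype(float) - base
```

The decomposition is exactly invertible: adding the base and detail layers reconstructs the source image.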
(2) Filtering the source images with LF to obtain the high-pass filtered images H1 and H2 of the source images, then low-pass filtering H1 and H2 with GLF to obtain the source-image saliency maps S1 and S2, wherein: (H1, H2) = LF(I1, I2), (S1, S2) = GLF(H1, H2);
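A minimal numpy sketch of step (2) follows, assuming the common 3×3 Laplacian kernel and that the absolute high-pass response is smoothed (the kernel choice, the absolute value, and the Gaussian parameters are our own assumptions):

```python
import numpy as np

def conv2_same(img, kernel):
    """2-D 'same' convolution with edge padding (small kernels only)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                           dx:dx + img.shape[1]]
    return out

def saliency_map(img, sigma=5.0, radius=5):
    """S = GLF(|LF(I)|): Laplacian high-pass, then Gaussian smoothing."""
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    high = np.abs(conv2_same(img.astype(float), lap))
    ax = np.arange(-radius, radius + 1, dtype=float)
    g1 = np.exp(-ax**2 / (2 * sigma**2))
    gauss = np.outer(g1, g1)
    gauss /= gauss.sum()                      # normalized low-pass kernel
    return conv2_same(high, gauss)
```

In-focus regions carry more high-frequency energy, so they receive higher saliency than defocused or flat regions.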
(3) Constructing the weight matrices P1 and P2 corresponding to the source images by comparing the pixels S1(i, j) and S2(i, j) of the saliency maps of the source images I1 and I2, wherein:
P1(i, j) = 1 if S1(i, j) ≥ S2(i, j), and P1(i, j) = 0 otherwise;
P2(i, j) = 1 − P1(i, j);
S1(i, j) is pixel (i, j) of the saliency map of source image I1;
S2(i, j) is pixel (i, j) of the saliency map of source image I2;
P1(i, j) is element (i, j) of the weight matrix of source image I1;
P2(i, j) is element (i, j) of the weight matrix of source image I2;
i = 1, 2, 3, …, M; j = 1, 2, 3, …, N;
(4) Taking the source images I1 and I2 as guide images, decomposing and optimizing the weight matrices P1 and P2 with FGF to obtain the optimized weight matrices W1^B, W2^B, W1^D and W2^D, wherein: (W1^B, W1^D) = FGF(P1, I1), (W2^B, W2^D) = FGF(P2, I2);
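As a sketch of step (4), the following uses the plain guided filter of He et al. as a stand-in for the fast guided filter (FGF), and assumes the base-layer weight map is refined with coarse parameters and the detail-layer weight map with fine parameters; all radii, epsilon values, and function names are illustrative assumptions:

```python
import numpy as np

def box(img, r):
    """Normalized box filter of radius r, edge-padded."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def guided_filter(p, guide, r, eps):
    """Plain guided filter (He et al.): local linear model
    q = a*guide + b fitted in each window, then averaged."""
    mean_i = box(guide, r)
    mean_p = box(p, r)
    cov_ip = box(guide * p, r) - mean_i * mean_p
    var_i = box(guide * guide, r) - mean_i * mean_i
    a = cov_ip / (var_i + eps)
    b = mean_p - a * mean_i
    return box(a, r) * guide + box(b, r)

def optimize_weights(p, guide, r_base=8, eps_base=0.1,
                     r_detail=4, eps_detail=1e-3):
    """W^B and W^D: the same weight map refined twice,
    with coarse and fine filter parameters respectively."""
    wb = guided_filter(p.astype(float), guide.astype(float), r_base, eps_base)
    wd = guided_filter(p.astype(float), guide.astype(float), r_detail, eps_detail)
    return wb, wd
```

Because the source image guides the filtering, the refined weight maps align with object edges instead of the blocky boundaries of the binary maps.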
(5) Based on the source-image base layers (B1, B2) and detail layers (D1, D2), and according to the optimized weight matrices W1^B, W2^B, W1^D and W2^D, constructing the fused base layer F_B ∈ i^(M×N) and the fused detail layer F_D ∈ i^(M×N), wherein: F_B = W1^B · B1 + W2^B · B2, F_D = W1^D · D1 + W2^D · D2;
(6) Constructing the fused image F, F ∈ i^(M×N), to obtain the fused gray-scale image, wherein: F = F_B + F_D.
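Steps (5) and (6) are per-pixel weighted sums; an illustrative numpy sketch (the function name is our own):

```python
import numpy as np

def fuse(b1, b2, d1, d2, w1b, w2b, w1d, w2d):
    """F_B = W1^B*B1 + W2^B*B2; F_D = W1^D*D1 + W2^D*D2; F = F_B + F_D."""
    fb = w1b * b1 + w2b * b2   # fused base layer
    fd = w1d * d1 + w2d * d2   # fused detail layer
    return fb + fd             # fused gray-scale image
```

When all weight goes to one source, the fused image reduces to that source image exactly, since each source equals its base layer plus its detail layer.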
2. The FGF-based multi-focus image fusion method according to claim 1, characterized in that the weight matrices are subjected to the optimized decomposition process of step (4), and the base layers and detail layers of the source images are fused using the processed weight matrices, thereby constructing the fused image.
3. An FGF-based multi-focus image fusion system using the FGF-based multi-focus image fusion method according to claim 1.
4. A smart city multi-focus image fusion system using the FGF-based multi-focus image fusion method according to claim 1.
5. A medical imaging multi-focus image fusion system using the FGF-based multi-focus image fusion method of claim 1.
6. A safety monitoring multi-focus image fusion system using the FGF-based multi-focus image fusion method of claim 1.
CN201811194833.0A 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system Active CN109509163B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811194833.0A CN109509163B (en) 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system


Publications (2)

Publication Number Publication Date
CN109509163A CN109509163A (en) 2019-03-22
CN109509163B true CN109509163B (en) 2022-11-11

Family

ID=65746461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811194833.0A Active CN109509163B (en) 2018-09-28 2018-09-28 FGF-based multi-focus image fusion method and system

Country Status (1)

Country Link
CN (1) CN109509163B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110211081B (en) * 2019-05-24 2023-05-16 南昌航空大学 Multimode medical image fusion method based on image attribute and guided filtering
CN110648302B (en) * 2019-10-08 2022-04-12 太原科技大学 Light field full-focus image fusion method based on edge enhancement guide filtering
CN111223069B (en) * 2020-01-14 2023-06-02 天津工业大学 Image fusion method and system
CN111815549A (en) * 2020-07-09 2020-10-23 湖南大学 Night vision image colorization method based on guided filtering image fusion
CN112884690B (en) * 2021-02-26 2023-01-06 中国科学院西安光学精密机械研究所 Infrared and visible light image fusion method based on three-scale decomposition
CN117391985B (en) * 2023-12-11 2024-02-20 安徽数分智能科技有限公司 Multi-source data information fusion processing method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011051134A1 (en) * 2009-10-30 2011-05-05 Siemens Aktiengesellschaft A body fluid analyzing system and an imaging processing device and method for analyzing body fluids
CN103455991A (en) * 2013-08-22 2013-12-18 西北大学 Multi-focus image fusion method
CN104036479A (en) * 2013-11-11 2014-09-10 西北大学 Multi-focus image fusion method based on non-negative matrix factorization
CN107909560A (en) * 2017-09-22 2018-04-13 洛阳师范学院 A kind of multi-focus image fusing method and system based on SiR
CN108230282A (en) * 2017-11-24 2018-06-29 洛阳师范学院 A kind of multi-focus image fusing method and system based on AGF


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-focus image fusion algorithm combining guided filtering and the complex contourlet transform; Liu Shuaiqi et al.; Journal of Signal Processing; 2016-03-25 (No. 03); full text *
Infrared and visible image fusion combining guided filtering and convolutional sparse representation; Liu Xianhong et al.; Optics and Precision Engineering; 2018-05-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN109509163A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN109509163B (en) FGF-based multi-focus image fusion method and system
CN109509164B (en) Multi-sensor image fusion method and system based on GDGF
Yeh et al. Multi-scale deep residual learning-based single image haze removal via image decomposition
Liu et al. Wavelet-based dual-branch network for image demoiréing
CN111209952B (en) Underwater target detection method based on improved SSD and migration learning
Bhat et al. Multi-focus image fusion techniques: a survey
CN107909560A (en) A kind of multi-focus image fusing method and system based on SiR
CN111091503B (en) Image defocusing and blurring method based on deep learning
CN109636766B (en) Edge information enhancement-based polarization difference and light intensity image multi-scale fusion method
Saha et al. Mutual spectral residual approach for multifocus image fusion
Hassan et al. Real-time image dehazing by superpixels segmentation and guidance filter
Bhatnagar et al. A novel image fusion framework for night-vision navigation and surveillance
Yue et al. CID: Combined image denoising in spatial and frequency domains using Web images
Zhao et al. A deep cascade of neural networks for image inpainting, deblurring and denoising
Ma et al. Curvelet-based snake for multiscale detection and tracking of geophysical fluids
Zhao et al. Infrared and visible image fusion algorithm based on saliency detection and adaptive double-channel spiking cortical model
Ding et al. U 2 D 2 Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement
CN110751680A (en) Image processing method with fast alignment algorithm
Wang et al. Video deraining via nonlocal low-rank regularization
Pujar et al. Medical image segmentation based on vigorous smoothing and edge detection ideology
CN113421210A (en) Surface point cloud reconstruction method based on binocular stereo vision
Khmag Natural digital image mixed noise removal using regularization Perona–Malik model and pulse coupled neural networks
Wang et al. A new method of denoising crop image based on improved SVD in wavelet domain
Yin et al. Combined window filtering and its applications
Zin et al. Local image denoising using RAISR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant