CN113298147A - Image fusion method and device based on regional energy and intuitionistic fuzzy set - Google Patents
- Publication number
- CN113298147A (application number CN202110568871.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- fusion
- images
- low
- formula
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Abstract
The invention discloses an image fusion method and device based on regional energy and an intuitionistic fuzzy set. The infrared and visible light images are first decomposed into a low-frequency sub-image and a series of high-frequency sub-images using the non-subsampled shearlet transform (NSST). The high-frequency sub-images are fused with a wavelet-transform-based fusion algorithm. The low-frequency sub-images are fused with a rule combining regional energy and an intuitionistic fuzzy set: the source images are first fused once using regional energy, then fused a second time using an intuitionistic-fuzzy-set fusion algorithm, yielding the fused low-frequency sub-image. The fused image is reconstructed using the inverse NSST. The experimental results are analyzed qualitatively and quantitatively through subjective and objective evaluation indexes; the method achieves higher contrast, richer detail information and stronger local features.
Description
Technical Field
The invention relates to the field of image processing, in particular to an image fusion method based on regional energy and an intuitionistic fuzzy set.
Background
Infrared and visible image fusion aims at synthesizing multiple images into one comprehensive image, and can be applied to face recognition [1], target detection [2], image enhancement [3], the medical field [4], remote sensing [5], and so on. An infrared sensor captures the heat radiated from objects, but the amount of information in an infrared image is small. In contrast, a visible image provides rich background and detail information. Effectively fusing the two images can therefore provide more useful information for subsequent research [6][7].
In recent years, multi-scale transforms have been widely used as decomposition tools for image fusion [8]. Common decomposition methods include the wavelet transform (WT) [9], the Laplacian pyramid (LP) [10], the curvelet transform (CURV) [11], the contourlet transform (CT) [12], the non-subsampled contourlet transform (NSCT) [13], the shearlet transform (ST) [14], and others [15][16]. Compared with these methods, the NSST [17] offers multi-directionality, shift invariance and other advantages, so the NSST is selected here as the multi-scale transform tool.
The essence of image fusion is to reasonably select valuable pixels in the multi-source images and fuse them into one image. Image fusion can be regarded as a many-to-one image-information mapping process with strong uncertainty, and many researchers use fuzzy theory to address this uncertainty. Tirupal proposed a multimodal medical image fusion method based on the Sugeno intuitionistic fuzzy set (SIFS) [18]; the images obtained by that algorithm clearly distinguish the edges of soft tissue and blood vessels, which helps case diagnosis. Seng proposed a through-the-wall radar image fusion method based on probabilistic fuzzy logic [19]; experiments show the fused images have higher contrast, improving the target detection rate. The invention provides an NSST-domain image fusion method and system based on regional energy and an intuitionistic fuzzy set: the low-frequency components are fused so that contrast is improved and infrared target information is enhanced, while fusion of the high-frequency components uses a rule based on the wavelet transform. The fusion result is finally reconstructed with the inverse NSST. Experiments show that images fused by the method have stable quality, distinct targets, higher contrast, and rich detail information and local features, and are generally superior to existing fusion methods.
Disclosure of Invention
1. Objects of the invention
The invention provides an image fusion method and device based on regional energy and an intuitionistic fuzzy set, aiming at improving the accuracy of the information extracted from the low-frequency and high-frequency sub-images.
2. The technical scheme adopted by the invention
The invention provides an image fusion method based on regional energy and an intuitionistic fuzzy set, which comprises the following steps:
decomposing the infrared and visible light images into a low-frequency sub-image and a series of high-frequency sub-images by using the non-subsampled shearlet transform (NSST);
fusing the series of high-frequency sub-images by adopting a fusion algorithm based on wavelet transformation;
fusing low-frequency sub-images by adopting a rule of combining regional energy and an intuitionistic fuzzy set, firstly carrying out primary fusion on a source image by adopting the regional energy, then carrying out secondary fusion by adopting a fusion algorithm of the intuitionistic fuzzy set, and finally obtaining fused low-frequency sub-images;
the fused image is reconstructed using the NSST inverse transform.
Decomposing the infrared and visible light images into low-frequency and high-frequency sub-images uses the non-subsampled shearlet transform (NSST).

The affine system A_AB(ψ) is

A_AB(ψ) = { ψ_{j,l,k}(x) = |det A|^{j/2} ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² }  (1)

where ψ ∈ L²(R²); A and B are 2 × 2 invertible matrices, the matrix A^j being an anisotropic dilation matrix and the matrix B^l a shear matrix with |det B| = 1; j is the decomposition scale, l is the direction parameter (j, l ∈ Z), and k ∈ Z² is the shift parameter.

When A = [a, 0; 0, √a] and B = [1, s; 0, 1], where a is the scale variable and s the shear variable (typically a = 4 and s = 1), ψ_{a,s,t}(x) is a shearlet, and the resulting system is the shearlet system of equation (2).

The source image f is decomposed by the non-subsampled pyramid (NSP) into a low-pass image and a band-pass image; a shearlet filter bank then decomposes the band-pass image to achieve directional localization, producing directional sub-band images. At each level, the NSP iteratively decomposes the low-frequency component obtained at the previous level, yielding a low-frequency sub-band image and a group of high-frequency directional sub-band images.
Preferably, the fusion step is performed based on the regional energy and the intuitionistic fuzzy set. Let Ã be a fuzzy subset of U; its membership function μ_Ã is defined as a mapping

μ_Ã : U → [0, 1], u ↦ μ_Ã(u)  (5)

As equation (5) shows, fuzzy set theory is built on the membership function, which therefore plays a very important role in fuzzy mathematics. An image can be viewed as a set of fuzzy pixels, as in equation (6):

I = { (x_ij, μ_ij) : i = 1, …, M, j = 1, …, N }  (6)

where x_ij is the gray value of pixel (i, j) and μ_ij ∈ [0, 1] is its membership degree, computed from the membership function; {μ_ij} denotes the fuzzy feature plane. Different membership functions give different μ_ij, so μ_ij can be adjusted to obtain different fuzzy feature planes.
Preferably, the low-frequency sub-images are fused as follows. IR_L and VIS_L are the low-frequency sub-images of the infrared and visible images, respectively, and are fused according to regional energy, as in equation (7):

E_s(m, n) = Σ_{(i,j)∈Ω(m,n)} W(i, j) · A_s(i, j)²  (7)

where E_s(m, n) is the energy of the region centered at point (m, n), and s denotes the infrared or visible component; Ω(m, n) is a neighborhood window centered at (m, n); A_s(i, j) is the coefficient at position (i, j) in the neighborhood; and W(i, j) is the value of the mask window at position (i, j). A mask window whose central value differs little from its neighborhood values reflects regional contrast better; the window function W is set to size 3 × 3 and is given by equation (8).

After the regional energies of the infrared and visible low-frequency sub-images are obtained, the low-frequency sub-images are fused by weighted averaging, with weights as in equations (9) and (10):

w1 = E_IR(m, n) / (E_IR(m, n) + E_VIS(m, n))  (9)

w2 = 1 − w1  (10)

f = w1 × IR_L + w2 × VIS_L  (11)

where E_IR and E_VIS are the regional energies of IR_L and VIS_L, respectively. The initial fusion image f contains information from both source images, can be regarded as a transition image between them, and serves to adjust the brightness of the infrared background relative to the visible image;
expressing the membership degree of each coefficient with a Gaussian membership function, and finally fusing the defuzzified coefficient membership degrees of the low-frequency sub-images. First, the membership function μ_IR and the non-membership function ν_IR of the IR_L coefficients are defined as in equations (12) and (13), where x̄_IR denotes the mean of the infrared low-frequency coefficients IR_L and σ_IR their standard deviation; k1 and k2 are Gaussian-function adjustment parameters, set to 0.8 and 1.2 respectively. The hesitation function π_IR is determined from the membership function μ_IR and the non-membership function ν_IR. The intuitionistic-fuzzy-set membership function is defuzzified by the difference-correction method to obtain the fuzzy-set membership functions of equations (14) and (15).

Similarly, the intuitionistic-fuzzy-set membership function, non-membership function, hesitation function and defuzzified membership function of VIS_L are defined according to equations (12) to (15).
Finally, the fused low-frequency sub-image is defined as in equation (16): where the infrared membership degree is larger than the visible membership degree, the visible low-frequency coefficient VIS_L(x, y) is selected as the fusion coefficient to prevent over-saturation of background areas; the membership degree at a point where an infrared target lies is usually smaller than that of the corresponding visible point, so there the coefficient of the transition image f(x, y) is selected as the fusion coefficient, preserving the infrared target. This finally yields the low-frequency fusion coefficient F_L(x, y);
Preferably, the wavelet-transform-based fusion algorithm fuses the high-frequency sub-images. Wavelet decomposition is applied again to each high-frequency component to obtain an approximation layer and detail layers; the Haar wavelet is selected as the wavelet basis used in the decomposition, and the decomposition level is set to 1.

The approximation layers are fused with a weighted-average rule, as in equation (17):

C_F(i, j) = (C_IR(i, j) + C_VIS(i, j)) / 2  (17)

where C_IR(i, j) and C_VIS(i, j) are the approximation-layer coefficients at point (i, j) of the infrared and visible high-frequency components, and C_F is the fused approximation layer at each level. The detail layers are fused with the maximum-absolute-value rule, as in equation (18):

D_F(i, j) = D_IR(i, j) if |D_IR(i, j)| ≥ |D_VIS(i, j)|, otherwise D_VIS(i, j)  (18)

where D_IR(i, j) and D_VIS(i, j) are the detail-layer coefficients at point (i, j), and D_F is the fused detail layer at each level. The fused high-frequency sub-image F_H is reconstructed with the inverse wavelet transform. Finally, F_L and F_H are passed through the inverse NSST to obtain the fused image F.
The invention further provides an image fusion device based on regional energy and an intuitionistic fuzzy set, comprising a memory storing a computer program and a processor, characterized in that the processor, when executing the computer program, carries out the method steps described above.
3. Advantageous effects adopted by the present invention
The performance of the proposed NSST-based infrared and visible fusion algorithm using regional energy and an intuitionistic fuzzy set was tested on six pairs of infrared and visible images. Observing the subjective effect of the fused images, the proposed algorithm improves the contrast of the fused image and enhances infrared target information, making the result agree better with human visual perception; in terms of objective evaluation indexes, the algorithm obtains better and more stable index values than other algorithms. Qualitative and quantitative analysis shows that the algorithm is reliable and effective, and can be applied to target detection, medical diagnosis, target tracking and other fields under weak illumination.
Drawings
FIG. 1 is a NSST breakdown structure;
FIG. 2 is an algorithm fusion framework of the present invention;
FIG. 3 shows the fusion results of five sets of infrared and visible images: (a) 2_Men in front of house; (b) Bunker;
FIG. 4 shows the fusion results of five sets of infrared and visible images: (a) 2_Men in front of house; (b) sandpath;
FIG. 5 compares the objective evaluation indexes obtained by the present invention with those of M1, M2, M3, M4 and M5 on five sets of fused infrared and visible images: (a) 2_Men in front of house; (b) Bunker; (c) sandpath; (d) Nato_camp_sequence; (e) Kaptein_1123;
FIG. 6 is a graph comparing the results obtained by the present invention with those obtained by M6, M7, M8 and M9.
Detailed Description
The technical solutions in the examples of the present invention are clearly and completely described below with reference to the drawings in the examples of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
NSST principle
Compared with other multi-scale decomposition tools, the NSST expresses image information more accurately, has shift invariance, and overcomes the pseudo-Gibbs effect in image reconstruction; many studies show that fused images obtained by NSST decomposition and reconstruction suit the human visual system better. For n = 2, the affine system A_AB(ψ) is defined as

A_AB(ψ) = { ψ_{j,l,k}(x) = |det A|^{j/2} ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² }  (1)

where ψ ∈ L²(R²); A and B are 2 × 2 invertible matrices, the matrix A^j being an anisotropic dilation matrix and the matrix B^l a shear matrix with |det B| = 1; j is the decomposition scale, l is the direction parameter (j, l ∈ Z), and k ∈ Z² is the shift parameter.

When A = [a, 0; 0, √a] and B = [1, s; 0, 1], a is the scale variable and s the shear variable, usually with a = 4 and s = 1; ψ_{a,s,t}(x) is a shearlet, and the resulting system is the shearlet system of equation (2).

FIG. 1 shows a two-level NSST decomposition structure. The source image f is decomposed by the non-subsampled pyramid (NSP) into a low-pass image and a band-pass image; a shearlet filter bank then decomposes the band-pass image to achieve directional localization, producing directional sub-band images. At each level, the NSP iteratively decomposes the low-frequency component obtained at the previous level, yielding a low-frequency sub-band image and a set of high-frequency directional sub-band images.
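The iterative NSP splitting described above can be sketched in NumPy. This is a simplified stand-in, not the patent's actual filter bank: the hypothetical `nsp_decompose` below uses a separable binomial low-pass filter in place of the NSST's "maxflat" pyramid filter, and omits the directional shearlet filtering; it only illustrates the shift-invariant low/band-pass split with perfect reconstruction.

```python
import numpy as np

def _lowpass(img):
    """Separable 5-tap binomial low-pass filter with reflected borders
    (a stand-in for the NSP pyramid filter; no subsampling)."""
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    padded = np.pad(img, 2, mode="reflect")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def nsp_decompose(img, levels=2):
    """Iteratively split `img` into one low-frequency image and a list of
    band-pass images, all the same size as the input (shift invariance)."""
    bands = []
    low = img.astype(float)
    for _ in range(levels):
        smoothed = _lowpass(low)
        bands.append(low - smoothed)   # band-pass detail at this level
        low = smoothed                 # recurse on the low-frequency part
    return low, bands

def nsp_reconstruct(low, bands):
    """Perfect reconstruction: add the band-pass images back."""
    return low + sum(bands)
```

Because no subsampling is performed, every sub-image keeps the source resolution, mirroring the translation invariance the patent attributes to the NSST.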
Theory of fuzzy sets
In fuzzy set theory, the membership degree quantifies uncertain information over the interval [0, 1], where 0 means an element does not belong to the set at all and 1 means it belongs completely. Fuzzy set theory is good at expressing qualitative knowledge with unclear boundaries and plays an important role in eliminating fuzziness in images. Many studies show that image fusion methods based on fuzzy set theory outperform other conventional algorithm models, and that composite methods combining fuzzy sets with other representations select effective information in the source images more accurately.

In ordinary set theory, an element either belongs or does not belong to a set. Let U be a set and u ∈ U; for a subset A ⊆ U, the characteristic function χ_A of A is defined as

χ_A(u) = 1 if u ∈ A, and χ_A(u) = 0 if u ∉ A  (4)

In fuzzy set theory, the membership function evolves from the characteristic function of ordinary set theory. Let Ã be a fuzzy subset of U; its membership function μ_Ã is defined as a mapping

μ_Ã : U → [0, 1], u ↦ μ_Ã(u)  (5)

As equation (5) shows, fuzzy set theory is built on the membership function, which therefore plays a very important role in fuzzy mathematics. An image can be viewed as a set of fuzzy pixels, as in equation (6):

I = { (x_ij, μ_ij) : i = 1, …, M, j = 1, …, N }  (6)

where x_ij is the gray value of pixel (i, j) and μ_ij ∈ [0, 1] is its membership degree, computed from the membership function; {μ_ij} denotes the fuzzy feature plane. Different membership functions give different μ_ij, so μ_ij can be adjusted to obtain different fuzzy feature planes.
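The fuzzy feature plane {μ_ij} of equation (6) can be illustrated with a short sketch. The Gaussian membership form follows the low-frequency rule described below, but centering it on the image mean and the specific width are assumptions for illustration only:

```python
import numpy as np

def fuzzy_feature_plane(gray, k=1.0):
    """Map each gray value x_ij to a membership degree mu_ij in [0, 1]
    using a Gaussian membership function centered on the image mean.
    `k` scales the width (an assumed illustration parameter)."""
    x = gray.astype(float)
    mean, std = x.mean(), x.std()
    mu = np.exp(-((x - mean) ** 2) / (2.0 * (k * std) ** 2))
    return mu
```

Changing the membership function (e.g. a triangular or S-shaped one) would produce a different plane {μ_ij} for the same image, which is exactly the adjustment described above.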
The invention provides two rules for fusing low-frequency sub-images and high-frequency sub-images respectively.
As shown in fig. 2, the overall framework of the algorithm of the present invention uses NSST and NSST inverse transformation as decomposition and reconstruction tools, respectively, and the fusion process is mainly divided into three steps: firstly, decomposing an image into sub-images of different levels; secondly, fusing the images of the corresponding levels according to a specific fusion rule; and finally, performing inverse transformation on each fused image, and reconstructing to obtain a fused image.
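The three-step framework above can be summarized as a code skeleton. The placeholder operations below (a box-blur split, averaging, and max-absolute selection) are not the NSST or the patent's fusion rules; they only illustrate the decompose-fuse-reconstruct data flow:

```python
import numpy as np

def decompose(img):
    """Placeholder multi-scale split: 3x3 box blur as the low-frequency
    part, the residual as the high-frequency part."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    low = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return low, img - low

def fuse(ir, vis):
    """Decompose both sources, fuse each level, then reconstruct."""
    ir_low, ir_high = decompose(ir)
    vis_low, vis_high = decompose(vis)
    fused_low = 0.5 * (ir_low + vis_low)                  # stand-in low rule
    fused_high = np.where(np.abs(ir_high) >= np.abs(vis_high),
                          ir_high, vis_high)              # stand-in high rule
    return fused_low + fused_high                         # stand-in reconstruction
```

Swapping the placeholder stages for the NSST, the regional-energy/intuitionistic-fuzzy low rule, and the wavelet high rule recovers the patent's pipeline.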
Low frequency fusion rule
The low-frequency components of the visible image contain detail and spectral information, while the energy information in the low-frequency components of the infrared image reflects the general outline of the image content; salient objects in an infrared image are usually located in regions of higher energy [24]. To enhance target information and improve image contrast, the invention provides a fusion rule based on regional energy and an intuitionistic fuzzy set.
IR_L and VIS_L are the low-frequency sub-images of the infrared and visible images, respectively, and are fused according to regional energy, as in equation (7):

E_s(m, n) = Σ_{(i,j)∈Ω(m,n)} W(i, j) · A_s(i, j)²  (7)

where E_s(m, n) is the energy of the region centered at point (m, n), and s denotes the infrared or visible component; Ω(m, n) is a neighborhood window centered at (m, n); A_s(i, j) is the coefficient at position (i, j) in the neighborhood; and W(i, j) is the value of the mask window at position (i, j). A mask window whose central value differs little from its neighborhood values reflects regional contrast better; the window function W is set to size 3 × 3 and is given by equation (8).

After the regional energies of the infrared and visible low-frequency sub-images are obtained, the low-frequency sub-images are fused by weighted averaging, with weights as in equations (9) and (10):

w1 = E_IR(m, n) / (E_IR(m, n) + E_VIS(m, n))  (9)

w2 = 1 − w1  (10)

f = w1 × IR_L + w2 × VIS_L  (11)

where E_IR and E_VIS are the regional energies of IR_L and VIS_L, respectively. The initial fusion image f contains information from both source images, can be regarded as a transition image between them, and serves to adjust the brightness of the infrared background relative to the visible image.
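Equations (7) through (11) can be sketched as follows. The 3 × 3 mask values are an assumption (a normalized binomial window; the patent's equation (8), rendered as an image, gives the actual matrix):

```python
import numpy as np

W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]], dtype=float) / 16.0  # assumed 3x3 mask window

def regional_energy(coeff):
    """E_s(m, n): weighted sum of squared coefficients over the 3x3
    neighborhood centered at each point (equation (7))."""
    h, w = coeff.shape
    sq = np.pad(coeff.astype(float) ** 2, 1, mode="reflect")
    return sum(W[a, b] * sq[a:a + h, b:b + w]
               for a in range(3) for b in range(3))

def initial_low_fusion(ir_low, vis_low, eps=1e-12):
    """Weighted-average pre-fusion of the low-frequency sub-images,
    equations (9)-(11); eps guards against division by zero."""
    e_ir = regional_energy(ir_low)
    e_vis = regional_energy(vis_low)
    w1 = e_ir / (e_ir + e_vis + eps)
    w2 = 1.0 - w1
    return w1 * ir_low + w2 * vis_low
```

Because the mask sums to one, a region of constant coefficients c has energy c², so the weights w1 and w2 directly compare local infrared and visible activity.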
The membership degree of each coefficient is expressed with a Gaussian membership function, and the defuzzified coefficient membership degrees of the low-frequency sub-images are finally fused. First, the membership function μ_IR and the non-membership function ν_IR of the IR_L coefficients are defined as in equations (12) and (13), where x̄_IR denotes the mean of the infrared low-frequency coefficients IR_L and σ_IR their standard deviation; k1 and k2 are Gaussian-function adjustment parameters, set to 0.8 and 1.2 respectively in the experiments. The hesitation function π_IR is determined from the membership function μ_IR and the non-membership function ν_IR. The intuitionistic-fuzzy-set membership function is defuzzified by the difference-correction method to obtain the fuzzy-set membership functions of equations (14) and (15).

Similarly, the intuitionistic-fuzzy-set membership function, non-membership function, hesitation function and defuzzified membership function of VIS_L are defined according to equations (12) to (15).
In conventional infrared and visible image fusion algorithms, an excessive proportion of the infrared component distorts the fused scene. In the present algorithm, the image f replaces the infrared source component IR_L in the final fusion of the low-frequency sub-images, and VIS_L is used to adjust the infrared background in turn, which effectively alleviates the excessive infrared proportion. Finally, the fused low-frequency sub-image is defined as in equation (16): where the infrared membership degree is larger than the visible membership degree, the visible low-frequency coefficient VIS_L(x, y) is selected as the fusion coefficient to prevent over-saturation of background areas; the membership degree at a point where an infrared target lies is usually smaller than that of the corresponding visible point, so there the coefficient of the transition image f(x, y) is selected as the fusion coefficient, preserving the infrared target. This finally yields the low-frequency fusion coefficient F_L(x, y).
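The selection of equation (16) can be sketched as below. Since equations (12)-(15) are rendered as images in the patent, the membership construction here is an assumption: a Gaussian membership of width k1·σ, a non-membership built from a wider Gaussian of width k2·σ, hesitation π = 1 − μ − ν, and defuzzification μ* = μ + π/2.

```python
import numpy as np

def defuzzified_membership(coeff, k1=0.8, k2=1.2):
    """Assumed intuitionistic-fuzzy membership for low-frequency coefficients."""
    x = coeff.astype(float)
    mean, std = x.mean(), x.std()
    mu = np.exp(-((x - mean) ** 2) / (2.0 * (k1 * std) ** 2))        # membership
    nu = 1.0 - np.exp(-((x - mean) ** 2) / (2.0 * (k2 * std) ** 2))  # non-membership
    pi = 1.0 - mu - nu                                               # hesitation
    return mu + pi / 2.0   # assumed difference-correction defuzzification

def fuse_low(ir_low, vis_low, f):
    """Equation (16): pick VIS_L where the infrared membership dominates
    (protecting the background from over-saturation), and the transition
    image f elsewhere (preserving infrared targets)."""
    mu_ir = defuzzified_membership(ir_low)
    mu_vis = defuzzified_membership(vis_low)
    return np.where(mu_ir > mu_vis, vis_low, f)
```

The per-pixel comparison in `fuse_low` is the part stated explicitly in the text; only the membership formulas above are hypothetical.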
High frequency fusion rule
Unlike the low-frequency components, the high-frequency components typically represent texture and edge information. The NSST is highly flexible in direction representation, while the wavelet transform excels at handling point singularities; combining the two therefore extracts more detail information. The method applies wavelet decomposition again to each high-frequency component obtained by the NSST, yielding an approximation layer and detail layers. The Haar wavelet is selected as the wavelet basis used in the decomposition, with the decomposition level set to 1.

The approximation layers are fused with a weighted-average rule, as in equation (17):

C_F(i, j) = (C_IR(i, j) + C_VIS(i, j)) / 2  (17)

where C_IR(i, j) and C_VIS(i, j) are the approximation-layer coefficients at point (i, j) of the infrared and visible high-frequency components, and C_F is the fused approximation layer at each level. The detail layers are fused with the maximum-absolute-value rule, as in equation (18):

D_F(i, j) = D_IR(i, j) if |D_IR(i, j)| ≥ |D_VIS(i, j)|, otherwise D_VIS(i, j)  (18)

where D_IR(i, j) and D_VIS(i, j) are the detail-layer coefficients at point (i, j), and D_F is the fused detail layer at each level. The fused high-frequency sub-image F_H is reconstructed with the inverse wavelet transform. Finally, F_L and F_H are passed through the inverse NSST to obtain the fused image F.
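The Haar step can be sketched with a hand-rolled single-level 2-D Haar transform (even image dimensions assumed; in practice a library such as PyWavelets would be used). The equal-weight average in the approximation fusion is an assumption about the "weighted average" of equation (17):

```python
import numpy as np

def haar2(x):
    """Single-level 2-D Haar transform: approximation a and details (h, v, d)."""
    x = x.astype(float)
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
    h = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4.0
    v = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4.0
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4.0
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Inverse of haar2 (perfect reconstruction)."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 0::2] = a - h + v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def fuse_high(ir_high, vis_high):
    """Equations (17)-(18): average the approximation layers, take the
    larger-absolute-value detail coefficients, then invert the transform."""
    ia, ih, iv, idd = haar2(ir_high)
    va, vh, vv, vd = haar2(vis_high)
    fa = (ia + va) / 2.0
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return ihaar2(fa, pick(ih, vh), pick(iv, vv), pick(idd, vd))
```

The max-absolute rule favors whichever source has the stronger edge or texture response at each detail coefficient, which is the stated intent of equation (18).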
Analysis of results
To verify the practicality and effectiveness of the method, two groups of experiments were set up for comparison. In the first group, the method is compared with other multi-scale-transform methods and intuitionistic-fuzzy-set algorithms: NSST-based, NSCT-based and SWT-based methods (in these three, the low-frequency and high-frequency fusion rules are averaging and maximum selection, respectively), NSCT-Bala [25] (NSCT with the Bala fuzzy set) and NSCT-Gauss [26] (NSCT with the Gaussian fuzzy set), named M1, M2, M3, M4 and M5. In the second group, the method is compared with four mainstream advanced methods: SR [27] (sparse representation), FPDE [28] (fourth-order partial differential equation), DRTV [29] (total variation model) and VSM [30] (visual saliency map), named M6, M7, M8 and M9. The experimental parameters are set as follows:
(1) the computer is configured as follows: intel Core i5 CPU, 2.6GHz, 4GB memory, all experimental codes run on Matlab2017b platform;
(2) all infrared and visible light source images are from TNO Image Fusion database;
(3) in NSST, the pyramid filter selects "maxflat";
(4) the number of decomposition layers and the number of directions are 3 and {16,16,16}, respectively.
Subjective evaluation analysis
Fig. 3 shows, from top to bottom: the infrared source image, the visible source image, and the fusion results of M1, M2, M3, M4, M5 and the proposed algorithm.
As Fig. 3 shows, all methods can effectively fuse infrared and visible images, but the image quality varies. The background edges of the M4 result are clearly blurred; M3 loses edge information of the image (e.g., the "tree", "window" and "road" in the figure); the fused images of M1, M2 and M5 are noticeably darker. The proposed method yields the most salient target, higher contrast between target and background, and more detail information; taking Figs. 3(a) and (e) as examples, the result of the proposed method restores the corresponding positions of the source images most faithfully.
FIG. 4 shows five sets of fused infrared and visible images: (a) 2_Men in front of house; (b) sandpath; (c) Nato_camp_sequence; (d) Kaptein_1123; (e) Kaptein_1654. From top to bottom: the infrared source image, the visible source image, and the fusion results of M6, M7, M8, M9 and the proposed algorithm.
As shown in Fig. 4, the fusion results of the proposed method are compared with the mainstream advanced algorithms. Differences between the methods are easily seen from the saliency of the infrared targets. In the fusion results of M6, M7 and M9 in Fig. 4(a), the infrared target brightness is low; in the M7 results of Figs. 4(b) and (c), the infrared target is small and its contour distorted. In all M8 results the infrared target is highlighted, but the background brightness is low and texture information is severely lost, making the scene hard to recognize. M6 and M7 retain more detail than the other two advanced methods, but they still fall short of the proposed method. Across all fusion results, the proposed method highlights the infrared target, and the complete contour helps capture the target accurately against the background; the fused images have high contrast and rich texture detail, and accurately restore the background brightness of the visible source image, giving the background more depth and better agreement with the human visual system.
Evaluation analysis
The performance of each algorithm was analyzed with six objective evaluation indexes. For information entropy (E), average gradient (AG), standard deviation (SD), spatial frequency (SF) and mutual information (MI), larger values indicate higher fused-image quality; for cross entropy (CE), a smaller value indicates a smaller information difference between the fused image and the source images.
(1) Information entropy (E)
E measures the amount of information contained in an image, computed as equation (19):

E = − Σ_{i=0}^{L−1} p_i log2 p_i  (19)

where L is the total number of gray levels and p_i is the probability that gray value i appears in the image.
(2) Average Gradient (AG)
AG reflects the microscopic detail contrast and texture variation of the image, computed as equation (20):

AG = (1 / (MN)) Σ_m Σ_n sqrt( (ΔF_x² + ΔF_y²) / 2 )  (20)

where, in an M × N region centered on pixel (m, n), ΔF_x and ΔF_y are the gray-value differences of the fused image in the x and y directions.
(3) Standard Deviation (SD)
SD reflects the gray-level dispersion of the image pixels, computed as equation (21):

SD = sqrt( (1 / (MN)) Σ_i Σ_j (F(i, j) − μ)² )  (21)

where F(i, j) is the gray value at position (i, j) of the image, μ is the mean gray value of the whole image, and the image size is M × N.
(4) Spatial frequency (SF)

SF reflects the overall activity of the image in the spatial domain, computed as equation (24):

SF = sqrt(RF² + CF²)  (24)

where F(m, n) is the gray value at position (m, n), RF is the row frequency and CF is the column frequency of the fused image.
(5) Cross Entropy (CE)
CE reflects the difference in gray distribution between the fused image and a source image, computed as equation (25):

CE = Σ_{i=0}^{L−1} p_i log2 (p_i / q_i)  (25)

where L is the total number of gray levels, and p_i and q_i are the probabilities of gray value i in the source image and the fused image, respectively.
(6) Mutual Information (MI)
MI reflects the similarity between the source images and the fused image, computed as equation (27):

MI = MI_{A,F} + MI_{B,F}  (27)

where MI_{X,F} = Σ P_{X,F}(x, f) log2 ( P_{X,F}(x, f) / (P_X(x) P_F(f)) ), with P_X and P_F the gray distributions of source image X and the fused image F, respectively, and P_{X,F} their joint probability distribution; the sum of MI_{A,F} and MI_{B,F} gives the mutual information value.
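Several of these scalar indexes can be sketched directly in NumPy. The histogram-based entropy assumes 8-bit gray images, and AG uses simple forward differences, which is an assumption about the exact discretization:

```python
import numpy as np

def entropy(img):
    """Information entropy E, equation (19), for 8-bit gray images."""
    p = np.bincount(img.astype(np.uint8).ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Average gradient AG, equation (20), with forward differences."""
    f = img.astype(float)
    dx = np.diff(f, axis=1)[:-1, :]   # gray differences in x
    dy = np.diff(f, axis=0)[:, :-1]   # gray differences in y
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

def spatial_frequency(img):
    """Spatial frequency SF, equation (24): root of squared row
    frequency RF and column frequency CF."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

A constant image scores zero on all three indexes, matching the interpretation above that larger values indicate more information and activity.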
FIG. 5 is a graph comparing objective evaluation indexes obtained by the algorithm of the present invention with results obtained from M1, M2, M3, M4, and M5.
FIG. 6 is a graph comparing the objective evaluation indexes obtained by the algorithm of the present invention with those obtained by M6, M7, M8 and M9.
As shown in Figs. 5 and 6, the objective-evaluation comparison curves of the proposed method against the multi-scale-transform methods and the mainstream advanced algorithms are given, respectively. The proposed method has clear advantages on all objective evaluation indexes. Its fusion results retain the rich texture detail of the source images, which reduces the difference between the source and fused images, makes the image clearer, and improves fused-image quality. In conclusion, the fused images of the method contain salient infrared target information and richer detail information and local features, are rich in hierarchy, and agree better with the human visual system.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (6)
1. An image fusion method based on region energy and an intuitionistic fuzzy set is characterized in that:
decomposing the infrared and visible light images into a low-frequency sub-image and a series of high-frequency sub-images using the non-subsampled shearlet transform (NSST);
fusing the series of high-frequency sub-images by adopting a fusion algorithm based on wavelet transformation;
fusing low-frequency sub-images by adopting a rule of combining regional energy and an intuitionistic fuzzy set, firstly carrying out primary fusion on a source image by adopting the regional energy, then carrying out secondary fusion by adopting a fusion algorithm of the intuitionistic fuzzy set, and finally obtaining fused low-frequency sub-images;
the fused image is reconstructed using the NSST inverse transform.
2. The image fusion method based on regional energy and an intuitionistic fuzzy set according to claim 1, characterized in that: the infrared and visible light images are decomposed into low-frequency and high-frequency sub-images using the non-subsampled shearlet transform (NSST);
the affine system A_{AB}(Ψ) is:

A_{AB}(Ψ) = { ψ_{j,l,k}(x) = |det A|^{j/2} Ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² } (1)

where Ψ ∈ L²(R²) and Z is the set of integers; A and B are both 2 × 2 invertible matrices; the matrix A^j is an anisotropic dilation matrix and the matrix B^l is a shear matrix with |det B| = 1; j is the decomposition scale and l is the direction parameter, j, l ∈ Z; k is the shear parameter, k ∈ Z²;
when A = [a 0; 0 √a] and B = [1 s; 0 1], where a is the scale variable and s is the direction variable (typically a = 4 and s = 1), ψ_{a,s,t}(x) is a shearlet, and the resulting system is represented by formula (2);
the source image f is decomposed by the non-subsampled pyramid transform (NSP) into a low-pass image and a band-pass image; the band-pass image is decomposed with a shearing filter bank to realize direction localization and obtain the directional sub-band images; the NSP of each layer then iteratively decomposes the low-frequency component obtained by the previous layer, finally yielding one low-frequency sub-band image and a set of high-frequency directional sub-band images.
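The dilation and shear matrices of claim 2 under the typical parameters a = 4, s = 1 can be written out numerically; this is only an illustrative check that |det B| = 1, not part of the claimed method:

```python
import numpy as np

# Shearlet generator matrices for the typical parameters a = 4, s = 1
a, s = 4, 1
A = np.array([[a, 0.0],
              [0.0, np.sqrt(a)]])   # anisotropic dilation matrix A
B = np.array([[1.0, s],
              [0.0, 1.0]])          # shear matrix B, with |det B| = 1
det_B = abs(np.linalg.det(B))
```

The shear matrix is volume-preserving (|det B| = 1), so B^l changes only the direction of analysis while A^j controls the anisotropic scale.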
3. The image fusion method based on regional energy and an intuitionistic fuzzy set according to claim 1, characterized in that: in the fusion step based on the regional energy and the intuitionistic fuzzy set, the membership function μ_A of a fuzzy subset A of the universe U is defined as shown in formula (5);
as formula (5) shows, fuzzy set theory is built on the membership function, which therefore plays a very important role in fuzzy mathematics; an image of a given resolution can be viewed as a set of fuzzy pixels, as shown in formula (6):

X = ∪_{i,j} { μ_ij / x_ij } (6)

where x_ij is the gray value of pixel (i, j), μ_ij is the degree of membership of pixel (i, j), and μ_ij ∈ [0, 1]; μ_ij is computed from the membership function, and {μ_ij} denotes the fuzzy feature plane; different membership functions yield different μ_ij, so μ_ij can be adjusted to obtain different fuzzy feature planes.
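A hedged sketch of the fuzzy feature plane {μ_ij} of claim 3: each gray value x_ij is mapped to a membership degree μ_ij ∈ [0, 1]. The Gaussian form centered on the image mean is an illustrative choice; the patent's exact membership function of formula (5) is not reproduced here:

```python
import numpy as np

def gaussian_membership(img, k=1.0):
    """Map each gray value x_ij to mu_ij in [0, 1] via an assumed
    Gaussian membership function around the image mean (illustrative)."""
    img = np.asarray(img, dtype=np.float64)
    mean, std = img.mean(), img.std()
    if std == 0:
        return np.ones_like(img)     # flat image: full membership everywhere
    return np.exp(-((img - mean) ** 2) / (2.0 * (k * std) ** 2))
```

Varying the shape parameter k changes μ_ij pointwise, producing a different fuzzy feature plane from the same image, as the claim describes.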
4. The image fusion method based on regional energy and an intuitionistic fuzzy set according to claim 3, characterized in that:
fusing the low-frequency sub-images: IR_L and VIS_L are the low-frequency sub-images of the infrared image and the visible light image, respectively, which are first fused according to the region energy, as shown in formula (7):

E_s(m, n) = Σ_{(i,j)∈Ω(m,n)} W(i, j) · A_s(i, j)² (7)

in the formula, E_s(m, n) is the energy of the region centered at point (m, n), and s denotes the infrared or visible-light component; Ω(m, n) is a neighborhood window centered at point (m, n); A_s(i, j) is the coefficient at position (i, j) in the neighborhood; W(i, j) is the value of the mask window at position (i, j); a mask window whose central value differs little from its neighborhood values reflects the region contrast better; the window function W of the invention is set to 3 × 3 and can be expressed as formula (8);
after the region energies of the infrared and visible-light low-frequency sub-images are obtained, the low-frequency sub-images are fused by a weighted average, with the fusion weights given by formulas (9) and (10):

w1 = E_IR(m, n) / (E_IR(m, n) + E_VIS(m, n)) (9)
w2=1-w1 (10)
f=w1×IRL+w2×VISL (11)
in the formula, E_IR and E_VIS are the region energies of IR_L and VIS_L, respectively, and w1 and w2 are the fusion weights; the initial fusion image f contains the information of both source images, can be regarded as a transition image between them, and serves to balance the brightness of the infrared background and of the visible light image;
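A minimal NumPy sketch of the region-energy fusion described in claim 4 (the weighted-sum energy, the energy-ratio weights, and the initial fusion image). The 3 × 3 mask values are an assumption, since the patent's formula (8) is not reproduced here; function names are likewise hypothetical:

```python
import numpy as np

# Assumed smooth 3x3 mask; the patent's exact window of formula (8) is not given
W_MASK = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]) / 16.0

def region_energy(coeffs, window=W_MASK):
    """E_s(m, n): weighted sum of squared coefficients over the 3x3
    neighborhood Omega(m, n), in the style of formula (7)."""
    sq = np.asarray(coeffs, dtype=np.float64) ** 2
    padded = np.pad(sq, 1, mode="edge")
    energy = np.zeros_like(sq)
    for di in range(3):
        for dj in range(3):
            energy += window[di, dj] * padded[di:di + sq.shape[0],
                                              dj:dj + sq.shape[1]]
    return energy

def fuse_low_frequency(ir_l, vis_l, window=W_MASK):
    """Energy-ratio weights and the initial fusion image f, formulas (9)-(11)."""
    e_ir = region_energy(ir_l, window)
    e_vis = region_energy(vis_l, window)
    w1 = e_ir / (e_ir + e_vis + 1e-12)   # formula (9); eps guards flat regions
    w2 = 1.0 - w1                        # formula (10)
    return w1 * ir_l + w2 * vis_l        # formula (11)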
the membership degree of each coefficient is expressed with a Gaussian membership function, and the defuzzified coefficient membership degrees of the low-frequency sub-images are finally fused; first, the membership function and the non-membership function of the IR_L coefficients are defined as shown in formulas (12) and (13);
in these formulas, the mean and the standard deviation of the infrared low-frequency coefficients IR_L appear as parameters, and k1 and k2 are Gaussian-function adjustment parameters, set to 0.8 and 1.2, respectively; a hesitation function is determined from the membership function and the non-membership function; the intuitionistic fuzzy membership function is then defuzzified by a difference correction method to obtain the fuzzy-set membership functions, as shown in formulas (14) and (15);
similarly, the intuitionistic fuzzy membership function, non-membership function, hesitation function and defuzzified membership function of VIS_L are defined according to formulas (12) to (15);
finally, the fused low-frequency sub-image is defined as shown in formula (16);
when the infrared membership degree is larger than the visible-light membership degree, the visible-light low-frequency coefficient VIS_L(x, y) is selected as the fusion coefficient to prevent oversaturation of background areas; the membership degree at the point where the infrared target lies is usually smaller than the membership degree at the corresponding visible-light point, so there the coefficient of the initial fusion image f(x, y) is selected as the fusion coefficient, which retains the infrared target; the low-frequency fusion coefficient F_L(x, y) is finally obtained.
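The selection logic described for formula (16) can be sketched as a pointwise choice between the visible-light coefficient and the initial fusion image; the function name is hypothetical, and the membership planes are taken as given (the defuzzification of formulas (12)-(15) is omitted):

```python
import numpy as np

def select_low_coeffs(mu_ir, mu_vis, vis_l, f):
    """Sketch of the selection in formula (16): where the infrared membership
    exceeds the visible-light membership, keep the visible coefficient VIS_L
    (prevents background oversaturation); elsewhere keep the initial fusion
    image f (preserves the infrared target)."""
    return np.where(mu_ir > mu_vis, vis_l, f)
```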
5. The image fusion method based on regional energy and an intuitionistic fuzzy set according to claim 4, characterized in that:
fusing the high-frequency sub-images with the wavelet-transform-based fusion algorithm: the obtained high-frequency components are wavelet-decomposed again into an approximation layer and a detail layer; the Haar wavelet is selected as the wavelet basis used in the decomposition, and the number of decomposition levels is set to 1;
the approximation-layer fusion adopts a weighted-average rule, calculated as shown in formula (17):
in the formula, the two coefficients are those of the approximation layers of the two images at point (i, j), and the result is the fused image of each approximation layer at each level; the detail-layer fusion adopts the larger-absolute-value rule, as shown in formula (18):
in the formula, the two coefficients are those of the detail layers of the two images at point (i, j), and the result is the fused image of each detail layer at each level; the fused high-frequency sub-images are reconstructed by the inverse wavelet transform; finally, F_L and the fused high-frequency sub-images are subjected to the inverse NSST to obtain the fused image F.
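The high-frequency fusion of claim 5 can be sketched with a hand-rolled one-level 2-D Haar transform; equal weights in the approximation average are an assumption (the exact weights of formula (17) are not given), and the function names are illustrative:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: approximation LL and details (LH, HL, HH)."""
    x = np.asarray(x, dtype=np.float64)
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def fuse_high(h_a, h_b):
    """Average the approximation layers (formula (17), equal weights assumed),
    take the larger-absolute-value detail coefficients (formula (18)),
    then reconstruct with the inverse transform."""
    ll_a, (lh_a, hl_a, hh_a) = haar_dwt2(h_a)
    ll_b, (lh_b, hl_b, hh_b) = haar_dwt2(h_b)
    ll_f = (ll_a + ll_b) / 2.0
    pick = lambda p, q: np.where(np.abs(p) >= np.abs(q), p, q)
    return haar_idwt2(ll_f, (pick(lh_a, lh_b), pick(hl_a, hl_b), pick(hh_a, hh_b)))
```

Fusing a sub-image with itself reconstructs it exactly, which is a quick sanity check that the forward and inverse transforms match.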
6. An image fusion device based on regional energy and an intuitionistic fuzzy set, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the method steps of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110568871.3A CN113298147B (en) | 2021-05-25 | 2021-05-25 | Image fusion method and device based on regional energy and intuitionistic fuzzy set |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113298147A true CN113298147A (en) | 2021-08-24 |
CN113298147B CN113298147B (en) | 2022-10-25 |
Family
ID=77324588
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110568871.3A Active CN113298147B (en) | 2021-05-25 | 2021-05-25 | Image fusion method and device based on regional energy and intuitionistic fuzzy set |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113298147B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109035189A (en) * | 2018-07-17 | 2018-12-18 | 桂林电子科技大学 | Infrared and weakly visible light image fusion method based on Cauchy's ambiguity function |
CN111127380A (en) * | 2019-12-26 | 2020-05-08 | 云南大学 | Multi-focus image fusion method based on novel intuitionistic fuzzy similarity measurement technology |
WO2020129950A1 (en) * | 2018-12-21 | 2020-06-25 | Sharp Kabushiki Kaisha | Systems and methods for performing inter prediction in video coding |
Non-Patent Citations (6)
Title |
---|
Zhang Linfa et al., "Medical image fusion based on intuitionistic fuzzy sets and brightness enhancement", Journal of Computer Applications *
Li Xiaojun et al., "Research on a medical image edge fusion algorithm based on the non-subsampled shearlet transform", Journal of Optoelectronics·Laser *
Wang Zirui, "Application of intuitionistic fuzzy theory in image fusion", Inner Mongolia University *
Wang Huanqing, "Infrared and visible image fusion combining NSCT and neighborhood characteristics", Information & Communications *
Xing Xiaoxue et al., "Infrared and Visible Image Fusion Based on nonlinear enhancement and NSST decomposition", Research Square *
Chen Zhen et al., "A medical image fusion algorithm based on the non-subsampled shearlet transform", Journal of Shenyang University of Technology *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116012607A (en) * | 2022-01-27 | 2023-04-25 | 华南理工大学 | Image weak texture feature extraction method and device, equipment and storage medium |
CN116012607B (en) * | 2022-01-27 | 2023-09-01 | 华南理工大学 | Image weak texture feature extraction method and device, equipment and storage medium |
CN117252794A (en) * | 2023-09-25 | 2023-12-19 | 徐州医科大学 | Multi-wavelength transmission image fusion device in frequency domain |
CN117252794B (en) * | 2023-09-25 | 2024-04-16 | 徐州医科大学 | Multi-wavelength transmission image fusion device in frequency domain |
CN117876321A (en) * | 2024-01-10 | 2024-04-12 | 中国人民解放军91977部队 | Image quality evaluation method and device |
Also Published As
Publication number | Publication date |
---|---|
CN113298147B (en) | 2022-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113298147B (en) | Image fusion method and device based on regional energy and intuitionistic fuzzy set | |
Jin et al. | A survey of infrared and visual image fusion methods | |
CN108399611B (en) | Multi-focus image fusion method based on gradient regularization | |
CN112950518B (en) | Image fusion method based on potential low-rank representation nested rolling guide image filtering | |
CN107169944B (en) | Infrared and visible light image fusion method based on multi-scale contrast | |
CN103020933B (en) | A kind of multisource image anastomosing method based on bionic visual mechanism | |
Tan et al. | Remote sensing image fusion via boundary measured dual-channel PCNN in multi-scale morphological gradient domain | |
CN110163818A (en) | A kind of low illumination level video image enhancement for maritime affairs unmanned plane | |
CN108921809B (en) | Multispectral and panchromatic image fusion method based on spatial frequency under integral principle | |
CN113837974B (en) | NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm | |
Xiang et al. | Visual attention and background subtraction with adaptive weight for hyperspectral anomaly detection | |
CN112669249A (en) | Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning | |
Gao et al. | Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering | |
CN106897999A (en) | Apple image fusion method based on Scale invariant features transform | |
Xiao et al. | Image Fusion | |
Han et al. | Local sparse structure denoising for low-light-level image | |
CN114387195A (en) | Infrared image and visible light image fusion method based on non-global pre-enhancement | |
CN113592729A (en) | Infrared image enhancement method for electrical equipment based on NSCT domain | |
Xing et al. | Infrared and visible image fusion based on nonlinear enhancement and NSST decomposition | |
CN111815550A (en) | Infrared and visible light image fusion method based on gray level co-occurrence matrix | |
Nercessian et al. | Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion | |
CN112734683B (en) | Multi-scale SAR and infrared image fusion method based on target enhancement | |
Xiong et al. | Multitask Sparse Representation Model Inspired Network for Hyperspectral Image Denoising | |
Yang et al. | Infrared and visible image fusion based on QNSCT and Guided Filter | |
CN109584192B (en) | Target feature enhancement method and device based on multispectral fusion and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||