CN113298147A - Image fusion method and device based on regional energy and intuitionistic fuzzy set - Google Patents


Info

Publication number: CN113298147A (application number CN202110568871.3A); granted as CN113298147B
Authority: CN (China)
Prior art keywords: image, fusion, images, low, formula
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 邢笑雪, 刘城, 商微微, 罗聪, 闫明晗, 周健
Original and current assignee: Changchun University (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Changchun University; priority to CN202110568871.3A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image fusion method and device based on regional energy and an intuitionistic fuzzy set. The infrared and visible light images are first decomposed into a low-frequency sub-image and a series of high-frequency sub-images using the non-subsampled shearlet transform (NSST). The high-frequency sub-images are fused with a wavelet-transform-based fusion algorithm. The low-frequency sub-images are fused with a rule combining regional energy and an intuitionistic fuzzy set: the source images first undergo a primary fusion based on regional energy, then a secondary fusion with an intuitionistic-fuzzy-set fusion algorithm, yielding the fused low-frequency sub-image. The fused image is reconstructed using the inverse NSST. Experimental results, analysed qualitatively and quantitatively through subjective evaluation and objective evaluation indexes, show that the method achieves higher contrast, richer detail information and stronger local characteristics.

Description

Image fusion method and device based on regional energy and intuitionistic fuzzy set
Technical Field
The invention relates to the field of image processing, in particular to an image fusion method based on regional energy and an intuitionistic fuzzy set.
Background
The infrared and visible light image fusion technology aims at synthesizing a plurality of images into one comprehensive image, and can be applied to face recognition [1], target detection [2], image enhancement [3], the medical field [4], remote sensing [5], and the like. The infrared image sensor can capture heat information radiated from an object, but the amount of information in an infrared image is small. In contrast, visible light images can provide rich background and detail information. Therefore, effectively fusing the two kinds of images can provide more useful information for subsequent research work [6][7].
In recent years, multi-scale transformation has been widely used as a decomposition tool for image fusion [8]. Common decomposition methods include the wavelet transform (WT) [9], the Laplacian pyramid transform (LP) [10], the curvelet transform (CURV) [11], the contourlet transform (CT) [12], the non-subsampled contourlet transform (NSCT) [13], the shearlet transform (ST) [14], and other methods [15][16]. Compared with the above methods, the NSST [17] has the advantages of multi-directionality and translation invariance, so the NSST is selected as the preferred multi-scale transformation tool.
The essence of image fusion is to reasonably select valuable pixels in the multi-source images and fuse them into one image. Image fusion can be regarded as a many-to-one image-information mapping process with strong uncertainty, and many researchers use fuzzy theory to address this uncertainty. Tirupal proposed a multimodal medical image fusion method based on the Sugeno intuitionistic fuzzy set (SIFS) [18]; the image obtained by the algorithm can clearly distinguish the edges of soft tissue and blood vessels, which helps case diagnosis. Seng proposed a through-the-wall radar image fusion method based on probabilistic fuzzy logic [19]; experiments show that the fused image has higher contrast, which is beneficial to improving the target detection rate. The invention provides an NSST-domain image fusion method and system based on regional energy and an intuitionistic fuzzy set: the images are fused so that the contrast is improved and the infrared target information is enhanced. The fusion of the high-frequency components is realized with a wavelet-transform-based rule. The fusion result is finally reconstructed with the inverse NSST. Experiments show that images fused by the method have stable quality, an obvious target and higher contrast, with rich detail information and local characteristics, and are generally superior to existing fusion methods.
Disclosure of Invention
1. Objects of the invention
The invention provides an image fusion method and device based on regional energy and an intuitive fuzzy set, aiming at improving the information acquisition precision of low-frequency sub-images and high-frequency sub-images.
2. The technical scheme adopted by the invention
The invention provides an image fusion method based on regional energy and an intuitionistic fuzzy set, which comprises the following steps:
decomposing the infrared and visible light images into low-frequency and high-frequency sub-images by using the non-subsampled shearlet transform (NSST);
fusing the series of high-frequency sub-images by adopting a fusion algorithm based on wavelet transformation;
fusing low-frequency sub-images by adopting a rule of combining regional energy and an intuitionistic fuzzy set, firstly carrying out primary fusion on a source image by adopting the regional energy, then carrying out secondary fusion by adopting a fusion algorithm of the intuitionistic fuzzy set, and finally obtaining fused low-frequency sub-images;
the fused image is reconstructed using the NSST inverse transform.
Decomposing the infrared and visible light images into low-frequency and high-frequency sub-images by using the non-subsampled shearlet transform (NSST):

The affine system A_AB(ψ) is

    A_AB(ψ) = { ψ_{j,l,k}(x) = |det A|^{j/2} ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² }   (1)

where ψ ∈ L²(R²) and Z is the integer field; A and B are both 2 × 2 invertible matrices, A^j is an anisotropic dilation matrix, B^l is a shear matrix, and |det B| = 1; j is the decomposition scale, l is the direction parameter, and k ∈ Z² is the shear parameter.

When

    A = [ a  0 ; 0  √a ],   B = [ 1  s ; 0  1 ]

where a is a scale variable and s is a direction variable, typically a = 4 and s = 1, ψ_ast(x) is a shearlet, and the resulting system is represented by formula (2):

    { ψ_{j,l,k}(x) = 2^(3j/2) ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² }   (2)

A non-subsampled pyramid (NSP) decomposes the source image f into a low-pass image f_L^j and a band-pass image f_H^j; a shearlet filter bank then decomposes f_H^j to realize directional localization and obtain the directional sub-band images. Each NSP level then iteratively decomposes the low-frequency component obtained at the level above, finally yielding one low-frequency sub-band image and a group of high-frequency directional sub-band images.
Preferably, the fusion step is performed based on the regional energy and the intuitionistic fuzzy set. Let A be a fuzzy subset of U; its membership function μ_A can be defined as

    μ_A : U → [0, 1],  u ↦ μ_A(u)   (5)

As can be seen from formula (5), fuzzy set theory is built on the membership function, so the membership function plays a very important role in fuzzy mathematics. An image can be viewed as a set of fuzzy pixels, as shown in formula (6):

    X = ∪_i ∪_j μ_ij / x_ij   (6)

where x_ij represents the gray value of pixel (i, j), μ_ij represents the membership degree of pixel (i, j), and μ_ij ∈ [0, 1]. μ_ij is calculated from the membership function, and {μ_ij} denotes the fuzzy feature plane; different membership functions yield different μ_ij, so μ_ij can be adjusted to obtain different fuzzy feature planes.
Preferably, it is characterized in that:

The low-frequency sub-images are fused. IR_L and VIS_L are the low-frequency sub-images of the infrared image and the visible light image respectively, and are fused according to the regional energy, as shown in formula (7):

    E_s(m, n) = Σ_{(i,j) ∈ Ω(m,n)} W(i, j) · [A_s(i, j)]²   (7)

where E_s(m, n) is the energy of the region centred at point (m, n), s denotes the infrared or visible light component, Ω(m, n) is a neighbourhood window centred at point (m, n), A_s(i, j) is the coefficient at position (i, j) in the neighbourhood, and W(i, j) is the value of the mask window at position (i, j). A mask window matrix with a smaller difference between the central value and the neighbourhood values better reflects the regional contrast; the size of the window function W of the invention is set to 3 × 3, as given in formula (8).
After the regional energies of the infrared and visible light low-frequency sub-images are obtained, the low-frequency sub-images are fused by a weighted-average method, with the fusion weights given by formulas (9) and (10):

    w1 = E_IR(m, n) / (E_IR(m, n) + E_VIS(m, n))   (9)
    w2 = 1 − w1   (10)
    f = w1 × IR_L + w2 × VIS_L   (11)

where E_IR and E_VIS are the regional energies of IR_L and VIS_L respectively. The initial fusion image f contains information from both source images, can be regarded as a transition image between them, and serves to adjust the brightness of the infrared image background and the visible light image;
A Gaussian membership function is used to express the membership degree of the coefficients, and the defuzzified coefficient membership degrees of the low-frequency sub-images are finally fused. First, the membership function μ_IR and the non-membership function ν_IR of the IR_L coefficients are defined, as shown in formulas (12) and (13). In the formulas, m_IR represents the mean value of the infrared low-frequency coefficients IR_L and σ_IR represents the standard deviation; k1 and k2 are Gaussian-function adjustment parameters, set to 0.8 and 1.2 respectively. The membership function μ_IR and the non-membership function ν_IR determine the hesitation function π_IR = 1 − μ_IR − ν_IR. The intuitionistic-fuzzy-set membership function is defuzzified by the difference-correction method to obtain the fuzzy-set membership function μ̃_IR, as shown in formulas (14) and (15).
Similarly, the intuitionistic-fuzzy-set membership function μ_VIS, non-membership function ν_VIS, hesitation function π_VIS and defuzzified membership function μ̃_VIS of VIS_L are defined according to formulas (12) to (15). Finally, the fused low-frequency sub-image is defined as shown in formula (16):

    F_L(x, y) = VIS_L(x, y)  if μ̃_IR(x, y) > μ̃_VIS(x, y),  otherwise F_L(x, y) = f(x, y)   (16)

When the infrared membership degree is larger than the visible light membership degree, the visible light low-frequency coefficient VIS_L(x, y) is selected as the fusion coefficient to prevent over-saturation of the background areas. The membership degree at the point where the infrared target is located is usually smaller than at the corresponding visible light point, so the coefficient of the image f(x, y) is selected as the fusion coefficient there, retaining the infrared target; finally the low-frequency fusion coefficient F_L(x, y) is obtained;
Preferably, the wavelet-transform-based fusion algorithm fuses the high-frequency sub-images: wavelet decomposition is applied again to the obtained high-frequency components, yielding an approximation layer H_A and detail layers H_D^h, H_D^v, H_D^d. A Haar wavelet is selected as the wavelet basis used in the decomposition, and the decomposition level is set to 1.

The approximation-layer fusion adopts a weighted-average rule, calculated as shown in formula (17):

    F_A(i, j) = [H_A^IR(i, j) + H_A^VIS(i, j)] / 2   (17)

where H_A^IR(i, j) and H_A^VIS(i, j) are the approximation-layer coefficients at point (i, j), and F_A is the fused image of each approximation layer. The detail-layer fusion adopts the larger-absolute-value rule, as shown in formula (18):

    F_D(i, j) = H_D^IR(i, j)  if |H_D^IR(i, j)| ≥ |H_D^VIS(i, j)|,  otherwise F_D(i, j) = H_D^VIS(i, j)   (18)

where H_D^IR(i, j) and H_D^VIS(i, j) are the detail-layer coefficients at point (i, j), and F_D is the fused image of each detail layer. The fused high-frequency sub-image F_H is reconstructed with the inverse wavelet transform. Finally, F_L and F_H are passed through the inverse NSST to obtain the fused image F.
The invention further provides an image fusion device based on regional energy and an intuitionistic fuzzy set, comprising a memory storing a computer program and a processor; when the processor executes the computer program, the above method steps are realized.
3. Advantageous effects of the invention
The performance of the proposed NSST-based infrared and visible light fusion algorithm using regional energy and an intuitionistic fuzzy set was tested on six pairs of infrared and visible light images. Observing the subjective effect of the fused images, the proposed algorithm improves the contrast of the fused image and enhances the infrared target information, making the fused image better match human visual perception. From the analysis of objective evaluation indexes, the algorithm obtains better and more stable index values than the other algorithms. Qualitative and quantitative analysis shows that the algorithm is reliable and effective, and can be applied to target detection, medical diagnosis, target tracking and other fields under weak illumination.
Drawings
FIG. 1 is a NSST breakdown structure;
FIG. 2 is an algorithm fusion framework of the present invention;
FIG. 3 shows the fusion results of five sets of infrared and visible light images, (a) 2_Men in front of house; (b) bunker;
FIG. 4 shows the fusion results of five sets of infrared and visible light images, (a) 2_Men in front of house; (b) sandpath;
FIG. 5 is a graph comparing, by objective evaluation indexes, the results obtained by the present invention and by M1, M2, M3, M4 and M5 on five sets of fused infrared and visible light images, (a) 2_Men in front of house; (b) bunker; (c) sandpath; (d) Nato_camp_sequence; (e) Kaptein_1123;
FIG. 6 is a graph comparing the results obtained by the present invention with those obtained by M6, M7, M8 and M9.
Detailed Description
The technical solutions in the examples of the present invention are clearly and completely described below with reference to the drawings in the examples of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without inventive step, are within the scope of the present invention.
The present invention will be described in further detail with reference to the accompanying drawings.
Example 1
NSST principle
Compared with other multi-scale decomposition tools, the NSST can express image information more accurately, has translation invariance, and can overcome the pseudo-Gibbs effect in the image reconstruction process; many research results show that fused images obtained through NSST decomposition and reconstruction better suit the human visual system. Let n = 2; the affine system A_AB(ψ) is defined as follows:

    A_AB(ψ) = { ψ_{j,l,k}(x) = |det A|^{j/2} ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² }   (1)

where ψ ∈ L²(R²) and Z is the integer field. A and B are both 2 × 2 invertible matrices; A^j is an anisotropic dilation matrix, B^l is a shear matrix, and |det B| = 1. j is the decomposition scale, l is the direction parameter, and k ∈ Z² is the shear parameter.

When

    A = [ a  0 ; 0  √a ],   B = [ 1  s ; 0  1 ]

where a is a scale variable and s is a direction variable (usually a = 4 and s = 1), ψ_ast(x) is a shearlet, and the resulting system is represented by formula (2):

    { ψ_{j,l,k}(x) = 2^(3j/2) ψ(B^l A^j x − k) : j, l ∈ Z, k ∈ Z² }   (2)
FIG. 1 shows a two-level NSST decomposition structure. A non-subsampled pyramid (NSP) decomposes the source image f into a low-pass image f_L^j and a band-pass image f_H^j; a shearlet filter bank then decomposes f_H^j to realize directional localization and obtain the directional sub-band images. Each NSP level then iteratively decomposes the low-frequency component obtained at the level above, yielding one low-frequency sub-band image and a group of high-frequency directional sub-band images.
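The non-subsampled pyramid step described above can be sketched with plain NumPy. This is an illustrative undecimated residual pyramid (a 3×3 mean filter standing in for the NSP low-pass filter, and no shearlet direction filtering), not the patent's actual NSST implementation:

```python
import numpy as np

def box_blur(img):
    """3x3 mean filter with edge replication (a stand-in for the NSP low-pass)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def nsp_decompose(img, levels=2):
    """Undecimated pyramid: each level splits the current low-pass image into a
    new low-pass image and a band-pass residual; every sub-image keeps the
    source size, which is what gives the transform its shift invariance."""
    low, bands = img.astype(float), []
    for _ in range(levels):
        smoothed = box_blur(low)
        bands.append(low - smoothed)  # band-pass (high-frequency) sub-image
        low = smoothed                # low-pass sub-image fed to the next level
    return low, bands

def nsp_reconstruct(low, bands):
    """Inverse transform: add the band-pass residuals back onto the low-pass."""
    return low + sum(bands)

rng = np.random.default_rng(0)
src = rng.random((32, 32))
low, bands = nsp_decompose(src, levels=2)
```

Because each band is an exact residual, the sum telescopes and reconstruction is lossless, mirroring the perfect-reconstruction property the NSST relies on.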
Theory of fuzzy sets
In fuzzy set theory, the membership degree quantifies uncertain information with the interval [0, 1], where 0 means an element does not belong to the set at all and 1 means it belongs completely. Fuzzy set theory is good at expressing qualitative knowledge with unclear boundaries and plays an important role in eliminating fuzziness in images. Many studies show that image fusion methods based on fuzzy set theory are superior to other conventional algorithm models, and composite methods combining fuzzy sets with other representations can select the effective information in the source images more accurately.
In classical set theory, an element either belongs or does not belong to a set. Assume U is a set, u ∈ U and A ⊆ U; the characteristic function χ_A of A is defined as follows:

    χ_A(u) = 1,  u ∈ A   (3)
    χ_A(u) = 0,  u ∉ A   (4)
In fuzzy set theory, the definition of the membership function evolves from the characteristic function of classical set theory. Let A be a fuzzy subset of U; its membership function μ_A can be defined as

    μ_A : U → [0, 1],  u ↦ μ_A(u)   (5)

As shown by formula (5), fuzzy set theory is based on the membership function, which therefore plays a very important role in fuzzy mathematics. An image can be viewed as a set of fuzzy pixels, as shown in formula (6):

    X = ∪_i ∪_j μ_ij / x_ij   (6)

where x_ij represents the gray value of pixel (i, j), μ_ij represents the membership degree of pixel (i, j), and μ_ij ∈ [0, 1]. μ_ij is calculated from the membership function, and {μ_ij} denotes the fuzzy feature plane; different membership functions yield different μ_ij, so μ_ij can be adjusted to obtain different fuzzy feature planes.
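Formula (6) can be illustrated by computing a fuzzy feature plane for a small image. The Gaussian membership function below is one possible design choice for illustration, not necessarily the one used in the patent:

```python
import numpy as np

def fuzzy_feature_plane(gray, mean=None, sigma=None):
    """Map each gray value x_ij to a membership degree mu_ij in [0, 1]; here a
    Gaussian membership centred on the image mean (one design choice among many)."""
    g = gray.astype(float)
    mean = g.mean() if mean is None else mean
    sigma = g.std() if sigma is None else sigma
    return np.exp(-((g - mean) ** 2) / (2.0 * sigma ** 2 + 1e-12))

img = np.array([[0.0, 64.0], [128.0, 255.0]])
mu = fuzzy_feature_plane(img)  # the fuzzy feature plane {mu_ij}
```

Changing the membership function (or its mean/sigma parameters) produces a different feature plane over the same gray values, which is exactly the adjustment the text describes.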
The invention provides two rules for fusing low-frequency sub-images and high-frequency sub-images respectively.
As shown in fig. 2, the overall framework of the algorithm of the present invention uses NSST and NSST inverse transformation as decomposition and reconstruction tools, respectively, and the fusion process is mainly divided into three steps: firstly, decomposing an image into sub-images of different levels; secondly, fusing the images of the corresponding levels according to a specific fusion rule; and finally, performing inverse transformation on each fused image, and reconstructing to obtain a fused image.
Low frequency fusion rule
The low-frequency components of the visible image contain detailed and spectral information, while the energy information contained in the low-frequency components of the infrared image reflects the general outline of the image content; salient objects in the infrared image are usually located in areas of greater energy [24]. In order to enhance target information and improve image contrast, the invention provides a fusion rule based on regional energy and an intuitionistic fuzzy set.
IR_L and VIS_L are the low-frequency sub-images of the infrared image and the visible light image respectively, and are fused according to the regional energy, as shown in formula (7):

    E_s(m, n) = Σ_{(i,j) ∈ Ω(m,n)} W(i, j) · [A_s(i, j)]²   (7)

where E_s(m, n) is the energy of the region centred at point (m, n), s denotes the infrared or visible light component, Ω(m, n) is a neighbourhood window centred at point (m, n), A_s(i, j) is the coefficient at position (i, j) in the neighbourhood, and W(i, j) is the value of the mask window at position (i, j). A mask window matrix with a smaller difference between the central value and the neighbourhood values better reflects the regional contrast; the size of the window function W of the invention is set to 3 × 3, as given in formula (8).
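Formula (7) can be sketched as a weighted sum of squared coefficients over a sliding 3×3 window. The exact mask matrix of formula (8) is not reproduced in the text, so a normalized Gaussian-like window is assumed here:

```python
import numpy as np

# Hypothetical 3x3 mask window: formula (8)'s exact matrix is not reproduced
# in the text, so a normalized Gaussian-like window is assumed (sums to 1).
W = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]]) / 16.0

def region_energy(coeffs, w=W):
    """Formula (7): E(m, n) = sum over the 3x3 neighbourhood Omega(m, n) of
    w(i, j) * A(i, j)^2, computed for every pixel at once via shifted views."""
    a2 = np.pad(coeffs.astype(float) ** 2, 1, mode="edge")
    h, wd = coeffs.shape
    return sum(w[i, j] * a2[i:i + h, j:j + wd] for i in range(3) for j in range(3))
```

Because W is normalized, a constant coefficient field yields a constant energy map, and the energy peaks wherever a large-magnitude coefficient sits.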
After the regional energies of the infrared and visible light low-frequency sub-images are obtained, the low-frequency sub-images are fused by a weighted-average method, with the fusion weights given by formulas (9) and (10):

    w1 = E_IR(m, n) / (E_IR(m, n) + E_VIS(m, n))   (9)
    w2 = 1 − w1   (10)
    f = w1 × IR_L + w2 × VIS_L   (11)

where E_IR and E_VIS are the regional energies of IR_L and VIS_L respectively. The initial fusion image f contains information from both source images, can be regarded as a transition image between them, and serves to adjust the brightness of the infrared image background and the visible light image.
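Formulas (9) to (11) amount to a per-pixel energy-weighted average. A minimal sketch, with toy energy maps standing in for the output of formula (7):

```python
import numpy as np

def primary_fusion(ir_low, vis_low, e_ir, e_vis):
    """Formulas (9)-(11): w1 = E_IR / (E_IR + E_VIS), w2 = 1 - w1,
    f = w1 * IR_L + w2 * VIS_L (a small epsilon guards zero-energy regions)."""
    w1 = e_ir / (e_ir + e_vis + 1e-12)
    return w1 * ir_low + (1.0 - w1) * vis_low

ir = np.full((2, 2), 10.0)
vis = np.full((2, 2), 2.0)
e_ir = np.full((2, 2), 3.0)   # toy energy maps standing in for formula (7)
e_vis = np.full((2, 2), 1.0)
f = primary_fusion(ir, vis, e_ir, e_vis)
```

Here w1 = 3/(3+1) = 0.75 everywhere, so the transition image f leans toward the infrared coefficients where the infrared regional energy dominates, and toward the visible ones otherwise.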
A Gaussian membership function is used to express the membership degree of the coefficients, and the defuzzified coefficient membership degrees of the low-frequency sub-images are finally fused. First, the membership function μ_IR and the non-membership function ν_IR of the IR_L coefficients are defined, as shown in formulas (12) and (13). In the formulas, m_IR represents the mean value of the infrared low-frequency coefficients IR_L and σ_IR represents the standard deviation; k1 and k2 are Gaussian-function adjustment parameters, set to 0.8 and 1.2 respectively in the experiments of the invention. The membership function μ_IR and the non-membership function ν_IR determine the hesitation function π_IR = 1 − μ_IR − ν_IR. The intuitionistic-fuzzy-set membership function is defuzzified by the difference-correction method to obtain the fuzzy-set membership function μ̃_IR, as shown in formulas (14) and (15).
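A possible reading of formulas (12) to (15), sketched under stated assumptions: membership and non-membership are taken as Gaussians of widths k1·σ and k2·σ (with k2 > k1 this keeps μ + ν ≤ 1), hesitation is the standard intuitionistic π = 1 − μ − ν, and the defuzzification step (the patent's difference-correction method is not spelled out in the text) is approximated by redistributing half the hesitation:

```python
import numpy as np

K1, K2 = 0.8, 1.2  # Gaussian adjustment parameters from the patent

def ifs_membership(coeffs, k1=K1, k2=K2):
    """Illustrative intuitionistic fuzzy description of sub-band coefficients.
    mu and nu are Gaussians of different widths (an assumption; k2 > k1
    guarantees mu + nu <= 1), pi = 1 - mu - nu is the hesitation degree, and
    the returned defuzzified membership adds half of pi back onto mu."""
    x = coeffs.astype(float)
    m, s = x.mean(), x.std() + 1e-12
    d2 = (x - m) ** 2
    mu = np.exp(-d2 / (2.0 * (k1 * s) ** 2))       # membership
    nu = 1.0 - np.exp(-d2 / (2.0 * (k2 * s) ** 2))  # non-membership
    pi = 1.0 - mu - nu                              # hesitation
    return mu + 0.5 * pi  # defuzzified membership, stays in [0, 1]

c = np.array([[0.0, 1.0], [2.0, 3.0]])
md = ifs_membership(c)
```

Coefficients near the sub-band mean get high membership; the defuzzified value equals 0.5·(1 + μ − ν), which is monotone in both μ and ν.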
Similarly, the intuitionistic-fuzzy-set membership function μ_VIS, non-membership function ν_VIS, hesitation function π_VIS and defuzzified membership function μ̃_VIS of VIS_L are defined according to formulas (12) to (15).
In conventional infrared and visible light image fusion algorithms, an overly large infrared component in the fused image distorts the scene. In the algorithm of the invention, the image f replaces the infrared source component IR_L in the final fusion of the low-frequency sub-images, and VIS_L is used to adjust the background of the infrared image in the opposite direction, effectively solving the problem of an overly large infrared proportion. Finally, the fused low-frequency sub-image is defined as shown in formula (16):

    F_L(x, y) = VIS_L(x, y)  if μ̃_IR(x, y) > μ̃_VIS(x, y),  otherwise F_L(x, y) = f(x, y)   (16)

When the infrared membership degree is larger than the visible light membership degree, the visible light low-frequency coefficient VIS_L(x, y) is selected as the fusion coefficient to prevent over-saturation of the background areas. The membership degree at the point where the infrared target is located is usually smaller than at the corresponding visible light point, so the coefficient of the image f(x, y) is selected as the fusion coefficient there, retaining the infrared target; finally the low-frequency fusion coefficient F_L(x, y) is obtained.
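The selection rule of formula (16) is a per-pixel choice between the visible coefficient and the transition image f; a minimal sketch:

```python
import numpy as np

def fuse_low(f_primary, vis_low, mu_ir, mu_vis):
    """Formula (16): where the infrared membership exceeds the visible one
    (typically flat background), take the visible coefficient to avoid
    over-saturation; otherwise (typically the infrared target) keep the
    primary fused transition image f."""
    return np.where(mu_ir > mu_vis, vis_low, f_primary)

f = np.array([[5.0, 5.0]])
vis = np.array([[1.0, 1.0]])
out = fuse_low(f, vis,
               mu_ir=np.array([[0.9, 0.2]]),
               mu_vis=np.array([[0.4, 0.6]]))
```

The first pixel (0.9 > 0.4) takes the visible coefficient; the second (0.2 < 0.6) keeps f, which is where the infrared target survives.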
High frequency fusion rule
Unlike the low-frequency components, the high-frequency components typically represent texture and edge information. The NSST has strong flexibility in direction representation, while the wavelet transform excels at handling point singularities, so combining the wavelet transform with the NSST can extract more detailed information. The method of the invention applies a further wavelet decomposition to the high-frequency components obtained by the NSST, yielding an approximation layer H_A and detail layers H_D^h, H_D^v, H_D^d. A Haar wavelet is selected as the wavelet basis used in the decomposition, with the decomposition level set to 1.
The approximation-layer fusion adopts a weighted-average rule, calculated as shown in formula (17):

    F_A(i, j) = [H_A^IR(i, j) + H_A^VIS(i, j)] / 2   (17)

where H_A^IR(i, j) and H_A^VIS(i, j) are the approximation-layer coefficients at point (i, j), and F_A is the fused image of each approximation layer. The detail-layer fusion adopts the larger-absolute-value rule, as shown in formula (18):
    F_D(i, j) = H_D^IR(i, j)  if |H_D^IR(i, j)| ≥ |H_D^VIS(i, j)|,  otherwise F_D(i, j) = H_D^VIS(i, j)   (18)

where H_D^IR(i, j) and H_D^VIS(i, j) are the detail-layer coefficients at point (i, j), and F_D is the fused image of each detail layer. The fused high-frequency sub-image F_H is reconstructed with the inverse wavelet transform. Finally, F_L and F_H are passed through the inverse NSST to obtain the fused image F.
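The high-frequency rule of formulas (17) and (18) can be sketched with a hand-rolled one-level 2D Haar split: average the approximation coefficients, keep the larger-magnitude detail coefficients, then invert. Function and layer names here are illustrative, not the patent's:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar split into approximation (a) and details (h, v, d)."""
    A = img[0::2, 0::2]; B = img[0::2, 1::2]
    C = img[1::2, 0::2]; D = img[1::2, 1::2]
    return ((A + B + C + D) / 4, (A - B + C - D) / 4,
            (A + B - C - D) / 4, (A - B - C + D) / 4)

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d; out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d; out[1::2, 1::2] = a - h - v + d
    return out

def fuse_high(h_ir, h_vis):
    """Formulas (17)-(18): average the approximation layers, take the
    larger-magnitude coefficient in each detail layer, invert the wavelet."""
    (a1, *d1), (a2, *d2) = haar2(h_ir), haar2(h_vis)
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(d1, d2)]
    return ihaar2((a1 + a2) / 2, *details)

rng = np.random.default_rng(1)
x = rng.random((8, 8))
```

In a full implementation this would run on every NSST directional sub-band pair; fusing a sub-band with itself must return it unchanged, which is a convenient sanity check.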
Analysis of results
In order to verify the practicability and effectiveness of the method, two groups of comparison experiments were set up. In the first group, the method of the invention is compared with other multi-scale-transform methods and intuitionistic-fuzzy-set algorithms: NSST-based, NSCT-based and SWT-based methods (in these three algorithms, the low-frequency and high-frequency fusion coefficients are taken as the average and the maximum respectively), NSCT-Bala [25] (NSCT and Bala fuzzy sets), and NSCT-Gauss [26] (NSCT and Gaussian fuzzy sets), named M1 to M5. In the second group, the method of the invention is compared with four mainstream advanced methods: SR [27] (sparse regularization), FPDE [28] (fourth-order partial differential equation), DRTV [29] (total-variation model) and VSM [30] (visual saliency map), named M6 to M9. The experimental parameter settings are as follows:
(1) the computer is configured as follows: intel Core i5 CPU, 2.6GHz, 4GB memory, all experimental codes run on Matlab2017b platform;
(2) all infrared and visible light source images are from TNO Image Fusion database;
(3) in NSST, the pyramid filter selects "maxflat";
(4) the number of decomposition layers and the number of directions are 3 and {16,16,16}, respectively.
Subjective evaluation analysis
Fig. 3 shows, from top to bottom: the infrared source image, the visible light source image, and the fusion results of M1, M2, M3, M4, M5 and the algorithm of the invention.
The images obtained by the proposed algorithm and the five multi-scale-transform-based methods are shown in FIG. 3. The experimental results show that all the methods can effectively fuse infrared and visible light images, but the image quality is uneven. The background edges of the M4 fusion results are obviously blurred; M3 loses edge information of the image (e.g., the "tree", "window" and "road" in the figure); the fused images of M1, M2 and M5 are significantly darker. The method of the invention gives the most salient target, higher contrast between target and background, and more detail information; taking the two images of FIG. 3(a) and (e) as examples, the results obtained by the method of the invention restore the corresponding positions in the source images most faithfully.
FIG. 4 shows five sets of fused infrared and visible light images, (a) 2_Men in front of house; (b) sandpath; (c) Nato_camp_sequence; (d) Kaptein_1123; (e) Kaptein_1654. From top to bottom are: the infrared source image, the visible light source image, and the fusion results of M6, M7, M8, M9 and the algorithm of the invention.
As shown in FIG. 4, the fusion results of the method of the invention and the mainstream advanced algorithms are compared. From the saliency of the infrared targets, the differences between the methods are readily seen. In the fused FIG. 4(a) of M6, M7 and M9, the infrared target brightness is not high; in the fused FIG. 4(b) and (c) of M7, the infrared target is small and its contour is distorted. In all images fused by M8, although the infrared target is highlighted, the background brightness is low and texture information is seriously lost, making the scene difficult to recognize. M6 and M7 retain more detail than the other two advanced methods, but in contrast they are still inferior to the proposed method. Considering all fusion results, the method of the invention can highlight the infrared target, and the complete contour helps to accurately capture the target against the background; the images fused by the algorithm have high contrast and abundant texture details, and accurately restore the background brightness of the visible light source image, so that the background has more depth and better conforms to the human visual system.
Evaluation analysis
The performance of each algorithm was analyzed with six objective evaluation indices. For information entropy (E), average gradient (AG), standard deviation (SD), spatial frequency (SF), and mutual information (MI), larger values indicate higher quality of the fused image; for cross entropy (CE), a smaller value indicates a smaller information difference between the fused image and the source images.
(1) Information entropy (E)
E measures the amount of information contained in an image and is calculated by formula (19):

E = -\sum_{i=0}^{L-1} p_i \log_2 p_i    (19)

where L is the total number of gray levels and p_i is the probability that gray value i appears in the image.
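As an illustration, formula (19) can be evaluated directly from an image's gray-level histogram. The NumPy sketch below is ours, not code from the patent:

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy E of a gray-scale image, per formula (19)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()          # p_i: probability of gray value i
    p = p[p > 0]                   # terms with p_i = 0 contribute nothing
    return float(-np.sum(p * np.log2(p)))

# A uniform histogram over 256 gray levels attains the maximum entropy of 8 bits.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)
print(entropy(img))                # 8.0
```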
(2) Average Gradient (AG)
AG reflects the microscopic detail contrast and texture variation of the image and is calculated by formula (20):

AG = \frac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \sqrt{\frac{\Delta F_x^2 + \Delta F_y^2}{2}}    (20)

where, for an image of size M × N, ΔF_x and ΔF_y are the differences of the gray values of the fused image in the x and y directions at pixel (m, n).
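A minimal NumPy sketch of formula (20), using forward differences cropped to a common shape (the helper name is ours):

```python
import numpy as np

def average_gradient(img):
    """Average gradient AG, per formula (20)."""
    f = img.astype(float)
    dx = np.diff(f, axis=1)[:-1, :]    # ΔF_x in the x (column) direction
    dy = np.diff(f, axis=0)[:, :-1]    # ΔF_y in the y (row) direction
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))

# A horizontal ramp has unit steps in x and no variation in y,
# so its AG equals sqrt(1/2).
ramp = np.tile(np.arange(4.0), (4, 1))
```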
(3) Standard Deviation (SD)
SD reflects the dispersion of the gray values of the image pixels and is calculated by formula (21):

SD = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ F(i,j) - \mu \right]^2}    (21)

where F(i, j) is the gray value at position (i, j) of the image, μ is the average gray value of the entire image, and the image size is M × N.
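Formula (21) is the root mean square deviation from the image mean; a short sketch (helper name is ours):

```python
import numpy as np

def std_deviation(img):
    """Standard deviation SD, per formula (21)."""
    f = img.astype(float)
    mu = f.mean()                      # μ: average gray value of the image
    return float(np.sqrt(np.mean((f - mu) ** 2)))

# An image alternating between 0 and 2 has mean 1 and SD 1.
checker = np.array([[0.0, 2.0], [0.0, 2.0]])
```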
(4) Spatial Frequency (SF)
SF reflects the overall activity level of the image in the spatial domain and is calculated by formulas (22) to (24):

RF = \sqrt{\frac{1}{MN} \sum_{m=1}^{M} \sum_{n=2}^{N} \left[ F(m,n) - F(m,n-1) \right]^2}    (22)

CF = \sqrt{\frac{1}{MN} \sum_{m=2}^{M} \sum_{n=1}^{N} \left[ F(m,n) - F(m-1,n) \right]^2}    (23)

SF = \sqrt{RF^2 + CF^2}    (24)

where F(m, n) is the gray value at position (m, n) of the image, RF is the row frequency, and CF is the column frequency.
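A sketch of formulas (22)-(24); here the 1/(MN) factor is approximated by a mean over the available difference terms, and the helper name is ours:

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency SF, per formulas (22)-(24)."""
    f = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))  # row frequency RF
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))  # column frequency CF
    return float(np.sqrt(rf ** 2 + cf ** 2))

# For a horizontal unit ramp, RF = 1 and CF = 0, so SF = 1.
ramp = np.tile(np.arange(4.0), (4, 1))
```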
(5) Cross Entropy (CE)
CE reflects the degree of difference between the gray distributions of the fused image and the source image and is calculated by formula (25):

CE = \sum_{i=0}^{L-1} p_i \log_2 \frac{p_i}{q_i}    (25)

where L is the total number of gray levels, and p_i and q_i are the probabilities that gray value i appears in the source image and the fused image, respectively.
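A histogram-based sketch of formula (25) (function name is ours); it assumes every gray value occurring in the source also occurs in the fused image, since a q_i of zero with p_i > 0 would make CE infinite:

```python
import numpy as np

def cross_entropy(src, fused, levels=256):
    """Cross entropy CE between a source and a fused image, per formula (25)."""
    p = np.bincount(src.ravel(), minlength=levels) / src.size
    q = np.bincount(fused.ravel(), minlength=levels) / fused.size
    mask = p > 0                       # terms with p_i = 0 contribute 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Identical histograms give CE = 0: no distributional difference.
src = np.arange(16, dtype=np.uint8).reshape(4, 4)
```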
(6) Mutual Information (MI)
MI reflects the similarity between a source image and the fused image and is calculated by formulas (26) and (27):

MI_{X,F} = \sum_{x,f} P_{X,F}(x,f) \log_2 \frac{P_{X,F}(x,f)}{P_X(x) P_F(f)}    (26)

MI = MI_{A,F} + MI_{B,F}    (27)

where P_X and P_F are the gray distributions of a source image X ∈ {A, B} and the fused image F, respectively, and P_{X,F} is their joint probability distribution; the total MI is the sum of MI_{A,F} and MI_{B,F}.
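Formula (26) can be sketched from a joint gray-level histogram (helper name is ours):

```python
import numpy as np

def mutual_information(x, f, levels=256):
    """Mutual information MI_{X,F}, per formula (26)."""
    joint = np.zeros((levels, levels))
    np.add.at(joint, (x.ravel(), f.ravel()), 1)   # joint histogram
    pxf = joint / joint.sum()                     # P_{X,F}
    px = pxf.sum(axis=1, keepdims=True)           # P_X
    pf = pxf.sum(axis=0, keepdims=True)           # P_F
    nz = pxf > 0                                  # only cells with P_{X,F} > 0
    return float(np.sum(pxf[nz] * np.log2(pxf[nz] / (px * pf)[nz])))

# Per formula (27), the index for two sources A and B is
# MI = mutual_information(A, F) + mutual_information(B, F).
```

The mutual information of an image with itself reduces to its entropy, which makes a convenient sanity check.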
FIG. 5 is a graph comparing objective evaluation indexes obtained by the algorithm of the present invention with results obtained from M1, M2, M3, M4, and M5.
FIG. 6 is a graph comparing the objective evaluation indexes obtained by the algorithm of the present invention with those obtained by M6, M7, M8 and M9.
FIGS. 5 and 6 show the objective evaluation index comparison curves of the method of the present invention against the multi-scale-transform-based methods and the mainstream advanced algorithms, respectively. The method has clear advantages on all objective evaluation indices: its fusion results retain the abundant texture detail information of the source images, which reduces the difference between the source images and the fused image, makes the image clearer, and improves the quality of the fused image. In conclusion, the fused image produced by the method contains salient infrared target information, richer detail information, and local features, is rich in hierarchy, and is better adapted to the human visual system.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. An image fusion method based on region energy and an intuitionistic fuzzy set is characterized in that:
decomposing the infrared and visible light images into low-frequency and high-frequency sub-images by using the non-subsampled shearlet transform (NSST);
fusing the series of high-frequency sub-images by adopting a fusion algorithm based on wavelet transformation;
fusing low-frequency sub-images by adopting a rule of combining regional energy and an intuitionistic fuzzy set, firstly carrying out primary fusion on a source image by adopting the regional energy, then carrying out secondary fusion by adopting a fusion algorithm of the intuitionistic fuzzy set, and finally obtaining fused low-frequency sub-images;
the fused image is reconstructed using the NSST inverse transform.
2. The image fusion method based on regional energy and intuitionistic fuzzy sets according to claim 1, characterized in that the infrared and visible light images are decomposed into low-frequency and high-frequency sub-images by the non-subsampled shearlet transform (NSST);
the affine system A_{AB}(ψ) is:

A_{AB}(\psi) = \left\{ \psi_{j,l,k}(x) = \left| \det A \right|^{j/2} \psi\left( B^l A^j x - k \right) : j, l \in Z,\ k \in Z^2 \right\}    (1)

where ψ ∈ L^2(Z^2) and Z is the set of integers; A and B are both 2 × 2 invertible matrices, A^j being an anisotropic dilation matrix and B^l a shear matrix, with |det B| = 1; j is the decomposition scale and l the direction parameter, j, l ∈ Z; k is the shear parameter, k ∈ Z^2;
when

A = \begin{bmatrix} a & 0 \\ 0 & \sqrt{a} \end{bmatrix}, \quad B = \begin{bmatrix} 1 & s \\ 0 & 1 \end{bmatrix}

where a is the scale variable and s is the direction variable, typically a = 4 and s = 1, ψ_{j,l,k}(x) is a shear wave, and the resulting shear wave system is represented by formula (2):

\psi_{j,l,k}(x) = a^{3j/4}\, \psi\left( B^l A^j x - k \right), \quad j, l \in Z,\ k \in Z^2    (2)
the source image f is decomposed by the non-subsampled pyramid (NSP) into a low-pass image and a band-pass image; the band-pass image is then decomposed with a shear filter bank to realize direction localization and obtain the directional sub-band images; at each level, the NSP iteratively decomposes the low-frequency component obtained at the previous level, thereby obtaining one low-frequency sub-band image and a set of high-frequency directional sub-band images.
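For a = 4 and s = 1, the dilation and shear matrices of this claim can be checked numerically. The small NumPy sketch below (ours) only verifies the determinant properties the claim states:

```python
import numpy as np

a, s = 4, 1                               # typical scale and direction values
A = np.array([[a, 0], [0, np.sqrt(a)]])   # anisotropic dilation matrix
B = np.array([[1, s], [0, 1]])            # shear matrix

det_A = np.linalg.det(A)                  # a * sqrt(a) = 8 for a = 4
det_B = np.linalg.det(B)                  # |det B| = 1: shearing preserves area

# |det A|^(j/2) is the normalization factor of the system in formula (1).
```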
3. The image fusion method based on regional energy and intuitionistic fuzzy sets according to claim 1, characterized in that the fusion step based on regional energy and intuitionistic fuzzy sets proceeds as follows: let U be the universe of discourse; the membership function μ_A of a fuzzy subset A of U is defined by formula (5):

A = \left\{ \left( x, \mu_A(x) \right) \mid x \in U \right\}, \quad \mu_A : U \rightarrow [0, 1]    (5)

As can be seen from formula (5), fuzzy set theory is built on the membership function, which therefore plays a very important role in fuzzy mathematics; an image can be viewed as a set of fuzzy pixels, as shown in formula (6):

X = \bigcup_{i=1}^{M} \bigcup_{j=1}^{N} \left\{ \mu_{ij} / x_{ij} \right\}    (6)

where x_{ij} is the gray value of pixel (i, j), μ_{ij} is the degree of membership of pixel (i, j), and μ_{ij} ∈ [0, 1]; μ_{ij} is calculated from the membership function, and {μ_{ij}} constitutes the fuzzy feature plane; different membership functions yield different μ_{ij}, so μ_{ij} can be adjusted to obtain different fuzzy feature planes.
4. The image fusion method based on regional energy and intuitionistic fuzzy sets according to claim 3, characterized in that:
the low-frequency sub-images are fused as follows: IR_L and VIS_L are the low-frequency sub-images of the infrared image and the visible light image, respectively, and are fused according to regional energy, as shown in formula (7):

E_s(m,n) = \sum_{(i,j) \in \Omega(m,n)} W(i,j) \left[ A_s(i,j) \right]^2    (7)

where E_s(m, n) is the energy of the region centered at point (m, n) and s denotes the infrared or visible light component; Ω(m, n) is a neighborhood window centered at point (m, n); A_s(i, j) is the coefficient at position (i, j) in the neighborhood; and W(i, j) is the value of the mask window at position (i, j); a mask window whose central value differs little from its neighborhood values better reflects the regional contrast; the window function W of the invention is set to 3 × 3 and can be expressed as formula (8):

[formula (8): the 3 × 3 mask window matrix W; the original formula image is not reproduced]
after the regional energies of the infrared and visible light low-frequency sub-images are obtained, the low-frequency sub-images are fused by a weighted average method, with the fusion weights given by formulas (9) and (10):

w_1 = \frac{E_{IR}(m,n)}{E_{IR}(m,n) + E_{VIS}(m,n)}    (9)

w_2 = 1 - w_1    (10)

f = w_1 \times IR_L + w_2 \times VIS_L    (11)

where E_{IR}(m, n) and E_{VIS}(m, n) are the regional energies of IR_L and VIS_L, respectively, and w_1 and w_2 are the fusion weights; the initial fusion image f contains information from both source images, can be regarded as a transition image between them, and serves to balance the background brightness of the infrared image and the visible light image;
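The pre-fusion of formulas (7)-(11) might be sketched as follows. Because the 3 × 3 mask values of formula (8) are not reproduced in this text, a hypothetical normalized mask is assumed, and all function names are ours:

```python
import numpy as np

# Hypothetical 3x3 mask window W (the actual formula (8) matrix is not
# reproduced here); normalized so that the weights sum to 1.
W = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def region_energy(coeffs):
    """E_s(m, n): mask-weighted sum of squared coefficients over the 3x3
    neighborhood Omega(m, n), per formula (7); borders are zero-padded."""
    sq = coeffs.astype(float) ** 2
    padded = np.pad(sq, 1)
    energy = np.zeros_like(sq)
    for di in range(3):
        for dj in range(3):
            energy += W[di, dj] * padded[di:di + sq.shape[0],
                                         dj:dj + sq.shape[1]]
    return energy

def initial_fusion(ir_low, vis_low, eps=1e-12):
    """Energy-weighted pre-fusion of the low-frequency sub-images,
    per formulas (9)-(11); eps guards against division by zero."""
    e_ir, e_vis = region_energy(ir_low), region_energy(vis_low)
    w1 = e_ir / (e_ir + e_vis + eps)             # formula (9)
    return w1 * ir_low + (1.0 - w1) * vis_low    # formulas (10), (11)
```

When one sub-image carries all the local energy, its coefficients dominate the transition image f, which matches the weighting intent of formula (9).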
the membership degree of each coefficient is expressed by a Gaussian membership function, and the defuzzified coefficient membership degrees of the low-frequency sub-images are finally fused; first, the membership function μ_{IR} and non-membership function ν_{IR} of the IR_L coefficients are defined as shown in formulas (12) and (13):

[formulas (12) and (13): Gaussian membership function μ_{IR} and non-membership function ν_{IR} of the infrared low-frequency coefficients; the original formula images are not reproduced]

where m_{IR} denotes the mean of the infrared low-frequency coefficients IR_L and σ_{IR} their standard deviation; k_1 and k_2 are Gaussian function adjustment parameters, set to 0.8 and 1.2, respectively; the hesitation function π_{IR} is determined from the membership function μ_{IR} and the non-membership function ν_{IR}; the intuitionistic fuzzy membership functions are defuzzified by a difference correction method to obtain the fuzzy set membership functions, as shown in formulas (14) and (15):

[formulas (14) and (15): defuzzified fuzzy set membership functions; the original formula images are not reproduced]
similarly, the intuitionistic fuzzy membership function μ_{VIS}, non-membership function ν_{VIS}, hesitation function π_{VIS}, and defuzzified membership function of VIS_L are defined according to formulas (12) to (15);
finally, the fused low-frequency sub-image is defined as shown in formula (16):

F_L(x,y) = \begin{cases} VIS_L(x,y), & \text{if the infrared membership degree exceeds the visible light membership degree} \\ f(x,y), & \text{otherwise} \end{cases}    (16)

when the infrared membership degree is larger than the visible light membership degree, the visible light low-frequency coefficient VIS_L(x, y) is selected as the fusion coefficient to prevent oversaturation of background areas; the membership degree at the point where the infrared target is located is usually smaller than that of the corresponding visible light point, so the coefficient of the pre-fused image f(x, y) is selected there as the fusion coefficient, preserving the infrared target; the low-frequency fusion coefficient F_L(x, y) is thus obtained.
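The selection rule of formula (16) can be sketched as below. Because the intuitionistic construction of formulas (12)-(15) is not reproduced in this text, a plain Gaussian membership of each coefficient's deviation from the sub-image mean is substituted as a stand-in, and all names are ours:

```python
import numpy as np

def gaussian_membership(c, k=0.8):
    """Stand-in membership map: a plain Gaussian of each coefficient's
    deviation from the mean (the patent's defuzzified intuitionistic
    membership of formulas (12)-(15) is not reproduced here)."""
    m, s = c.mean(), c.std() + 1e-12
    return np.exp(-((c - m) ** 2) / (2.0 * (k * s) ** 2))

def fuse_low(ir_low, vis_low, pre_fused):
    """Selection rule of formula (16): take VIS_L where the infrared
    membership exceeds the visible one (protects background areas from
    oversaturation); otherwise keep the pre-fused coefficient f,
    which preserves the infrared target."""
    mu_ir = gaussian_membership(ir_low)
    mu_vis = gaussian_membership(vis_low)
    return np.where(mu_ir > mu_vis, vis_low, pre_fused)
```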
5. The image fusion method based on regional energy and intuitionistic fuzzy sets according to claim 4, characterized in that:
the high-frequency sub-images are fused by a wavelet-transform-based fusion algorithm: the obtained high-frequency components are decomposed again by wavelet transform into an approximation layer and a detail layer; the Haar wavelet is selected as the wavelet basis used in the decomposition, and the number of decomposition levels is set to 1;
the approximation layer is fused by a weighted average rule, as shown in formula (17):

[formula (17): weighted average of the approximation layer coefficients a_{IR}(i, j) and a_{VIS}(i, j), giving the fused approximation layer F_a at each level; the original formula image is not reproduced]

the detail layer is fused by the maximum absolute value rule, as shown in formula (18):
Figure FDA0003081870060000048
in the formula (I), the compound is shown in the specification,
Figure FDA0003081870060000049
and
Figure FDA00030818700600000410
respectively a detail layer
Figure FDA00030818700600000411
The coefficient at point (i, j);
Figure FDA00030818700600000412
the image is fused by each level of detail layer, and the fused high-frequency sub-image is reconstructed by utilizing inverse wavelet transform
Figure FDA00030818700600000413
Finally, FLAnd
Figure FDA00030818700600000414
and obtaining a fused image F through NSST inverse transformation.
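The high-frequency rule of this claim can be sketched with a hand-rolled one-level 2-D Haar transform (a real implementation would typically use a wavelet library); equal weights are assumed for the approximation-layer average of formula (17), and all helper names are ours:

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: approximation + 3 detail bands."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0         # approximation layer
    lh = (a - b + c - d) / 2.0         # horizontal detail
    hl = (a + b - c - d) / 2.0         # vertical detail
    hh = (a - b - c + d) / 2.0         # diagonal detail
    return ll, (lh, hl, hh)

def ihaar2(ll, details):
    """Inverse of haar2 (perfect reconstruction)."""
    lh, hl, hh = details
    x = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

def fuse_high(h_ir, h_vis):
    """Average the approximation layers (formula (17), equal weights
    assumed) and keep the larger-magnitude coefficient in each detail
    band (formula (18)), then invert the transform."""
    ll_i, det_i = haar2(h_ir)
    ll_v, det_v = haar2(h_vis)
    ll_f = (ll_i + ll_v) / 2.0
    det_f = tuple(np.where(np.abs(di) >= np.abs(dv), di, dv)
                  for di, dv in zip(det_i, det_v))
    return ihaar2(ll_f, det_f)
```

Fusing an image with itself must return the image unchanged, which checks both the fusion rules and the transform round trip.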
6. An image fusion device based on regional energy and intuitionistic fuzzy sets, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the method steps of any one of claims 1 to 5.
CN202110568871.3A 2021-05-25 2021-05-25 Image fusion method and device based on regional energy and intuitionistic fuzzy set Active CN113298147B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110568871.3A CN113298147B (en) 2021-05-25 2021-05-25 Image fusion method and device based on regional energy and intuitionistic fuzzy set

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110568871.3A CN113298147B (en) 2021-05-25 2021-05-25 Image fusion method and device based on regional energy and intuitionistic fuzzy set

Publications (2)

Publication Number Publication Date
CN113298147A true CN113298147A (en) 2021-08-24
CN113298147B CN113298147B (en) 2022-10-25

Family

ID=77324588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110568871.3A Active CN113298147B (en) 2021-05-25 2021-05-25 Image fusion method and device based on regional energy and intuitionistic fuzzy set

Country Status (1)

Country Link
CN (1) CN113298147B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012607A (en) * 2022-01-27 2023-04-25 华南理工大学 Image weak texture feature extraction method and device, equipment and storage medium
CN117252794A (en) * 2023-09-25 2023-12-19 徐州医科大学 Multi-wavelength transmission image fusion device in frequency domain
CN117876321A (en) * 2024-01-10 2024-04-12 中国人民解放军91977部队 Image quality evaluation method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035189A (en) * 2018-07-17 2018-12-18 桂林电子科技大学 Infrared and weakly visible light image fusion method based on Cauchy's ambiguity function
CN111127380A (en) * 2019-12-26 2020-05-08 云南大学 Multi-focus image fusion method based on novel intuitionistic fuzzy similarity measurement technology
WO2020129950A1 (en) * 2018-12-21 2020-06-25 Sharp Kabushiki Kaisha Systems and methods for performing inter prediction in video coding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035189A (en) * 2018-07-17 2018-12-18 桂林电子科技大学 Infrared and weakly visible light image fusion method based on Cauchy's ambiguity function
WO2020129950A1 (en) * 2018-12-21 2020-06-25 Sharp Kabushiki Kaisha Systems and methods for performing inter prediction in video coding
CN111127380A (en) * 2019-12-26 2020-05-08 云南大学 Multi-focus image fusion method based on novel intuitionistic fuzzy similarity measurement technology

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ZHANG Linfa et al., "Medical image fusion based on intuitionistic fuzzy sets and brightness enhancement", Journal of Computer Applications *
LI Xiaojun et al., "Research on medical image edge fusion algorithms based on the non-subsampled shearlet transform", Optoelectronics · Laser *
WANG Zirui, "Application of intuitionistic fuzzy theory in image fusion", Inner Mongolia University *
WANG Huanqing, "Infrared and visible image fusion combining NSCT and neighborhood characteristics", Information & Communications *
XING Xiaoxue et al., "Infrared and Visible Image Fusion Based on nonlinear enhancement and NSST decomposition", RESEARCH SQUARE *
CHEN Zhen et al., "Medical image fusion algorithm based on the non-subsampled shearlet transform", Journal of Shenyang University of Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012607A (en) * 2022-01-27 2023-04-25 华南理工大学 Image weak texture feature extraction method and device, equipment and storage medium
CN116012607B (en) * 2022-01-27 2023-09-01 华南理工大学 Image weak texture feature extraction method and device, equipment and storage medium
CN117252794A (en) * 2023-09-25 2023-12-19 徐州医科大学 Multi-wavelength transmission image fusion device in frequency domain
CN117252794B (en) * 2023-09-25 2024-04-16 徐州医科大学 Multi-wavelength transmission image fusion device in frequency domain
CN117876321A (en) * 2024-01-10 2024-04-12 中国人民解放军91977部队 Image quality evaluation method and device

Also Published As

Publication number Publication date
CN113298147B (en) 2022-10-25

Similar Documents

Publication Publication Date Title
CN113298147B (en) Image fusion method and device based on regional energy and intuitionistic fuzzy set
Jin et al. A survey of infrared and visual image fusion methods
CN108399611B (en) Multi-focus image fusion method based on gradient regularization
CN112950518B (en) Image fusion method based on potential low-rank representation nested rolling guide image filtering
CN107169944B (en) Infrared and visible light image fusion method based on multi-scale contrast
CN103020933B (en) A kind of multisource image anastomosing method based on bionic visual mechanism
Tan et al. Remote sensing image fusion via boundary measured dual-channel PCNN in multi-scale morphological gradient domain
CN110163818A (en) A kind of low illumination level video image enhancement for maritime affairs unmanned plane
CN108921809B (en) Multispectral and panchromatic image fusion method based on spatial frequency under integral principle
CN113837974B (en) NSST domain power equipment infrared image enhancement method based on improved BEEPS filtering algorithm
Xiang et al. Visual attention and background subtraction with adaptive weight for hyperspectral anomaly detection
CN112669249A (en) Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning
Gao et al. Improving the performance of infrared and visible image fusion based on latent low-rank representation nested with rolling guided image filtering
CN106897999A (en) Apple image fusion method based on Scale invariant features transform
Xiao et al. Image Fusion
Han et al. Local sparse structure denoising for low-light-level image
CN114387195A (en) Infrared image and visible light image fusion method based on non-global pre-enhancement
CN113592729A (en) Infrared image enhancement method for electrical equipment based on NSCT domain
Xing et al. Infrared and visible image fusion based on nonlinear enhancement and NSST decomposition
CN111815550A (en) Infrared and visible light image fusion method based on gray level co-occurrence matrix
Nercessian et al. Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion
CN112734683B (en) Multi-scale SAR and infrared image fusion method based on target enhancement
Xiong et al. Multitask Sparse Representation Model Inspired Network for Hyperspectral Image Denoising
Yang et al. Infrared and visible image fusion based on QNSCT and Guided Filter
CN109584192B (en) Target feature enhancement method and device based on multispectral fusion and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant